
Apr 22, 2006

Comments

1.

You might want to back up and look at the more general problem, which is what happens when you allow hostile -- or at least untrusted -- code to execute on your machine. This is the problem facing SL's scripting language, but it is also deeply relevant to the web, Web 2.0-style applications, virtualization -- in the language/hardware sense, not the virtual-world sense -- and grid computing.

Talking about them in anthropomorphic terms just obscures the underlying problems. Whether the executed code is fancy-schmancy AIs remotely communicating with each other to exploit a physics bug or 12 opcodes using a buffer overrun doesn't really change the problem, other than *maybe* expanding the design space available to the attacker.

2.

Interesting that you cast this as a "mammalian" evolution, Nate, and thus maybe with some inherent teleological positive valence as an inevitable step forward. These developments seem more virus-like to me, with a number of external scripters acting in the place of undirected mutation and selection mechanisms, injecting primitive genetic/script material into the cells of SL. Cory's comment about "the more general problem which is what happens when you allow hostile -- or at least untrusted -- code to execute on your machine" echoes the biological virus-like action of user-scripted content. So maybe this will "evolve" into something less parasitic... or maybe it just kills the host.

But really, is this anything new? Or just the latest in a possibly accelerating train of events? From Randy Farmer's teleportation gun to this, the damaging scripts appear to be getting more sophisticated. Is this an isolated event (supposedly several "attacks" have happened over the past few days), or does the prevalence of script-griefing scale faster than the number of active SL users?

In my mind, the ultimate question is: does this sort of malicious grief-content have the potential to overrun the perceived benefit of the world (especially in the eyes of more casual users), or does it fade into the experiential background, to be accepted as a toxic but unavoidable aspect of daily life in SL -- much like traffic jams, infomercials, or rolling blackouts in many parts of the physical world? Is the "freedom" for scripters to damage others' experience of the world (or prevent them from experiencing it altogether) worth more in some final analysis than the inviolability of experience that people also desire (and are paying for)? Thus far other commercial game/worlds have decided in favor of security over unbridled freedom. Second Life is betting -- ultimately betting the world -- the other way.

3.

The actual risk here is that SL may eventually have to greatly rein in what is possible with its scripts to maintain some form of stability, thus undercutting one of the neatest reasons for being in SL. It only takes one griefer to disrupt the world for everyone for hours; people will put up with that a certain number of times, then go away.

As SL gets more popular, they'll see more and more of this. Griefers like an audience.

4.

Well, I've seen this happen before -- in Blaxxun-based VRML worlds like Cybertown: an avatar with a "bomb" that zaps anyone within the radius of the "virtual explosion" out of the system... Yet, guess what? The worlds are still there, and those "VR-bombers" have somehow vanished... maybe to SL?

Consequences? Clearly there is a need for better security...

Which makes me think of a little sci-fi scenario comic of mine: a VR workspace with Humpty-Dumpty avatars, used by a corporation to squeeze the maximum out of its suppressed "VR-workers":
"Poweru Haipu"

BTW, there is a working prototype of a similar system that I intend to roll out soon...

;) Alex

PS: My site is a construction ruin at the moment -- sorry for the mess if you visit.

5.

Are we calling them "griefjects" yet?

6.

Cory>
Talking about them in anthropomorphic terms just obscures the underlying problems. Whether the executed code is fancy-schmancy AIs remotely communicating with each other to exploit a physics bug or 12 opcodes using a buffer overrun doesn't really change the problem, other than *maybe* expanding the design space available to the attacker.

-----

In spite of my posted mammalian flourish, I almost agree with you. The one difference that seems worth highlighting (and, in fun, biologizing :) is the reason these sorts of problems seem to me more vexing (in the [future?] general case) than the far richer -- and eternal -- set of examples of, say, browser (e.g. Internet Explorer) holes. Namely, it comes down to the ability of these objects/scripts to legitimately exist in some social contexts and then cross some line and become illegitimate. That transition will likely be difficult to enforce in software.

Trashing the commons (e.g. rampant replication) is unambiguous (virus!). "Exploiting" the design space (per some EULA no-no), sure. But what about legitimate acts applied illegitimately? Tossing avatars -- okay in only a few special cases? Which ones?

The problem is magnified by these objects/scripts having to co-exist in a social setting: what about a device that is mostly helpful but can grief through some occasional excess?
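A toy sketch of why that line is so hard to draw in software (the context fields, thresholds, and names below are invented for illustration, not anything SL actually exposes): the same "push an avatar" act is fine in one setting and griefing in another, so any check has to reason about context and excess rather than about the act itself.

```python
# Hypothetical sketch: the same action can be legitimate or abusive
# depending on context, which is what makes policy enforcement hard.

from dataclasses import dataclass

@dataclass
class Context:
    parcel_allows_push: bool   # e.g. a combat or bumper-car area
    target_consented: bool     # target opted in (dance-partner script, etc.)
    pushes_last_minute: int    # how often this object has pushed recently

def push_is_legitimate(ctx: Context) -> bool:
    """Same opcode, different verdicts: legitimacy lives in the context."""
    if ctx.pushes_last_minute > 20:   # excess turns a helpful device into a griefer
        return False
    return ctx.parcel_allows_push or ctx.target_consented

if __name__ == "__main__":
    bumper_cars = Context(parcel_allows_push=True, target_consented=False, pushes_last_minute=3)
    orbiter = Context(parcel_allows_push=False, target_consented=False, pushes_last_minute=50)
    print(push_is_legitimate(bumper_cars))  # True  -- legitimate in this setting
    print(push_is_legitimate(orbiter))      # False -- same act, now griefing
```

And even that sketch dodges the hard part: who fills in the context fields, and whose definition of "excess" applies?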

7.

Right -- imagine security gates scanning every avatar for viruses. Fun!

8.

Obviously SL needs to fix the bug in its first law of thermodynamics... I don't think this problem stems from trusted vs untrusted code. It's purely an economic issue (energy or Linden dollars -- they're the same thing).
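A toy illustration of what "fixing the first law" could mean (the figures and the API here are invented, not SL's): if every physical or replicative act debits a finite energy/L$ budget, runaway behavior starves instead of taking the sim down.

```python
# Toy illustration (invented numbers/API): charge every act against a finite
# energy budget so that runaway scripts starve rather than crash the sim.

class EnergyAccount:
    def __init__(self, balance: float):
        self.balance = balance

    def spend(self, cost: float) -> bool:
        if cost > self.balance:
            return False          # action refused: conservation enforced
        self.balance -= cost
        return True

def rez_copy(account: EnergyAccount, cost_per_copy: float = 10.0) -> bool:
    """Self-replication is only as cheap as the energy behind it."""
    return account.spend(cost_per_copy)

if __name__ == "__main__":
    account = EnergyAccount(balance=100.0)
    copies = 0
    while rez_copy(account):      # a would-be grey-goo object
        copies += 1
    print(f"replication halted after {copies} copies")  # 10, not infinity
```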

9.

Hey... I like infomercials... leave Oxyclean and the Mad Australian out of this...

10.

Nate,
I'm also in the "almost agree" category -- hence my "maybe" to leave both of us some wiggle room. However, in terms of the shades of gray related to hostile activity, there is plenty of boring code (say, the smooth transitions between spyware, adware, malware, and beneficial code like crash reporters) that exhibits similar properties.

Jessica and others,
The solutions are better local controls, better permissions, and better tools -- not nerfing.
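For concreteness, a minimal sketch of what "local controls, not nerfing" might look like (the capability names and Parcel class are hypothetical, not an actual SL interface): capabilities stay in the world, but each land owner decides which ones foreign scripts may use on their parcel.

```python
# Hypothetical sketch of "local controls instead of nerfing": capabilities stay
# in the world, but each parcel owner decides which ones scripts may use there.

from enum import Flag, auto

class Capability(Flag):
    NONE = 0
    RUN_SCRIPTS = auto()
    REZ_OBJECTS = auto()
    PUSH_AVATARS = auto()

class Parcel:
    def __init__(self, name: str, allowed: Capability):
        self.name = name
        self.allowed = allowed

    def permits(self, cap: Capability) -> bool:
        return bool(self.allowed & cap)

if __name__ == "__main__":
    sandbox = Parcel("public sandbox", Capability.RUN_SCRIPTS | Capability.REZ_OBJECTS)
    mall = Parcel("shopping mall", Capability.NONE)
    print(sandbox.permits(Capability.REZ_OBJECTS))  # True: building still possible here
    print(mall.permits(Capability.PUSH_AVATARS))    # False: owner opted out locally
```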

11.

Whether it's real estate, horse armor, or swords-of-uberness, virtual assets in most worlds have not had to contend with the headaches of Second Life: objects there can come alive in unique (user-created) ways.

Volatile objects in SL are unique, whereas volatile objects in the standard MMOG are merely instances generated from an immutable factory template. This has deep ramifications from a control, permission, and monitoring perspective. In MMOGs these sentinel processes can be external to the primary design model because the range of behavior for every object of a type is constrained. Monitoring one such object is equivalent to monitoring all such objects. In a domain where all objects are mutable and unique, sentinel processes probably must be inherent in the basic world design.

In other words, I think Ken Fox has it exactly right. A prerequisite to monitoring and policing objects in such a world (without "nerfing" the basic design intent) is to impose realistic economic and physical constraints. Even a biological virus cannot replicate beyond its theoretical resource limits. The problem with these analogies is that the theoretical resource limits for digital viruses are the system itself, not the virtual world which happens to be running on the system.
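To make the "sentinel inherent in the world design" idea concrete, here is a rough sketch (the threshold and object IDs are invented): because every object is unique and mutable, the monitor watches behavior per object -- replication rate, resource draw -- rather than matching against a known template the way an MMOG or an anti-virus signature can.

```python
# Sketch of a built-in sentinel (thresholds invented): because every object is
# unique, it watches behavior per object rather than matching known templates.

from collections import defaultdict

class Sentinel:
    MAX_CHILDREN_PER_MINUTE = 5      # a physical/economic ceiling, not a signature

    def __init__(self):
        self.spawn_counts = defaultdict(int)
        self.quarantined = set()

    def record_spawn(self, object_id: str) -> None:
        self.spawn_counts[object_id] += 1
        if self.spawn_counts[object_id] > self.MAX_CHILDREN_PER_MINUTE:
            self.quarantined.add(object_id)   # throttle, freeze, or return to owner

    def is_quarantined(self, object_id: str) -> bool:
        return object_id in self.quarantined

if __name__ == "__main__":
    sentinel = Sentinel()
    for _ in range(8):                        # a self-replicating object
        sentinel.record_spawn("obj-1234")
    print(sentinel.is_quarantined("obj-1234"))  # True: caught by behavior, not template
```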

12.

randolfe>

The problem with these analogies is that the theoretical resource limits for digital viruses are the system itself, not the virtual world which happens to be running on the system.

Sounds like it might be worth distinguishing between two types of object/script problems:

1.) those which can penetrate the virtual world sandbox into the underlying system/platform;

2.) those which can operate within the confines of the virtual world sandbox but still do bad things (grief).

Potentially one can have cases spanning both categories, as may have happened in this case (per ken's point): e.g. unsanctioned thermodynamics + replication.

To extend what ken seems to suggest: type 1s need to be fixed/detected the traditional way (plug holes, detect via patterns a la anti-virals). Type 2s, given their more open-ended nature, need to be systematically constrained by the virtual world's own mechanisms (per randolfe: "realistic economic and physical constraints").

Fair?
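In code terms, that split might look something like this (purely illustrative; the names are mine, not anyone's actual design):

```python
# Hypothetical sketch of the two-track response suggested above: sandbox
# escapes are vulnerabilities to patch/detect; in-world griefing is bounded
# by the world's own economic and physical constraints.

from enum import Enum

class ThreatType(Enum):
    SANDBOX_ESCAPE = 1   # reaches the underlying system/platform
    IN_WORLD_GRIEF = 2   # stays inside the sandbox but abuses it

def respond(threat: ThreatType) -> str:
    if threat is ThreatType.SANDBOX_ESCAPE:
        return "patch the hole, detect via signatures/anomalies"
    return "enforce energy budgets, rate limits, parcel permissions"

if __name__ == "__main__":
    for threat in ThreatType:
        print(threat.name, "->", respond(threat))
```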

13.

"You might want to back up and look at the more general problem which is what happens when you allow hostile -- or at least untrusted -- code to execute on your machine."

Or we might want to back up and look at the more general problem which is what happens when you host all of your users and content on a single gridded world.
