
Jul 11, 2004

Comments

1.

Too bad the Whorf-Sapir hypothesis has been debunked about as completely as anything can be.

I mean there are some platitudes we can all agree to: staring at a screen all day will affect your perception of things, if only by giving you eye strain and making you light-headed when you stand up. It might change the way you think in that you are thinking about what you saw. It can keep information from you and it can bring you information that you wouldn't otherwise get, and there may be models that would bring you better information, or at least better-packaged information, but...

The Whorf-Sapir hypothesis was about how language shapes the fundamental nature of thought and reality (they blurred this distinction). I don't see any of that here. There is a big difference between an interface that denies you information and one that shapes the fundamental structure of reality. A little care please?

sign me Cranky

2.

Nate knows this, but for the benefit of others, Peter is a professor of linguistics when he's not doing the Herald and editing books on cyberculture.

http://www-personal.umich.edu/~ludlow/

So I'll let Nate defend himself on the utility of the Sapir-Whorf hypothesis (I always heard it as S-W, not W-S), but I think the big difference here would be that VWs are not merely semiotic conventions for conveying information about reality (like language), but, like art, they are information in and about themselves. The signifiers in VWs are the signified, more or less.

Of course, there are chat windows, fan faires, and all sorts of role-play/conversation elements that make VWs something more like cultural languages and less like art (or games).

3.

I'll start the outline of a set of defending thoughts... about to skip to Maine, will revisit/clarify tonight:

1. Yes, S-W hypothesis has been very controversial (and politicized) and in its extreme form: "debunked."

2. The weaker versions of the hypothesis appear to be enjoying resurgent interest recently.

3. Post above, I assume (2.)

4. I am intrigued, however, by the possibility of a stronger position for VWs (running counter to the trend above?) (Lev Manovich, http://a-s.clayton.edu/spence/courses/engl3901/manovich-short.pdf):

The Whorf-Sapir hypothesis is an extreme expression of the “non-transparency of the code” idea; usually it is formulated in a less extreme form. But in the case of the human-computer interface, applying a “strong” version of this idea makes sense. The interface shapes how the computer user conceives the computer itself. It also determines how users think of any media object accessed via a computer. Stripping different media of their original distinctions, the interface imposes its own logic on them. Finally, by organizing computer data in particular ways, the interface provides distinct models of the world. For instance, a hierarchical file system assumes that the world can be organized in a logical multi-level hierarchy. In contrast, a hypertext model of the World Wide Web models the world as a non-hierarchical system ruled by metonymy. In short, far from being a transparent window into the data inside a computer, the interface brings with it strong messages of its own.

I think greglas' point is what this stronger claim for VWs would rest on:

VWs are not merely semiotic conventions for conveying information about reality (like language), but, like art, they are information in and about themselves. The signifiers in VWs are the signified.

4.

Peter,
Not to wildly hijack this thread, but does the debunking of the strong version of S-W also debunk the idea that all thought is constrained by language, or the weaker idea that language influences thought? Given that the entire advertising industry is built around the idea that language (like images) can influence behavior (and presumably thought) through repetition, color me confused if that idea is debunked. Also, it seems to me that there have been a bazillion experiments on using language to influence the perception of other ideas. Finally, per the Dennett/Dawkins/Blackmore camp, there would appear to be pretty decent physiological arguments for strongly linking language and thought. In fact, didn't Chomsky also argue for weak linkages while arguing against S-W?

Anyway, leaving that aside, of course a game or digital world UI constrains the user's ability to make choices in the world. On the game side, that's often the game! And certainly 1st person versus 3rd person perspectives change how strongly you identify with the character you're playing (and how likely you'll be to purchase the action figure) but I'm with Peter on it being a little odd to imply that the interface limitations are linguistic or semiotic ones.

5.

Well, we can talk about a "weaker version," but then it isn't the Whorf-Sapir hypothesis -- it's just the obvious fact that linguistically encoded information affects what we think and the way we think. Whoever doubted this? I mean, I do use language when I teach philosophy, after all. It's very handy for that. I also understand it's hard to win arguments and change people's minds without language. (Never tried it, but that's what I hear.)

But now, trying to be less kvetchy and more helpful: I suppose a case could be made that computer interfaces are like languages in that they are conduits for the flow of information, but I seriously doubt that language is really fundamentally about communication, and I also don't think of languages as static objects; I think of languages as objects that multiple users construct on the fly in particular contexts of use (modulo some fixed skeleton of principles and rules). Computer interfaces are still a long way from being anything like this.

6.

Peter Ludlow >
I seriously doubt that language is fundamentally about communication

*blink blink* Eh, what!? Far be it from me to take issue with a professor's definition, but that SEEMS completely wrong.

languages as static objects; ...languages as objects that multiple users construct on the fly in particular contexts of use (modulo some fixed skeleton of principles and rules)

What linguistic training I've had agrees with this reasonably.

Insofar as computer interfaces are like this... the fastest example I can point you to is desktop design for Linux users with KDE and Gnome. Many individual programs provide skins, and allow their users to create and apply their own skins. I don't know much about the process of this activity, admittedly, but the only difference between what you said and what I know is that it's not "on the fly".

However, I know that the C# .NET framework, JavaBeans, and Visual Basic are essentially point-and-click programming, where you grab modules and put them in different places. Fast-forward a few years into the future and there's no reason to think this can't be applied to standard programs, never mind that the general public won't know how to do it, still.

7.

Here's a link that demonstrates the influence of language over thought:

F3ll0wsh1p of teh R1ng.

Richard

8.

Human languages are optimal for communication in pretty much the same way that noses and ears are optimal for keeping glasses on your head. Noses and ears can be used for that, but they aren't designed for that. Fortunately we are really good at using just about anything at our disposal for purposes of communication -- even language.

The thing about communication, though, is that between humans it typically involves lots of user modelling by all participants, assumptions about common knowledge, and on and on. There is some work on designing computer interfaces with this capacity, but what I've seen is pretty crude. Still, once you get it, integrate this with a modular programming environment like visual basic and you would really have something.

You know, Don Hopkins of the Sims transmogrifier (sp?) fame has a lot to say about this. He had ideas about an interface to TSO that would be a higher order programming language that would not only allow users to design custom content but serve as a kind of desktop to RL applications as well. I think the idea is that it would be integrated into the game itself, so that the game would be part game, part programming environment, and part desktop. That would certainly expand some minds.

9.

Oh, someone has to…


L0rd of teh Ringz0r
p4rt 4

TN has logged on as admin
TN: "Pwned!"

10.

Peter Ludlow> You know, Don Hopkins of the Sims transmogrifier (sp?) fame has a lot to say about this. He had ideas about an interface to TSO that would be a higher order programming language that would not only allow users to design custom content but serve as a kind of desktop to RL applications as well. I think the idea is that it would be integrated into the game itself, so that the game would be part game, part programming environment, and part desktop. That would certainly expand some minds.

I'll say! Sounds very intriguing - will be Googling this one later today.

11.

>> Human languages are optimal for communication in pretty much the same way that noses and ears are optimal for keeping glasses on your head.

Well, not "optimal" maybe, but plain ole "functional" can certainly have some value here.

And, in the context of Sapir-Whorf above, there may be a confusion between human language and human language capacity, which works as optimally as possible according to nature's maxim of 'if it ain't broke, have lots of babies with it.' Lots of little language users running around.

I do like the Sapir-Whorf framework for discussion because it is useful to consider the impact (or lack thereof) of virtual world "immersion."

A simple argument:

Given two systems of code, one in the human body and one in the digital medium, both of which can be expected to interact with and, over time, adapt to one another, which system is most likely to show the greatest amount of adaptive change over time? Obviously, it seems to me, the system which is more flexible and capable of change will adapt more quickly and more radically to its environment than will the system which is less flexible and less capable of change. Thus, the code associated with digital media form, particularly digital media *aesthetic* form, is more likely to adapt to the human interpretive code -- rather than vice versa.

Now perhaps you have to assume here that there IS a human interpretive code (the anti-tabula rasa position).

If you do, then the same argument might well be applied to social and cultural "codes." These, too, seem more obviously amenable to sudden change and variation than do the embedded biological codes governing human cognition. Thus, according to the same logic as above, social and cultural rules and systems are more likely to reflect adaptations to human cognitive properties than vice versa.

You can take the player out of the virtual world, but you can't take the virtual world out of the player.

12.

Perhaps I'm not reading closely enough, but I am seeing a largish gap in the center of this argument. Nathan's point here is that interface shapes human perception of virtual worlds, right? I see him comparing it with cinematic "language," so I am presuming he is making no implications about computers limiting possibility at a level any deeper than interface.

I figure y'all must be familiar with the much, much stronger position which computer programmers, especially those in the open source movement, employ routinely: Computer code itself is language which enables actions and brings "objects" and "spaces" into being (including the interface). And this language absolutely does limit what can and cannot be enacted in virtual space. My understanding is that object-oriented programming languages allow for greater speed and efficiency in processing code than their non-OO predecessors. Proprietary code limits users' ability to alter their virtual environments and characters, an issue which is going to be important for the study of MMORPGs as more and more legal issues on the subject arise.

The formative property of computer language is not an issue to take at interface value alone. (This has previously been addressed perhaps more knowledgeably by my friend Roger Bellin: http://alum.hampshire.edu/~rb97/archives/programminglanguagerelativ.html )

13.

Gus Andrews>And this language absolutely does limit what can and cannot be enacted in virtual space.

What the language is used to say limits what can and cannot be enacted in virtual space. In practice, pretty well all programming languages have the expressive power of a universal Turing machine (which is to say they can, in theory, be used to write anything that can possibly be programmed).

>My understanding is that object-oriented programming languages allow for greater speed and efficiency in processing code than their non-OO predecessors.

They don't allow for greater speed and efficiency in processing code. Hand-optimised assembler will always beat compiler-optimised object code for speed, if the programmer is given enough time. OO programming makes code easier to write and maintain for certain kinds of application.

I'd agree that different programming languages force programmers to think in different ways. For example, the language Pascal was deliberately designed with a type system to force programmers to think about types (the argument used at the time was along the lines of Newspeak in Orwell's 1984, i.e. if you don't have words for a concept then you can't express it: programmers who grew up with Pascal wouldn't have bad programming practices in their vocabulary, and therefore wouldn't make those mistakes). This was fine when Pascal was used as a teaching language, but bad when it was used afterwards, because competent programmers knew exactly what typing was already and didn't like being straitjacketed into following a typing system that wouldn't let them break its rules deliberately (as C does with casting).

If you use an OO programming language, you are obliged to follow the OO paradigm. This means that you're forced to organise your routines along OO-friendly lines, which may lead to faster execution than if you'd written them in a non-OO form and devoted the same amount of time to making your code run-time efficient. On the other hand, it could be unnecessary or inappropriate for the task, and you could perhaps have written something in less time that runs quicker if you'd stuck with a non-OO language.

Where I would agree that OO programming limits virtual worlds is the case where people associate virtual objects with OO objects in a language that has only single inheritance (e.g. Java). This makes some otherwise expressible concepts inexpressible without a design change.

Richard

14.

Gus> Proprietary code limits users' ability to alter their virtual environments and characters, an issue which is going to be important for the study of MMORPGs as more and more legal issues on the subject arise.

Using Visual Studio in no meaningful way restricts your ability to create compared to gcc. In the same way, an MMO that is built around allowing users to create their world doesn't have to be open source. Of far greater importance to ultimate innovation in the space is how free and open the collective knowledge of the residents is.

Richard> Where I would agree that OO programming limits virtual worlds is the case where people associate virtual objects with OO objects in a language that has only single inheritance (e.g. Java). This makes some otherwise expressible concepts inexpressible without a design change.

But this isn't the language limiting the world, it's the poor design decision. I would agree, however, if you said that OO may offer more opportunity to shoot yourself in the foot.

15.

Richard> "Where I would agree that OO programming limits virtual worlds is the case where people associate virtual objects with OO objects in a language that has only single inheritance (e.g. Java)."

Agreed. The biggest danger of OO is for people to apply it thinking in terms of real-world analogies. I blame the OO pundits who used such analogies to claim that OO was superior, despite the fact that those analogies are the cases most likely to cause problems.

To deal with Cory's objection: It is not just a matter of more ways to shoot yourself in the foot. It is also a matter of "If all I have is a hammer, everything looks like a nail." One's approach to a problem is strongly influenced by the tools one has available.

Another field to look at this in would be mathematics. Mathematics is all about defining your own language to solve a problem. The history of mathematics is filled with complex problems that became easy once the proper language was found. What is interesting is that people who *lacked* the proper language could *create* the language required. (Very smart people, admittedly :>)

- Brask Mumei

16.

Brask> One's approach to a problem is strongly influenced by the tools one has available.

Maybe, but I think one of the things that separates the good developers (whether programmers or artists or designers or whatever) is the ability to select the best tool for the best job. Or, to your second point, to write/borrow/buy a better tool as needed.

17.

Cory Ondrejka>But this isn't the language limiting the world, it's the poor design decision. I would agree, however, if you said that OO may offer more opportunity to shoot yourself in the foot.

You're right, it is a poor design decision. I think what we (and Brask) are moving towards here is saying that programming language choice is itself a design decision. An interesting question here is the extent to which this design decision (which is made by the lead programmer) should impact on the virtual world's design. A programmer shouldn't be able to say "you can't have multiple inheritance because Java doesn't have multiple inheritance", but should be able to say "you can't have a mind-reading device because the program has no access to people's minds" or "you can have the degree of AI you want, but it will cost this much and run this slowly".

Richard

18.

Brask's point about the language of mathematics reminded me... We ordinarily think of the calculus as a fixed language in which the development of classical physics took place, but work by Philip Kitcher and others has shown that it too was a very dynamic framework throughout the development of classical physics (at least on the continent), with changes to the calculus, and attempts at partial rigorization of it, usually coming about in response to specific needs in the physical sciences. What's the moral? Linguistic frameworks, logical frameworks, mathematical frameworks, and programming environments are all very plastic and can be modified in response to our needs. They only become restrictive if we let them -- e.g. by becoming slavish to a particular formalism. But in that case it's our fault for being narrow and uninventive -- it's not the fault of the formalism/framework.

19.

I recently co-authored a paper on the bias in the games industry towards using "scripting" vs. "rules" based languages in games AI (http://www.roaringshrimp.com/WS04-04NCombs.pdf).

On the one hand, a case could be made that imperative languages are more compatible with a "cinematic" vision of AI behavior than declarative languages (directed vs. emergent).

On the other hand, it feels complicated to me to extend the analogy to how widgets in the interface are programmed: e.g., whether a dockable menu bar is coded in C, C++, or Smalltalk. The programming language details seem less important than the end state: Microsoft GUI conventions.

I appreciate that conventions were likely designed under the influence (or constraint) of lower-level design/language assumptions, e.g. Microsoft and C++ (or .NET). I wonder where the Microsoft desktop might have ended up if, from the earliest days, there had been a corporate decision to program everything in Prolog? Perhaps exactly where they are now, perhaps not.

Relatedly, I am sort of intrigued by how WWW conventions seem to be creeping into MMOG interfaces, e.g. in-game browsers, virtualized websites in-game, etc.

In any case, picking and choosing the right set (and granularity) of codes (e.g. cultural) seems important but hard.

20.

Nathan> I am sort of intrigued by how WWW conventions seem to be creeping into MMOG interfaces, e.g. in-game browsers, virtualized websites in-game, etc.

Especially interesting in light of parallel cultural creep: emoticons, IRC customs, text messaging shorthand, etc., I think ^_^ Do the interfaces facilitate or encourage the transfer of particular conventions from other electronic media?
