
Nov 10, 2003

Comments

1.

Nick> "So we could implement non-zero-sum gaze very easily (where avatar A maintains eye contact with avatars B, C and D all at the same time). And we know from the psych literature that eye contact enhances learning, persuasion and attraction. So what if we tweaked eye contact in groups to be non-zero sum for 50% of the times when someone speaks? Could we enhance the overall trust in a world?"

My suspicion is that it might do the opposite. The avatar behavior discussed seems to have deception at its very heart. It seems a strange notion to me that deception could increase trust. Instead, I suspect that I'd begin to mistrust that people were being attentive because I'd know that the fact their avatar appeared to be looking at me meant nothing. In other words, I tend to think that eye contact enhances relationships because it is a reliable indication of attentiveness. If you take away that reliability, I'd question its usefulness.

As to the question of increasing the overall level of trust, I think the most important thing that designers of virtual worlds can do in this regard is to increase the incentives for being trustworthy. Anonymity, lack of character persistence, and lack of appropriate consequences all seem to conspire so that there is little reward for behaving in a manner that inspires trust.

Furthermore, it seems to me that convenience can often work at cross-purposes to building trust. Virtual worlds do not seem impervious to the insta-gratification trend in our culture, where relationships are often sacrificed in favor of accommodation. Players often react negatively to interdependence because it is inconvenient, but developers need to maintain a strong vision, recognizing that there is a cost to convenience as well. Having said that, I believe there are some design alternatives that will better minimize inconvenience while maximizing relationships.

--Phin

2.

Fascinating ideas, Nick.

I think the There and Toontown folks are already on top of this one (and the There folks have Amy Jo Kim helping them out). If I'm not mistaken, the eye-contact behavior you are talking about is something they are using in their chat circles.

Toontown makes all the avatars default happy and structures the "chat" interface so that avatars are essentially only capable of saying nice, socially positive things to each other. And they've grasped the cog psych insight that forced behavior influences mood: if you're instructed to smile and say nice things, you start to feel that way.
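To make the mechanism concrete, here is a minimal sketch of that kind of whitelist-only chat in Python; the phrase list and names are hypothetical illustrations, not Toontown's actual implementation:

```python
# Minimal sketch of a whitelist-only chat: players can only "say" phrases
# drawn from a pre-approved, socially positive menu. Phrases and names
# here are hypothetical, not Toontown's actual tables.

APPROVED_PHRASES = [
    "Hi there!",
    "Nice to meet you!",
    "Great job!",
    "Thanks for your help!",
    "See you later!",
]

def send_chat(player: str, phrase_id: int) -> str:
    """Players pick from a menu by index; free-form text simply cannot be sent."""
    if not 0 <= phrase_id < len(APPROVED_PHRASES):
        raise ValueError("unknown phrase: only menu choices can be sent")
    return f"{player}: {APPROVED_PHRASES[phrase_id]}"

print(send_chat("Flippy", 2))  # Flippy: Great job!
```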

Phin has a good point. What we're talking about here is arguably a kind of technological eugenics of social interaction. But I think the philosophical goodness or badness of that kind of p2p interface tinkering is up for grabs.

3.

Greg: "Toontown makes all the avatars default happy, and structures the "chat" interface so that avatars are essentially only capable of saying nice, socially positive, things to each other."

Very Disney. And too scary.

4.

I don't really see how trust can be engineered. While certain verbal and facial cues (eye contact, etc.) might engender trust, it is my understanding that these work not because certain cues are inherently trust-inducing, but because they help to identify group insiders, and hence persons with shared sets of values. So, when you toast someone in Serbia you look them straight in the eye; when you do it in Minnesota you absolutely don't (and you distrust people who violate these behavioral norms). It might be more useful to allow users to develop little macros that govern how their avatars react in social settings; there is even a kind of grammar of facial features that one could deploy. Facial and verbal feedback could then serve as secret handshakes for the identification of persons who were sufficiently in-culture to be trusted. Here, of course, one approaches the fine line between identifying and trusting those who are in-culture on the one hand and virtual bigotry on the other.
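As a rough sketch of what such user-authored macros might look like (the class, triggers, and cue names are all made up for illustration):

```python
# Hypothetical sketch of user-authored social macros: each player binds a
# trigger (a social event) to a short script of expressive cues, so that
# in-culture players can recognize each other's "secret handshakes".

class SocialMacros:
    def __init__(self) -> None:
        self._rules: dict[str, list[str]] = {}

    def bind(self, trigger: str, cues: list[str]) -> None:
        """Attach a sequence of expressive cues to a social trigger."""
        self._rules[trigger] = cues

    def react(self, trigger: str) -> list[str]:
        """Return the cues this avatar performs when the trigger fires."""
        return self._rules.get(trigger, [])

# The Serbia/Minnesota toast example above, as two different macro sets:
serbian = SocialMacros()
serbian.bind("toast", ["raise_glass", "meet_eyes"])
minnesotan = SocialMacros()
minnesotan.bind("toast", ["raise_glass", "avert_eyes"])
print(serbian.react("toast"), minnesotan.react("toast"))
```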

5.

Well, in a way it's not deception, because it's not driven by the player; it's part of the mechanics of interaction. In SWG, for example, your avatar nods or shakes its head if you include certain words in your chat message.
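A minimal sketch of that kind of keyword-triggered emote, with an illustrative trigger table (not SWG's actual keyword list):

```python
# Sketch of keyword-triggered emotes in the spirit of SWG's chat behavior.
# The trigger table is illustrative, not SWG's actual implementation.

EMOTE_TRIGGERS = {
    "yes": "nod",
    "yeah": "nod",
    "no": "shake_head",
    "nope": "shake_head",
}

def emote_for_message(message: str) -> str | None:
    """Scan a chat message and return the first matching emote, if any."""
    for word in message.lower().split():
        emote = EMOTE_TRIGGERS.get(word.strip(".,!?"))
        if emote:
            return emote
    return None

print(emote_for_message("No way, I was never there."))  # shake_head
```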

The other thing is that non-verbal manipulations are really hard to detect. If you tell people it's happening, then yes, you'll lower trust. But as with real-world stealth marketing, the whole point is that they don't know it's going on.

And re: Peter's point, yes, there are clearly local cues for trust, but given that our current virtual worlds are fairly homogeneous (mostly Americans and Western Europeans), do we really need to worry about Serbians interacting with Minnesotans on a widespread level?

6.

So, the players wouldn't manage it? Hmmm, it would be interesting to see if you could get the AI close enough that the avatar acted in a socially acceptable manner.

For instance, it will seem strange if your avatar continues to stare at me while talking to someone else. It would also look unusual to have your avatar automatically turn to look at someone else when you talk to them, but then return your gaze to me. I'd begin to wonder why you keep staring at me in between each sentence in a conversation you are obviously holding with someone else. Are you talking about me behind my back? Making fun of me?

I'm not saying it isn't possible for an AI to do this, but I do think the socially appropriate nature of even something as simple as eye contact will be complex enough to present quite a challenge.

On the other hand, if the players manage this, then it will be rather difficult to keep them in the dark about what is going on.

But it is certainly a fascinating subject to consider.

--Phin

7.

Well, in a normal group situation, your "group say" messages are directed at everyone most of the time. If you wanted to say something to someone specifically, you would use "tell". (This is different in pure social worlds like "There", but is fairly accurate for the EQ/SWG-type worlds.)

So a simple algorithm would make it such that when an avatar in your group speaks, you would see that avatar maintaining eye contact with your avatar 50-75% of the time. It wouldn't always happen, but it would happen much more often than it naturally would.
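In code, that simple algorithm might look like the sketch below; the probability constant and function name are assumptions. The key point is that each listener's client rolls independently, which is what makes the gaze non-zero-sum: avatar A can appear to hold eye contact with B, C, and D at the same time.

```python
import random

# Sketch of the algorithm above: whenever a group member speaks, each
# listener's client independently decides whether the speaker appears to
# be making eye contact with *that* listener's avatar.

GAZE_PROBABILITY = 0.6  # somewhere in the 50-75% range suggested above

def render_speaker_gaze(speaker: str, listeners: list[str]) -> dict[str, bool]:
    """Per-listener view: does the speaker seem to be looking at me?"""
    return {listener: random.random() < GAZE_PROBABILITY
            for listener in listeners}

print(render_speaker_gaze("avatar_A", ["avatar_B", "avatar_C", "avatar_D"]))
```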

Phin, you're right that this is much tougher in a social world like "There", where many of the exchanges in a chat circle are person-directed, but most public exchanges in groups in the EQ/SWG-type worlds are group-directed, so you won't run into that problem.

In a way, SWG has finessed this. When you target an avatar, your gaze shifts to them. So if you heal/trade/wave at an avatar, you will always have eye contact with them already (since you need to target them before performing those actions). This is not the case with the older MMORPGs where eye contact doesn't automatically occur when you interact with another avatar. So in a way, manipulated eye gaze is already occurring in the newer worlds.

8.

This is a fascinating question, and just one example of why I think research on the production of virtual worlds is as important as research on the social interaction within them.

I wanted to jump in on two connected points, specifically. First, much of the discussion here concerns how aware we imagine participants are of the mechanics of gaze in a VW. Studies of social performance (that is, looking at everyday actions through the metaphor of theater) and theories of social practice (conceptualizing social action as having the emergent, improvised qualities of an ongoing game) quickly came to recognize that social actors are in a way quite aware of the edges of propriety and convention. This awareness may not be articulated, but through the practice of interaction individuals get a sense of what they can get away with, and take this into account in judging the actions of others.

One way to think of it is that any instance of directed gaze makes a claim about trust that would then have to be redeemed (or not redeemed) in the future actions of that player vis-a-vis the gazee. If a game is manipulating the ability to provide this gaze, then the future actions of its players would quickly, I think, reveal this behavior as not warranting trust equivalent to what it warrants in the flesh.

So my guess is that participants in general account for (and use) these design features of an environment, even those that are implicit. This does not mean that they therefore overcome their effects on social interaction. It is just that unintended consequences are quite likely in this dynamic relationship between participants' actions and the design of their world.

This connects to the issue of trust, of course, through the question of time and the future. I think it's important to note how central the notion of the future is to the construction of trust. As Ted and others who study currency know well, currency is a human institution peculiarly reliant on trust, and that is what makes the study of it in VWs a significant index (in my view) of their emergence as real social spaces.

9.

Tom said:

> "If a game is manipulating the ability to provide this gaze, then the future actions of its players would quickly, I think, reveal this behavior as not warranting trust equivalent to what it warrants in the flesh."

That seems right to me too, although I don't know about "quickly." I really don't think the average player would sufficiently "learn" the new rules of VW interaction to *fully* correct the mapping between RL and VW social cues. Certainly some of this will take place, but if we've learned anything, it is that social interactions in the two spheres (VW and RL) are hard to cleanly delineate within the minds of participants.

10.

I don't disagree. This is what I tried (in cumbersome terms) to point to when I noted that participants won't be able to overcome the effects of these design features (in your terms, correct for them). But to me it's just as important to recognize that the designers are not likely to create the effects they might intend in implementing them. What marks a social world as real is just this kind of open-endedness, with results that neither the participants nor even the producers can anticipate.

11.

"If a game is manipulating the ability to provide this gaze, then the future actions of its players would quickly, I think, reveal this behavior as not warranting trust equivalent to what it warrants in the flesh. "

Why can't we say this manipulation leads to higher expectations on the gazee *as well as* higher commitment from the gazer? In "There", the automatic chat system that shifts the camera as people start talking and tracks the chat balloons totally blew me away. I realized what was going on pretty quickly, but the feeling of having a "close" conversation, even with a group of ten strangers, seemed a lot warmer than in TSO, SWG, UO, Comic Chat, or any other system. There was a feeling of instant closeness I haven't seen anywhere else.

Paul,

"Anonymity, lack of character persistence, and lack of appropriate consequences all seem to conspire so that there is little reward for behaving in a manner that inspires trust."

Well said.

12.

NICK YEE WROTE:
> So we could implement non-zero-sum gaze very easily (where avatar A maintains eye contact with avatars B, C and D all at the same time).

That, in my opinion, is a highly interesting suggestion for working on what I would call the emotional level of behaviour-shaping. Also, although the comment above about the trustworthiness of signals is important, some responses might well be so hard-wired into our brains that signals might work despite our intellectual scepticism.

If we were to build a model here, I'd rank the emotional level up there with the incentive level (the 'mechanics' in Nick's terms) and the aesthetic level, where style, metaphor, etc. are used to frame the interaction in desired ways (playing on people's conventionalism).

Somewhere above these levels (but overlapping with the incentive level in particular), I think we need a 'structural' level. At this level the designer decides what is possible and what isn't and also how things (objects or whatever) are placed in relation to each other. By careful attention to this level players/avatars may be inspired to interact in certain contexts and with a certain intensity. And it is of course a well-known fact that communication/interaction itself is quite conducive to trust.

Very thought-provoking post for sure, Nick.

- Jonas

13.

Re: Multi-gazing: A very intriguing idea.

However, I can't help but think that such an action would become a disservice to the players. Automatically enforcing certain social actions will, IMO, lead to those social actions becoming meaningless. One often jokes that greetings are formalized to the point of mere mechanics:
"Hello!"
"Hi, how are you doing?"
"Fine, and you?"
"Fine, thank you"
... and the two people wander off again.

The information content of this transaction appears to be zero. Surely we could simplify the process? Or automate it?

I think, however, that human social awareness is a more subtle beast than that. Part and parcel of the "weighing" of the degree of social binding created by a transaction is one's estimation of how much cost was paid on the other side. After all, the real message is: "I am willing to expend energy to reaffirm our relationship." If no energy is expended, no reaffirmation occurs.

I am not so convinced our social bonding cues are as hardwired as we may think. In any case, any cues that come from a two-dimensional representation of a three-dimensional scene are, by definition, at a rather abstract level. I'd also consider the number of cues that evolve in even the simplest interfaces. Look at behaviour on IRC, in email, over the phone, or in these MMORPGs, and you will find that you can often gain an instinctive feel for the other person just by watching.

Automated systems such as SWG's emotes-on-text, cleaning of speech, and automatic eye contact would create a friendlier and more intimate-seeming environment. But then how do you separate the wolves from the sheep? I'm somewhat interested in playing Toontown in order to see how people build trust in that game.

In UO, at least in the "dread days", every encounter was a trust-building exercise. I was very thankful that the system didn't auto-face people who were speaking to me. That it didn't auto-reply "Hi!" to my "Hail!". Especially that it let people pick their own names. Two blues who ignore your greeting in a dungeon? Time to get "recall" ready: they are likely on IRC, planning an attack.

The message behind the enforced-friendly faces is that you can trust everyone. We all know that to be false. I'd rather give people lots of rope so they hang themselves before anything really serious is on the line. Of course, if physically inducing people to smile truly leads them to be nice people, perhaps in such a world you *can* trust everyone?

- Brask Mumei

14.

Mike Steele & I have been conducting research of exactly the sort of thing Nick describes: Thank Yous, YWs and other bits of social "glue" are captured, logged and analyzed. Similarly, secure trade transactions are analyzed.

Couple this sort of analysis with physical proximity of avatars, direct tells, etc., and you can create social diagrams of a population. Once you understand how the network is connected, you can convert the social glue into cement, helping to retain players and create stronger community ties, as well as tracing pathologies in a population. These ties are absolutely crucial to the ongoing success of a persistent world and show you where the "trust" is.
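A hedged sketch of that kind of pipeline: log each bit of social glue as a weighted edge and accumulate a social graph of the population. The event kinds and weights below are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Sketch of logging social "glue" as a weighted graph. A real pipeline
# would also fold in avatar proximity and trade logs, as described above.

GLUE_WEIGHTS = {"thanks": 1, "yw": 1, "tell": 2, "secure_trade": 3}

social_graph: defaultdict = defaultdict(int)  # (player, player) -> tie strength

def log_event(actor: str, target: str, kind: str) -> None:
    """Accumulate an undirected edge weight for each logged exchange."""
    edge = tuple(sorted((actor, target)))
    social_graph[edge] += GLUE_WEIGHTS.get(kind, 0)

log_event("Ayla", "Brask", "thanks")
log_event("Brask", "Ayla", "yw")
log_event("Ayla", "Brask", "secure_trade")
print(dict(social_graph))  # {('Ayla', 'Brask'): 5} -- strong ties show where the "trust" is
```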

I highly recommend Peter Monge's book on Network Communication Theory. It provides not only a great deal of background on the psychology of social interaction, but also a common vocabulary we can use to discuss these sorts of issues.
