
Apr 26, 2008


Listed below are links to weblogs that reference The Sexual Implications of Going Hands Free in Second Life:

Comments

1.

The term "hands-free" is kind of misleading here. With 3D cameras we don't use keyboards and mice, and we don't just use our hands: we use our whole bodies. I'm not sure that means easier masturbation. I'd guess the default setting (except for a few common actions, like walking and flying) will be that the avatar copies the human's movements. So you can imagine the scene on the grid... Sure, chances are that very soon there will be overriders that react to a human's masturbation movements and turn them into animations that move avatars into interaction rather than solo fun.

But that is not the only problem. When I heard of 3D cameras, I had to ask some questions (http://metaverse.acidzen.org/2008/3d-cameras-for-good-and-for-bad). It's hard to say whether an avatar that follows our every move is a convenient way to move through the metaverse. While we communicate in a virtual environment, we don't want an exact replica of our every move; we filter the things we want our avatars to do. Otherwise, avatars would move their hands strangely whenever we answered a phone call or picked up a pen to write something down. On the other hand, we don't want to turn our heads away from the monitor just to make the avatar look aside.

One thing that is very important for this particular question is whether we are really willing to make all the moves our avies do. I am not the only one who would feel more than silly (and certainly lose the cybersex mood entirely) if I had to be alone in a room, doing with my body all the things my avatar does.

It seems that the interface, i.e. the software that translates human movements into the avatar's, will be the crucial part of the new technology. If the translation turns out to be less disruptive than keyboard and mouse (which we have been using for decades and are very familiar with), then we'll have new ways to have fun. Otherwise, it will be just a silly experiment. Attempting a one-to-one translation is a sure way to make a nice piece of hardware completely unusable. Most probably, the success of the interface will depend on its customizability and its openness to tweaking and experimentation.

2.

I think we have to come up with another version of "The Uncanny Valley" for this. Maybe... the Uncanny Nasty, or the Unsexy Valley, or... or sumfin... I'll work on a better term.

Seems to me that we have two ends of a spectrum when it comes to cybersex:

1. Text only. We type "I do this to you, to myself, to the bedpost, the nerf ball, the aquarium, etc."

2. Full-on mind-to-mind connectivity. The Matrix, basically. Or at least something where when I think, "Slap and tickle," something happens to me visually and aurally that resembles "slap and tickle" without any other mediative device.

In between, we have the ascent towards more realistic cyber, and then the descent into the Unsexy Valley.

For example, we know that text is very sexy for some people, and that, for some, working with an animated character of really bad quality, poor execution, etc. might be less sexy than straight text. So the question becomes, what additions to the experience increase sexiness (which I'm using as a stand-in for "that which provides better cybersex"), vs. those that push you into the valley.

For example... if a device monitors where I turn my head in order to track where I look in a VW... that's an unadulterated (ahem) bad thing... unless you have surround video first. If I want to look at something behind my avie, and I look behind me in RL... well, I'm not looking at the screen any more. OK, we think... maybe it just tracks slight turns or twists of the head to engage a rotation of the view. If that's easier... OK. But for precise control, a mouse might still work better.

Same for body position. Moving forward, back, etc. may track well between RL and SL. But bending over vs. lying prone vs. lying supine vs. squatting? Yeesh.

I can see how certain improvements in UI might help; voice commands, embedded macros, etc. And how a hands-free system of overall control might help for some actions. I'm sure that there are interesting applications for teledildonics here, too, in how they interact with the environment.

But as long as the representation of what's happening on the screen isn't a 1-to-1 with what both people (all 3? and the zebra?) are doing in RL, you've got a disconnect that's potentially valley-headed.

I mean, I can type, "I breathe, gently, into your ear," pretty easily. That conveys a decent load of sensory information that would be very difficult to model with accessories.

Brain-to-brain sounds great. Some of the inbetween steps seem, well... less than inviting.

3.

Ohhhh lordy. Thought it'd been too long since there'd been a silly sex post on TN.

Seriously. You skip straight over the fact that we're near consumer-level biometric sensors on the gaming market (like Andy pointed out: Emotiv should be out any day now, and I maintain an open-source driver set for the Journey to Wild Divine Lightstone), and go for gesticulation on the full-body level, something that hasn't been proven useful in many years of use?

Go to SIGGRAPH, watch someone with no dance/movement experience get in the mocap suit and screw around. It's not a pretty sight. Not to mention, there's no haptic feedback.

Go to an arcade with one of the video capture fighting game setups. See if anyone looks even /remotely/ comfortable using one.

Don't look quite so far into the future, lest we get caught up in just blabbering sci-fi. If I hear one more "OMG MINORITY REPORT INTERFACE" freakout, I'm gonna bludgeon someone with an IBM Model M. Hook to the internals, not the externals. Those are much more fluid (*rimshot*), much more stable, and much more useful than picking up outward physical actuations. That's what'll be important in terms of intimate interaction in the next couple of years.

Not that I know anything about that. :)

4.

As someone with lots of experience on the technical end, qDot, it's interesting to hear your input. Keep in mind, though, that not everyone has as much day-to-day interaction with the tech surrounding these types of interfaces as you do, which is why it's important to open up this dialogue so people who approach the issue from different angles can discuss the various pros, cons, and realities of potential advancements.

Even if we never end up with one-to-one motion capture in a world like Second Life, it's interesting to think about how any change of interface impacts the way we have sex online. We translate sex slightly differently to each system we use, given the tools we have available. This system certainly has interesting (if apparently limited) potential.

5.

I've often wished I could walk and fly in Second Life with a better interface than the cursor keys. Not so much for sex, as for everyday walking around in shopping malls (etc.) This should be much simpler to do than full motion capture of gestures and facial expressions, because all the client program needs to extract from sensors is {forward, back, turn left, turn right, up, down}.
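The handful of commands this comment describes could be extracted with something as simple as a dead-zone threshold on body-tracking values. A minimal sketch, assuming hypothetical normalized sensor readings; the function name and thresholds are made up for illustration:

```python
# A minimal sketch of the idea above: the client doesn't need full motion
# capture, only a handful of discrete movement commands. The sensor values
# and thresholds here are hypothetical.

DEAD_ZONE = 0.15  # ignore small, unintentional body movements

def movement_commands(lean_forward, lean_side, rise):
    """Map raw body-tracking values (-1.0..1.0) to SL-style movement keys."""
    commands = set()
    if lean_forward > DEAD_ZONE:
        commands.add("forward")
    elif lean_forward < -DEAD_ZONE:
        commands.add("back")
    if lean_side > DEAD_ZONE:
        commands.add("turn_right")
    elif lean_side < -DEAD_ZONE:
        commands.add("turn_left")
    if rise > DEAD_ZONE:
        commands.add("up")
    elif rise < -DEAD_ZONE:
        commands.add("down")
    return commands

print(movement_commands(0.4, -0.2, 0.0))  # leaning forward and to the left
```

The dead zone matters more than the mapping itself: without it, every fidget in RL would send your avatar wandering off through the shopping mall.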

6.

I'm sure people will figure out what works for them, as far as cybersex goes.

I'm more curious whether this could tap into that "move and have fun" thing that everyone likes the Nintendo Wii for. You could have exercise or yoga classes in Second Life, networked and connected while you do your star jumps and whatnot in RL in front of your HDTV.

Seeing how something like the Wii deals with what is literally translated movement and what is automated (you swing your bat in baseball, but you don't run because that doesn't make sense) might lead to finding a good balance for all activities, (un)sexy or otherwise.

7.
I'm more curious whether this could tap into that "move and have fun" thing that everyone likes the Nintendo Wii for.

Lots of people like "dancing" in Second Life, which at the moment consists of animating your avatar with a pregenerated animation that isn't even synched to the music. The slightly more sophisticated version gives the user a choice of dance moves, via a pop-up menu or a head-up display. It would be interesting to try using actual body moves to trigger avatar animations. It probably wouldn't need to be full motion capture, but could be more like Dance Dance Revolution, where you just need to step on the right part of the mat at the right time (or make the right motion with a Wii remote).

Lag is a serious issue: even if you're dancing in time to the music as it's played locally through your own loudspeakers, by the time the motion events have been transmitted to the server and then on to other clients, other users will see you as being out of time. If one person is dancing for an audience, you can potentially solve the problem by having the audience hear a delayed version of the music. For more collaborative dancing, you could take the same approach that's been used for collaborative music making, which is to think one bar ahead. (I decide now what I want the others to see or hear in the next bar.)
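The "one bar ahead" trick above can be sketched in a few lines: instead of sending "I am doing move X now", each client tags the move for the next bar, so the network delay is absorbed by the lookahead. The tempo and delay figures are illustrative, not from the original comment:

```python
# Sketch of the one-bar-lookahead lag workaround: moves are scheduled for
# the next bar rather than played immediately, hiding network delay.
# BPM and delay values are purely illustrative.

BPM = 120
BEATS_PER_BAR = 4
SECONDS_PER_BAR = 60.0 / BPM * BEATS_PER_BAR  # 2.0 s at 120 BPM

def schedule_move(current_bar, network_delay_s):
    """Tag a dance move for the following bar, and report whether the
    one-bar lookahead is long enough to cover the expected delay."""
    target_bar = current_bar + 1
    return target_bar, network_delay_s <= SECONDS_PER_BAR

print(schedule_move(16, 0.35))
```

At typical tempos a bar lasts a couple of seconds, comfortably more than most round-trip latencies, which is why the scheme works for music collaboration.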

One time, I went to a yoga class in Second Life. This really, really doesn't work. Firstly, it needs to be your own body, not your avatar, that poses; and secondly, the instructor needs to see in detail what you're doing to make sure you're doing it right.

8.

Might be old news, and the wrong place for this, but:
Why not use TrackIR or similar? It's been used with success in gaming (mostly sims) for years. Maybe it's supported in SL already (I haven't checked; I'd guess it's a relatively simple matter to write drivers that map the axes to mouse movement/key input, or just use GlovePIE). It gives you six axes of control (yaw, pitch, roll, X, Y, Z); that's enough to both move your avatar and look around at the same time. Or override animations for seedier use :)
I would assume that an IR-reflector-based solution is much less computationally intensive than a 3D camera, and at the moment has superior resolution. And it's readily available today for experimenting with this type of control input and its use in VWs. The one big advantage of a camera, of course, is that you don't need reflectors on you (especially for facial expressions this would be an advantage; see the "face masks" used for Beowulf).
To the comments about turning your head so you don't see your screen: the movement is amplified, so a 5-degree head turn translates to a 90-degree view change, as an example.
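The amplification idea is simple enough to sketch: a small physical head turn maps to a much larger in-world rotation, clamped so the view can't wrap around. The 18x gain just reproduces the 5-degree-to-90-degree example; real head-tracking software uses configurable curves rather than a single linear gain:

```python
# Sketch of amplified head tracking: a small real head turn becomes a
# large camera rotation. The 18x gain reproduces the 5-to-90-degree
# example from the comment; clamping keeps the view in range.

GAIN = 90.0 / 5.0  # 18x amplification

def camera_yaw(head_yaw_degrees, max_yaw=180.0):
    """Map a physical head yaw to an amplified, clamped camera yaw."""
    yaw = head_yaw_degrees * GAIN
    return max(-max_yaw, min(max_yaw, yaw))

print(camera_yaw(5.0))   # 90.0
print(camera_yaw(-2.0))  # -36.0
```

A linear gain like this is twitchy near the center, which is exactly why shipping trackers let users shape the response curve and add a small dead zone.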
I must say I agree with Ace and Susan: better to start looking into using WiiMotes, TrackIR, gloves, and other devices to find out what works for what purpose than to go overboard with full 3D-camera mocap.
Someone has to experiment with the big toys, I guess, but a basement hacker with a limited budget and GlovePIE or similar can start experimenting at home today :)

9.

I usually run Second Life on a tablet PC with wireless Ethernet (it can even be taken to bed with you, should you wish to do so). The stylus is pressure-sensitive, so if I'm running a drawing package rather than SL I can draw a thicker line by pressing harder. Unfortunately, I don't think the SL client passes the pressure information on to the server. If it did, software like XCite could register the softness or hardness of a touch, as well as its location on the avatar's body. Might be useful...

10.

For the Terra Nova Agony Column:

"Dear Bonnie,

Sometimes I'm chatting to a guy online, and I think we're getting along really well, but he logs out just as we're starting to get intimate. What causes this?

Yours,
Frustrated of Whitby."

Being (slightly) more serious: some sessions end because of genuine software crashes (Second Life is particularly bad). Other times it may be that the guy has got bored and wants to go talk to someone else more interesting. But I can't help wondering if sometimes it's because he's got so excited he needs to go and relieve himself (this assumes that he can't type at the same time) or because he's just rolled over and fallen asleep (this assumes he was typing at the same time). It's hard to tell from just a chat log what the participants were really doing, though the increasing typing errors towards the end can be a clue. Hands-free will fix the "can't type at the same time" problem, but it won't do much about guys rolling over and falling asleep - and indeed might make that problem more common...

11.

Ugh. Creepy. :(
