
Aug 20, 2008


Comments

1.

I still have a hard time believing that's not faked. Reminds me of Killzone...

2.

That's not "animation," it's really fantastic motion capture. Which means that there's a long way to go to get it to happen seamlessly in real time.

3.

According to the article, the team that produced this "started with a video of an employee talking" and "then recreated the gestures, movement by movement, in a model", operating "at the level of individual pixels" in the video.

Given that this was made from a video of a person, while what they've done is an accomplishment, it's not clear to me how original this really is -- the re-mapping of pixel-level variations seems to be more along the lines of colorizing black-and-white film than truly creating an avatar that appears indistinguishable from a human.

Aspects of human interaction include eye gaze and movement, head tilt, head motion, blink rate, micro-expressions, gestures, and the like -- in addition to what's being said and how it's being said. All of these add up to a great deal going on at any given time. Further, these aspects are all inter-related and are driven by a multitude of factors from unconscious emotions to cultural values, none of which, I'm confident, are actually being created in this video.

So, interesting video, but it doesn't look like we've seen an avatar that passes a visual Turing Test just yet.

4.

Not sure that's past the Uncanny Valley yet either. I couldn't stop staring at the lip/tongue movement, which, while better than most video games, still has a long way to go.

5.

Is it possible to objectify a pretend woman?

Because damn... Emily is hot. That's frighteningly realistic.

You can sorta see it after the second viewing once you know where the mesh is (her skin tone on the front of her face isn't *quite* right), but it's pretty convincing regardless.

I'm actually fascinated by the hand animations. Either they have that absolutely *nailed* down, or those hands are real.

6.

Isn't anybody else sick of this ridiculous cultural obsession with photorealistic CG?

If you want photorealism, why not just... you know... look around. Take a photo, maybe. Besides, like the 'realistic' androids out there, Emily's just a puppet. It will always take an artist, be it actor or animator (who are basically actors, when it comes down to it), to bring a virtual corpse to life.

7.

Which costs more: capturing and replaying the literal range of motion the game requires, or capturing, deconstructing, and reconstructing the 'base' motions you need to form the full range of motion your game requires?

I see this as lowering the barriers to becoming an amateur director eventually, but I'm not sure what else it does.

8.

The eyes don't saccade. And I agree above that this is fantastic motion capture more than fantastic animation. But it's impressive otherwise.

9.

Something about this is still highly unpleasant: a failed human.

10.

The woman they motion-captured also seems like a bad actor...

But really, I don't get what all the fuss is about.

I found Gollum (from the recent LOTR movies) more believable and life-like, and it's the same technology, right?

11.

Wow, a really, really expensive video. Why not just use a video, hey?

12.

here's a video of how it was made:

http://www.youtube.com/watch?v=SwAV2fXoy6E&NR=1

Basically they are applying motion capture to facial animation with improved ease-of-use.

That ease-of-use doesn't seem like a lot, but from a virtual-world-building point of view, it's very easy to see the potential: turning on your webcam so that your avatar's facial animation would match your own? How is that for augmented voice chat?

Well, the answer for now, I suppose, is bandwidth.

But if they start analyzing statistical data and correlations between animation pieces, then eventually they will be able to form algorithms based on variable inter-dependencies, not only reducing the bandwidth (fewer variables to transfer) but also allowing easier animation manipulation that AI & scripts could play with, getting us closer to real-time procedural facial animation.
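The compression idea in that last comment can be sketched concretely: if facial-animation channels are statistically correlated, something like PCA can re-express many channels as a handful of coefficients, which is what you would actually send over the wire. A minimal Python sketch with entirely made-up data, channel counts, and variable names (nothing below comes from the Image Metrics system):

```python
# Hypothetical sketch: compressing facial-animation data by exploiting
# correlations between animation channels (e.g. blendshape weights).
# All names and numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Pretend capture: 300 frames of 50 correlated channels. The correlation
# is induced by mixing only 5 underlying "expression" signals plus noise.
latent = rng.standard_normal((300, 5))
mixing = rng.standard_normal((5, 50))
frames = latent @ mixing + 0.01 * rng.standard_normal((300, 50))

# PCA via SVD of the mean-centered data finds the inter-dependencies.
mean = frames.mean(axis=0)
centered = frames - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)

k = 5                        # keep k components instead of 50 channels
basis = vt[:k]               # (k, 50) principal directions, shared once
coeffs = centered @ basis.T  # (300, k) -- this is what gets transmitted

# The receiver reconstructs the full animation from the k coefficients.
reconstructed = coeffs @ basis + mean
err = np.abs(reconstructed - frames).max()
ratio = coeffs.size / frames.size  # fraction of the original bandwidth
print(f"max reconstruction error: {err:.4f}, bandwidth ratio: {ratio:.2f}")
```

Because the toy data really does live near a 5-dimensional subspace, the per-frame payload drops to a tenth of the original while the reconstruction error stays on the order of the added noise. Real capture data would need k chosen empirically, but the correlation-exploiting principle is the same.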
