Suppose I went to a conference on Artificial Intelligence and introduced my talk with the following words: "I'm not interested in AI, I know very little about it, and what I do know is beyond my comprehension. I'm only interested in computer games." What kind of a reaction could I expect?
I just got back from a conference (CGAIDE 2004 *) where a speaker actually said this, only the other way round. He was an AI expert who didn't play, understand or like computer games, but hey, we might be able to find some use for the AI stuff he works on so here it is.
It wasn't just him. Several speakers were keen to explain their ideas of how AI can be used to make a game adapt so that its difficulty level changes dynamically depending on how well a player is playing it - this even though they hadn't actually bothered to find out from developers (let alone players) if this was a good idea or not. One speaker told us about crime scene investigation software, with an, "Oh, and you can use this for computer games probably" tacked on at the end. It happened time and time again.
What is it about studying computer games (or any kind of game) that gives researchers from other disciplines the right to look down on us, patronise us and ignore our work?
Richard
* Disclaimer: several papers were both about computer games and very good. Because of them, I won't be asking for my money back...
While not quite as dramatic, there was a similar tension at this year's AAAI Games AI workshop. I believe Michael Mateas states the problem most succinctly (as well as suggesting the road forward: both sides need to engage with and learn from the other):
In general there was an uncomfortable split at the workshop where AI designers for commercial games focused on creating “fun” AI, and often mentioned cheating, while academic AI researchers focused on “correct” game AI that finds optimal solutions (e.g. crushes the player at every opportunity). In my talk I pointed out that creating an engaging player experience is a first class AI research problem. The choice between cheating or pursuing a traditional AI research agenda is a forced choice; expressive AI is a third option.
from:
http://grandtextauto.gatech.edu/2004/08/04/aaai-game-ai-workshop-trip-report/
Posted by: Nathan Combs | Nov 11, 2004 at 09:37
I share the frustration, Richard, and this was kind of the point of my intro remarks at the Culture panel at SOP II -- those in more "serious" disciplines feel very comfortable taking the approach that games and play are silly and frivolous by definition, so these kinds of "I don't know anything about games" statements are viewed (by those making them) as perfectly reasonable. There's no need for interdisciplinary deference and awareness, because there's a very prevalent perception that there isn't anything serious to be said about games and there is no respectable discipline that would concern itself with play and games.
So, okay, whatever...
The big question for me isn't why people take this position (it's fairly common, probably much more common than the alternative), it's why people would want to attend conferences with that much slack in the mix. The answer, as you imply, is that these kinds of statements, while amazingly widespread, are probably in the minority most of the time.
Posted by: greglas | Nov 11, 2004 at 10:48
Hmm... AI is a second class subject in computer science, and computer science is a second class subject in pure mathematics...
That's not a problem, is it? It's natural that the more generally applicable sciences will view the applied fields as secondary and less essential. And in a way, that is true.
Posted by: Ola Fosheim Grøstad | Nov 11, 2004 at 11:36
Well if they are being snobby about it you could always point out to them how sad it is that they can find no better place for their life's work than by pitching it to game developers :)
Posted by: Eric | Nov 11, 2004 at 12:16
Games have always been seen as a lesser, more frivolous concern than other forms of software. In some ways this is specious -- games tend to be on the leading edge in many areas of computing -- but in other ways it's earned by our own lack of professional process or scholarly principle. This shouldn't be a surprise to anyone who's done actual software engineering in a non-games part of the software industry.
There have been slow changes in this view of games, and it's accelerating (Lawrence Erlbaum Associates is coming out with a stringently academic book on games and game design soon, for example). Indeed, the fact that there are games and AI conferences at all, and that academicians are paying attention to games even in passing, is evidence of this change.
Outside of research academia, the relationship between AI and games is changing rapidly. There is increasing interest in combining AI and games for other ends (military, security or intelligence training, etc.) and for adapting academic AI for commercial purposes, including games. We're working with DARPA and other US government agencies along exactly those lines.
That said, my recent experience with academia is that while they're very interested in applications of AI in games, there's still a huge chasm between the two worlds. In many ways this rift is as wide as that between games and, say, the banking IT industry. Games and gameplay are still outside the area of expertise of many academics, and yet games are increasingly seen as a way to validate academic work. This feels familiar to me, as it's similar to how AI work was justified by the putative business case for "expert systems" back in the 1980s. That never really panned out -- AI is known for a long string of failed promises -- but for a while it gave researchers a way to make their work seem relevant to the commercial world, and gave them commercially oriented statements with which to bookend their research presentations.
But the academic rift aside, my guess is that AI is one of the next big frontiers for games and gameplay: AI will be to the next ten years of games what graphics technology was to the last decade.
Posted by: Mike Sellers | Nov 11, 2004 at 14:13
Why do games people have such a chip on their shoulder about their chosen field? Why are you concerned how people (much less academics) perceive your field? Perhaps there are Daddy issues here. Are you still unconsciously seeking his approval, while he dismisses your career as folly?
The AI researcher basically paid the games field a compliment by mentioning it, but I don't think you caught it. Basically he was indicating that he WANTS them to find a use for his research (a researcher Daddy-approval issue I bet). Games are in everyone's blood and everyone wants to be a part of it, whether they admit it or not.
Games are even important to Daddy, who loves to play an old A-10 Thunderbolt sim in between lectures about noble professions and the importance of real work...
Posted by: Adam Miller | Nov 11, 2004 at 18:13
To be cynical: It could be because game companies don't have research teams dedicated to AI research. Nor, I suspect, do games companies provide grants to AI researchers.
For that matter, how many games have AI needs that are worth researching? (Oblivion, the sequel to Morrowind, has AI that sounds interesting though. Are there others?)
Posted by: Mike Rozak | Nov 11, 2004 at 18:22
greglas has it exactly right. If I'd heard the same people Richard heard, I'd also want to ask: if that's how you really feel about computer games, why are you here?
That said, it still might be interesting to learn more about what engenders this attitude. I suspect that to those who haven't yet gained a broad understanding of the field, "computer games" is only what you hear from the mass media: they glorify violence, lawbreaking, profanity, and the objectification of women. If you're of a somewhat leftish cast (as it is thought most academics are), this is stuff you reflexively don't want to come anywhere near.
(What's really interesting is how developers who themselves tend to lean leftward can justify creating games that emphasize things like "gun violence," but that's another thread.)
A related problem for AI academics: if your funding is federal, how would it look if you were seen in the company of people whom Congress could at any point decide are just this side of child pornographers?
Finally:
Mike Sellers> "...my guess is that AI is one of the next big frontiers for games and gameplay: AI will be to the next ten years of games what graphics technology was to the last decade."
If I may ask: Why do you think this?
I'm curious as to what role it's felt AI should play in virtual worlds, particularly over the next ten or so years. Assuming we're talking about purposes beyond pathfinding and line of sight, what else is there for AI but NPC/creature AI?
Is the point really to be able to create bots so clever that other (RL) players are unnecessary? In which case, will there be any point to massively multiplayer worlds being online?
--Flatfingers
Posted by: Flatfingers | Nov 11, 2004 at 18:47
As an academic researcher who's been doing AI research directed at games for a few years now, I've encountered this attitude from other researchers too often in the past. I often used to hear "I think games are an increasingly important application of AI research and my research applies to games" quickly followed by "Oh, I don't have time to play games. I have more important things to do." I also agree that the gap between academic researchers and commercial developers is too large right now.
That said, I do think things are getting better. I know of a pretty large number of academic AI researchers who are also avid gamers and are focused on AI that makes games more fun (and don't care about optimality). This topic came up in a recent article that I found amusing (not only because I'm mentioned):
http://www.educause.edu/pub/er/erm04/erm0454.asp
Posted by: MvL | Nov 11, 2004 at 20:25
Flatfingers, we're a long way away from making other human players unnecessary. Actually I doubt that this will ever happen, though we may eventually see AIs take their place alongside humans in games.
I think that AI is the next big frontier because we've gotten sufficiently good at 3D rendering and other similar technologies. We've pretty well wrung the technology sponge dry. Improvements from here in rendering, animation, content creation, delivery, etc., will likely be incremental and marginal, not having a dramatic impact on gameplay.
OTOH, we're in the very early stages of knowing how to create artificial characters that people care about (and vice versa), and that aren't scripted or cognitively on rails. I think we now know how to show good game-stories (and I don't mean to get drawn into an "are games stories?" thread); next we need to be able to make the games and the people in them come alive. This isn't just about pathfinding nor about making a more lethal bot; it's about making actual interactive characters with personality, memories, relationships, beliefs, preferences, expressions, etc. It's a huge challenge, and one that I believe will pay off just as 3D has.
As an example, I saw a demo of EQ2 at this past E3. The demo had you on a boat along with several NPCs who would talk to you under very specific circumstances, but which were unresponsive otherwise. One of these, a beautiful elf woman, gave you a little intro task to get a goblin (now running around on the deck) back in his cage. Okay. But right after asking me to do this, she goes back to her mindless "lights on, nobody home" idle state -- while the goblin raced maniacally around her ankles. She was unaware of the goblin, the PC, other NPCs, anything. She was just a vending machine with nice rendering.
I think games like this will be much more effective when we're able to interact with characters in a world that know about you, the world, other NPCs, etc. We'll be able to tell more engaging stories (and have far fewer canned fed-ex quests) when the NPCs act more like stage actors than mannequins.
Posted by: Mike Sellers | Nov 11, 2004 at 20:41
By the way, does anyone know of any good books on NPC AI? I'm interested in planning, personality models, conversations, etc.
Posted by: Mike Rozak | Nov 11, 2004 at 21:44
Adam Miller> Perhaps there are Daddy issues here. Are you still unconsciously seeking his approval, while he dismisses your career as folly?
Guilty as charged.
Anyway, John Laird and Mike Van Lent are AI people trying to change the perception of games within AI research:
http://ai.eecs.umich.edu/people/laird/gamesresearch.html
Also, Bob Ellickson said much the same thing at SOP II ("I don't know anything about this subject, but here goes...").
As for the triviality problem in games ... I just read a paper on this that, in my view, does the best job evar of clarifying these issues in my mind. The argument is that confusion about seriousness and games is a side effect of modernism. Very persuasive, but it is one of those "Don't cite without permission" things, so we'll just have to wait for it to come out.
Posted by: Edward Castronova | Nov 11, 2004 at 22:38
Mike (Rozak), I don't know of any good books on NPC AI (personality, conversations, etc.), and I've looked. This area is in its infancy. Wait a year or so and they should start popping up.
Re: triviality of games -- years ago at a CHI conference Randy Pausch led a panel on the Darwinian effect of the marketplace on games as an explanation for why they lead in so many areas of innovation. Essentially, since no one has to play a game, only the ones that really grab people succeed, and so there's a terrific evolutionary pressure. Back in the days of game arcades where people plunked in quarters for games, the fitness landscape was starkly clear: those games that got quarters also 'got' offspring -- more games like them. Those that didn't, died off.
The evolutionary pressure is a bit lighter today, or maybe the landscape is broader, but the fundamental conditions still apply.
Posted by: Mike Sellers | Nov 12, 2004 at 00:00
Edward Castronova wrote:
"As for the triviality problem in games ... I just read a paper on this that, in my view, does the best job evar of clarifying these issues in my mind. The argument is that confusion about seriousness and games is a side effect of modernism. Very persuasive, but it is one of those "Don't cite without permission" things, so we'll just have to wait for it to come out."
Has anyone here read Bruno Latour's "We have never been modern" ?
He brings an interesting perspective on the whole passion/reason partition meme of modernism, and how it applies to Western thinking post-17th century.
It seems very relevant to the sociology and collective psyche of software and games designers/researchers.
[Caveat: I didn't read the English translation by Catherine Porter, and some readers seem to have found it difficult to read. If it is, I honestly couldn't say who's to blame, since Latour's writing is pretty densely packed in this specific book.]
Posted by: Yaka St.Aise | Nov 12, 2004 at 02:06
greglas>The big question for me isn't why people take this position ..., it's why people would want to attend conferences with that much slack in the mix.
Well personally I attended the conference for two reasons: to find out what the state of computer game research is like in the UK now that I have a research post (answer: very good in places, very bad in others); and to increase my research standing as judged by the academic review panels (CGAIDE was vastly inferior to SoP2, but had IEE sponsorship).
What particularly annoyed me was that it had an "impressive" 12% rejection rate for the papers (!) yet they still managed to get in two that were almost identical to lectures I give to my students. One differed only in the innovation of using an over-estimator in the A* algorithm (?!) and the other differed only in that I described BREW to my students and this chap didn't describe it to us.
Richard
Posted by: Richard Bartle | Nov 12, 2004 at 03:34
Richard -> Well personally I attended the conference for two reasons: to find out what the state of computer game research is like in the UK now I have a research post (answer: very good in places, very bad in others)
Which are, in your opinion, the good places? What research do they do and why do you find it good?
Also, I had a look at the program of the IJIGS conference; what sessions/papers did you find worthwhile?
Posted by: Mirjam | Nov 12, 2004 at 07:41
Adam Miller>Why do games people have such a chip on their shoulder about their chosen field?
I have the chip on my shoulder because I see money that should be going to games research being diverted to AI (and other) research that claims to be about games but isn't. Some AI research genuinely is about games, but others just use it as a potential application so as to sex up what is otherwise mainstream AI (or whatever) work.
>Why are you concerned how people (much less academics) perceive your field?
Because if they base their decisions on false perceptions, that can make life for us very difficult. AI itself was set back 10 years in the UK after a publication by a physicist (the "Lighthill Report") recommended that it wasn't worth funding. I don't want to see that kind of fate befall the nascent field of computer games research.
>Perhaps there are Daddy issues here. Are you still unconsciouly seeking his approval, while he dismisses your career as folly?
Nice try, but there's no hero's journey here. I already dismissed my career as folly; my concern is that the same fate could await people just starting out on their careers.
>Basically he was indicating that he WANTS them to find a use for his research
And that's a compliment? "Here's some research, find some relevance in it for your chosen field" is a compliment? It doesn't sound like one to me. To me, it sounds very condescending. We can't possibly have anything to say ourselves, so we have to listen to people with no interest in the area who may give us a few scraps we can play with.
Richard
Posted by: Richard Bartle | Nov 12, 2004 at 10:09
Flatfingers>I'm curious as to what role it's felt AI should play in virtual worlds, particularly over the next ten or so years.
There are a number of areas, but the neural network people were particularly out in force, and particularly out to make games "learn" from your play. Note that they don't distinguish between AI opponents and the game itself, so having an opponent learn to defend against your tank rush is exactly the same as having the virtual world learn to put bogs in its random map generation so you can't use tank rushes.
Richard
Posted by: Richard Bartle | Nov 12, 2004 at 10:16
Mirjam>Which are the, in your opinion, the good places? What research do they do and why do you find it good?
Unfortunately, I know that some of the people who attended the conference read Terra Nova, therefore if I say which papers I liked I'll be telling those people whose papers I didn't like that I didn't like them. I don't need that many more new enemies...
Richard
Posted by: Richard Bartle | Nov 12, 2004 at 10:19
Mike Rozak>By the way, does anyone know of any good books on NPC AI? I'm interested in planning, personality models, conversations, etc.
Crawford's book is a pretty good survey work. Chris talks through a lot of the character modelling challenges he ran into in constructing the Erasmatron, and discusses how other projects (Oz, IDtension, etc.) have solved those problems.
There's also a Charles River Media book, which has less pithy wisdom, but more source code.
Neither is a definitive tome on the subject, but both are worth a read if the subject interests you.
Posted by: Joseph Breitreiter | Nov 12, 2004 at 12:45
Thanks for the responses to my questions. I should probably mention that they weren't aimed at knocking AI in games -- just the opposite. I also remember how AI was oversold in the '80s, particularly (as noted) in expert systems, and I hope not to see a similar fate befall the construction of virtual worlds.
After giving all the responses some thought, I believe my original question still holds: if the future of AI in virtual worlds is to make NPCs more human-like, isn't that likely to lead us to a place where we no longer need many other humans in our worlds?
Note that I'm not suggesting that people won't *want* human interaction -- just that once interaction with humans is no longer necessary to get what one wants, a lot of people will happily make do with the AI substitute.
My belief is that most people in virtual worlds today -- gamers -- want a beatable challenge. They don't care whether it's live or AI as long as it meets those two criteria: it has to be a challenge, and it has to be beatable. The utterly utilitarian nature of most gamers suggests to me that if AI gets good enough to satisfy this sort of PvP Turing test, then there's little reason to need other players.
(OTOH, it's possible that what many PvPers actually enjoy is the thought of making other people upset... but if that's really the case, is it ethical to use AI -- or anything else -- to improve the PvP experience?)
One thing that may minimize this effect will be the desire to play with friends. (Victory is hollow unless witnessed by people whose respect one wants.) But at best that suggests small collections of people -- basically LAN party size -- rather than massively multiplayer systems.
Of course AI isn't just about adding tactical awareness and decision-making to NPCs. As noted, it can also be used to give NPCs (and other objects in the world) more lifelike aspects, greatly enhancing roleplaying and storytelling. (Wouldn't it be interesting to see Doug Lenat's CYC embedded in a virtual world's NPCs?) This kind of thing is attractive to me personally, and there are probably others who feel the same way, but if I'm being honest I have to wonder whether there are enough of us for the technology to earn back its development costs.
I'd also be interested in seeing AI used at a strategic level. What if the virtual world itself was designed to adapt its high-level rules over time to respond to aggregate decisions by participants in that world? Whether through neural networks or genetic algorithms or some other construct, the world would have the ability to change itself to achieve the strategic goals defined by the world's developers.
But as interesting as this sounds, what I don't know is whether it can be shown to have enough of a payoff that virtual world designers would be willing to implement it. To be blunt, is strategic-level AI sexy enough for any developer to want to spend the time and money to do it?
I'm just not seeing any way around it: strong AI in virtual worlds is most likely to show up embedded in NPCs, which in turn seems likely to make massively online systems unnecessary.
But who knows -- maybe some social virtual world will become the killer app that pulls in tens of millions of regular participants, and that will become what we think of when we say "virtual world" instead of today's "MMOG" definition.
In which case, AI could indeed play a role... but if the point is human interaction, then who needs AI?
--Flatfingers
Posted by: Flatfingers | Nov 12, 2004 at 14:15
Flatfingers>if the future of AI in virtual worlds is to make NPCs more human-like, isn't that likely to lead us to a place where we no longer need many other humans in our worlds?
If they're indistinguishable from humans by their behaviour, yes indeed.
>They don't care whether it's live or AI as long as it meets those two criteria: it has to be a challenge, and it has to be beatable.
So if the game changes so that when you can't beat it, it lets you win, and when you can beat it easily, it makes itself harder (so you "only just" beat it), would that be OK with you? Or do you prefer not to have the game change while you play it?
What if it's not the game itself, but the AI opponents? I've been playing a game recently, Locomotion, where the opponent AI follows what I'm doing and then tries to do it first. I'll be looking at a possible route, then see railway tracks appear that steal what I was going to do. If I look elsewhere, the AI doesn't build those tracks. So now, my game isn't to build stuff I want to build, it's to look at half-OK routes hoping to tempt the AI to waste time building them while I zoom off to where I REALLY want to build the tracks. I can't tell you how frustrating this is...
>(OTOH, it's possible that what many PvPers actually enjoy is the thought of making other people upset... but if that's really the case, is it ethical to use AI -- or anything else -- to improve the PvP experience?)
Is it ethical to create intelligences that you know you are going to destroy the next time you reboot your system?
>What if the virtual world itself was designed to adapt its high-level rules over time to respond to aggregate decisions by participants in that world?
People would stop looking at it as a world, and look on it as another thing to be gamed. They'd train it so it did something dumb, then rapidly exploit it before they could react. They'd be playing a game, but not the one the virtual world is itself set up for people to play.
Richard
Posted by: Richard Bartle | Nov 12, 2004 at 16:19
Flatfingers>if the future of AI in virtual worlds is to make NPCs more human-like, isn't that likely to lead us to a place where we no longer need many other humans in our worlds?
Richard> If they're indistinguishable from humans by their behaviour, yes indeed.
Uhm no. Fucking a doll isn't the same as fucking a girl that loves you. Even if the doll is a better performer...
Posted by: Ola Fosheim Grøstad | Nov 12, 2004 at 16:48
Mike Sellers and Joseph Breitreiter - Replies about AI books.
Thanks for the info. I just read the Crawford one (which I liked), and I'll order the other today.
But isn't it interesting: if I had asked "Does anyone know any good 3D game-development books?" I'd be given a list of 20 to choose from. (There are probably 100+ 3D-related books on Amazon.com.)
Back in the late 1980s there were only a handful of 3D books, the main one by Foley and van Dam. Will it take 20 years for AI to get as big as 3D is now? 3D research was concerned solely with movies, commercials, and military flight simulators then. One of my professors was working with physically based modelling (simulating jello and chains), which has just recently appeared in games. Another was doing fur (teddy bears), which has yet to appear in anything but an NVidia demo.
Flatfingers - if the future of AI in virtual worlds is to make NPCs more human-like, isn't that likely to lead us to a place where we no longer need many other humans in our worlds?
My POV is:
- AI to do the in-game work that players don't want to, like being farmers or guards.
- AI as part of the game... players employ NPCs or pets, whose AIs they "program/train" to undertake actions. These actions could take place even when the player logs out... Just think of the Monty Python and the Holy Grail scene in the castle: "You stay here and guard the door. Don't let anyone in..."
- AI to reduce the ratio of players to NPC villagers, so that immersion isn't ruined as much by players who aren't role-playing. (Not that I expect players to role-play unless they want to, and most don't want to.)
- AI whose purpose it is to make the game fun for the individual player; kind of like a personal Dungeon Master.
Posted by: Mike Rozak | Nov 12, 2004 at 17:08
Richard> if the game changes so that if you can't beat it then it lets you, or if you can beat it then it makes it hard (so you "only just" beat it) would that be OK with you? Or do you prefer not to have the game change while you play it?
That depends on the purpose of the game. If the game is meant to be a one-time experience, then a static challenge is preferable; if the game is meant to last (as in persistent worlds), then I think a dynamic system that scales successive challenges to just below a player's demonstrated skill level is preferable.
As long as individual tactical challenges remain (on average) at the "just beatable" level, player interest and satisfaction are maximized over the medium term.
Does that seem manipulative? Or is it simply good customer service?
> What if it's not the game itself, but the AI opponents?
Same answer. Individual tactical challenges, whether instantiated as missions or NPCs, need difficulty levels that remain static while you're working on them. Moving the finish line during the race seems too much like penalizing success; allowing game AI to have access to information or capabilities that players don't have seems like cheating. As you noted in "Locomotion," it feels abusively unfair, and that's just not fun.
But once a static challenge is successfully overcome, there needs to be another challenge available that's harder. Individual challenges need static difficulty levels, but successive challenges need to scale in difficulty according to the player's demonstrated ability.
The bottom line is one of perception: If it's too easy, it's not fun. If it's too hard (i.e., ultimately impossible to win), it's not fun.
And since "easy" and "hard" will change as a player improves in skill, using AI to scale individual tactical challenges to that player seems not only reasonable but desirable because that's how you ensure that each player has fun over time.
(I keep using the phrase "individual tactical challenges," BTW, because strategic-level challenges work differently. They don't require the same amount of catering to player perceptions because, being more abstract, they are less perceptible to the typical player. That means players will tolerate a lower level of victory in the strategic game, possibly even < 50%. At the extreme, you could consciously create a game in which overall victory is impossible and people would still play it... as long as individual challenges were winnable on a sliding scale. Eventually players might realize they're in a no-win scenario, but by then you've already got their money and brought in a replacement level of newbies. Please note that I'm not encouraging this kind of attitude; I'm just saying that it's possible to design such a game based on the average person's lack of comfort with thinking about anything other than the here and now.)
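To make that mechanism concrete, here's a rough sketch (the class name, numbers, and update rule are all invented for illustration): the difficulty of a challenge is frozen once it's issued, and only the *next* challenge is pitched just below the player's demonstrated skill.

```python
# Illustrative sketch (hypothetical names/numbers): individual challenges
# keep a fixed difficulty once issued; only successive challenges are
# re-pitched just below the player's demonstrated skill.

class ChallengeScaler:
    def __init__(self, start_skill=1.0, margin=0.1, learn_rate=0.2):
        self.skill = start_skill      # running estimate of player ability
        self.margin = margin          # how far below the estimate to pitch
        self.learn_rate = learn_rate  # how fast the estimate tracks results

    def next_difficulty(self):
        # Frozen for the duration of the challenge; the bar only moves
        # between challenges, never mid-attempt.
        return max(0.0, self.skill - self.margin)

    def record_result(self, difficulty, won):
        # Move the skill estimate toward the evidence: a win suggests the
        # player's skill sits comfortably above this difficulty, a loss
        # suggests it is at or below it.
        target = difficulty + 2 * self.margin if won else difficulty
        self.skill += self.learn_rate * (target - self.skill)

scaler = ChallengeScaler()
d = scaler.next_difficulty()       # issue a "just beatable" challenge
scaler.record_result(d, won=True)  # player beat it...
print(round(scaler.next_difficulty(), 3))  # ...so the next one is a bit harder
```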
>> (... if [what many PvPers actually enjoy is the thought of making other people upset], is it ethical to use AI -- or anything else -- to improve the PvP experience?)
> Is it ethical to create intelligences that you know you are going to destroy the next time you reboot your system?
I would say no, but then I don't believe any such intelligences exist... yet. The day that Actually Intelligent lifeforms other than ourselves are brought into existence is the day on which we'll need to grapple with the question of whether a blackout at the local power station constitutes genocide.
Until then, the person at the other end of the PvP connection is a real person, who therefore deserves ethical consideration. So I'll rephrase my no-longer-parenthetical question: If PvP, as many of its adherents claim, is about finding an acceptable level of challenge, fine... but what if PvP is more about satisfying an ugly enjoyment of upsetting other people through beating down their avatars? Is it ethical to create games that promote this? Or would it be better to create effective AI opponents so that no one has to deal with trash-talking, corpse-raping, and the other delightful artifacts we see when PvP against people is allowed?
Is that being too sensitive? Or do people have a reasonable expectation when they pay for entertainment (as in a computer game) that jerks won't be allowed to deliberately ruin their enjoyment of that entertainment experience?
>> What if the virtual world itself was designed to adapt its high-level rules over time to respond to aggregate decisions by participants in that world?
> People would stop looking at it as a world, and look on it as another thing to be gamed. They'd train it so it did something dumb, then rapidly exploit it before they could react. They'd be playing a game, but not the one the virtual world is itself set up for people to play.
With respect, I'm not convinced this is so.
I used the word "aggregate" deliberately; strategic gameplay changes would be based on the gameplay decisions of a significant percentage of the user base. Given a sufficiently large player base, it would be virtually impossible for even the largest consciously-led group to have enough members or time to do anything that a strategic decision-making system would notice... and that's assuming that all group members would follow orders, AND that any player even noticed the strategic changes being made in the first place.
Experience makes it pretty clear that there'll always be a few observant players, so you can't count on "security through obscurity" alone. A good strategic AI would rely instead on making only subtle changes to large-scale game rules, and on basing those changes only on what a significant proportion of the player base -- a proportion far larger than the largest possible player group -- actually does in the course of normal gameplay.
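To make the "aggregate only" idea concrete, here's a rough sketch (the threshold, step size, and player counts are all invented for illustration): a rule only moves when a large fraction of the whole player base exhibits the behaviour during a sample window, so even the biggest organized group falls well short.

```python
# Illustrative sketch (hypothetical threshold/step values): a world rule is
# nudged only when a large fraction of the *whole* player base shows the
# behaviour, so no single organized group can train the system.

def adjust_rule(rule_value, behaviour_by_player, threshold=0.30, step=0.02):
    """behaviour_by_player maps player id -> True if that player used the
    behaviour during this sample window."""
    population = len(behaviour_by_player)
    if population == 0:
        return rule_value
    share = sum(behaviour_by_player.values()) / population
    if share >= threshold:
        rule_value += step   # a subtle, bounded nudge, not a visible change
    return rule_value

# 10,000 players sampled; a coordinated guild of 500 all exploiting the
# behaviour is only 5% of the base -- far below the 30% threshold.
sample = {pid: (pid < 500) for pid in range(10_000)}
print(adjust_rule(1.00, sample))   # -> 1.0, no change
```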
A game world that adapts to its customers by automatically scaling itself to their gameplay levels sounds to me like the last type of AI that Mike suggests: one that helps the designer by handling some of the necessary ongoing game balancing chores.
Why is that either impossible or undesirable?
--Flatfingers
Posted by: Flatfingers | Nov 14, 2004 at 03:56
Ola Fosheim Grøstad>Fucking a doll isn't the same as fucking a girl that loves you. Even if the doll is a better performer...
Two things:
1) How would you know?
2) We were talking about people within a virtual world, where, by definition, you can't fuck people anyway (well, you can be, but we usually call it "nerfing").
Richard
Posted by: Richard Bartle | Nov 14, 2004 at 07:38
Flatfingers>if the game is meant to last (as in persistent worlds), then I think a dynamic system that scales successive challenges to just below a player's demonstrated skill level is preferable.
But in that case, why play? You're going to succeed whatever you do, and no matter how good you get you'll never see any benefit from it.
>Individual challenges need static difficulty levels, but successive challenges need to scale in difficulty according to the player's demonstrated ability.
This isn't what the AI people were saying. They were saying that if you approach the same challenge as a good, poor or medium player, the challenge will adjust so that you always only just succeed.
>And since "easy" and "hard" will change as a player improves in skill, using AI to scale individual tactical challenges to that player seems not only reasonable but desirable because that's how you ensure that each player has fun over time.
Isn't it the job of the player to decide whether something is easy or hard? Sometimes you just WANT an easy challenge, but what if they all adjust themselves so they're always the same level of difficulty?
>but what if PvP is more about satisfying an ugly enjoyment of upsetting other people through beating down their avatars?
This is indeed the case for some people.
>Is it ethical to create games that promote this?
Is it ethical not to create these games, so people go off and do it in reality instead?
I'd prefer to create the games but make the strategy a losing one. Let people beat on other player characters for fun if they want to, but ensure that they realise they'll get a better return on their time if they co-operate instead.
Richard
Posted by: Richard Bartle | Nov 14, 2004 at 07:50
Flatfingers wrote: A game world that adapts to its customers by automatically scaling itself to their gameplay levels sounds to me like the last type of AI that Mike suggests: one that helps the designer by handling some of the necessary ongoing game balancing chores.
Actually, I meant a game world that knows what each user likes to experience, and what each user needs to experience, just as a real-life dungeon master will tailor a game to its players.
You might think of it as a personal tour-guide that works behind the scenes and ensures that the game is as fun as possible for each individual player. It doesn't necessarily have to bend the rules to do so, either.
Difficulty level, which involves bending the rules, is a sub-set of the whole goal, and might actually be a bad thing to make automatic, because then the "game" that players choose to play will be to fool the AI that controls the game difficulty.
Posted by: Mike Rozak | Nov 14, 2004 at 16:23
Richard> 1) How would you know?
Well, if you couldn't find out then you would just assume that everybody else is an NPC... but in general, players would invent ways to find out, out-of-game. AI cannot satisfy A/S/L desires...
Posted by: Ola Fosheim Grøstad | Nov 14, 2004 at 22:07
Mike Rozak>Actually, I meant a game world that knows what each user likes to experience
But what if some users like to experience a world that doesn't know what they want (as with the real world)?
Richard
Posted by: Richard Bartle | Nov 15, 2004 at 02:59
Mike Rozak> You might think of it as a personal tour-guide that works behind the scenes and ensures that the game is as fun as possible for each individual player.
This can only work if you define the major portions of the context for the user, including setting goals for the user, which makes it less of a world and more of a narrated structure. I really hope this approach will be abandoned. I also find the manipulative connotations highly unethical. The grind is bad enough as it is; an adaptive grind will make it worse.
Anyway, there is a fairly large body of largely unsuccessful research on adaptive interfaces. Do a search on user modeling...
Posted by: Ola Fosheim Grøstad | Nov 15, 2004 at 08:07
Interesting responses -- thanks!
My goal here isn't to identify The Answer, but just to explore some of the assumptions we make when discussing what AI in virtual worlds (and their online games subset) is for.
With that said:
>> a dynamic system that scales successive challenges to just below a player's demonstrated skill level is preferable.
> But in that case, why play? You're going to succeed whatever you do, and no matter how good you get you'll never see any benefit from it.
A challenge doesn't mean you'll automatically win -- it means you have a *chance* to win if you play the game well. It's not the certainty of winning that keeps people playing -- it's the possibility of winning.
> Isn't it the job of the player to decide whether something is easy or hard? Sometimes, you just WANT an easy challenge, but if they all adjust themselves so they're always the same level of difficulty?
That's a fair point. I personally tend to agree with those who hold that players should have the power to determine their own level of challenge, so I'm good with making "adaptive challenge" a user-selectable option.
In other words, I'm not saying every challenge must be just beatable; I'm saying that just-beatable challenges must be available.
If a player prefers to spend hours beating up on newbie mobs because it's safer, he should have that option (as long as it doesn't seriously undermine the availability of these mobs for true newbies); ditto for "impossible" challenges. If he ever tires of this and becomes ready to test his limits, scalable challenges will make that possible.
...
I wonder if we're considering this question on different levels, relating to how we answer the question "what are these virtual worlds for?"
From the standpoint of someone making a commercial online game, it's about structuring the game to induce people to keep playing -- you give them just enough of what they want to keep them coming back for more.
From the standpoint of the educator, thinking of these worlds as limited to mere entertainments to be won or lost is too constricted. Instead, virtual worlds should be understood as teaching tools, as extensions of the physical world in which life lessons (both harsh and profound) can be learned for application in the physical world.
And from the standpoint of the philosopher, even the educator's view is too narrow: these virtual worlds are properly thought of as new places in which to learn things about the human condition. From this point of view, not only should you be able to do anything in a virtual world you can do in the real world (including winning and losing games), but you ought to be able to do any conceivable thing.
There are certainly other perspectives, but I think these capture the majority of ways we look at online worlds. For some, a virtual world is a research lab; for others, it's a pedagogical device; for the vast majority, it's just another entertainment source. It's not impossible for an individual to be able to move between all three of these perspectives, of course, but most people don't.
If we're going to talk about what AI is for in virtual worlds, we probably ought to start by being clear about what we think virtual worlds are for.
--Flatfingers
Posted by: Flatfingers | Nov 15, 2004 at 16:36
Flatfingers> If we're going to talk about what AI is for in virtual worlds, we probably ought to start by being clear about what we think virtual worlds are for.
Freedom.
Posted by: Ola Fosheim Grøstad | Nov 15, 2004 at 18:35
In response to my suggestion of an AI that understands what the player likes, there were two posts:
Richard Bartle wrote: But what if some users like to experience a world that doesn't know what they want (as with the real world)?
Ola Fosheim Grøstad wrote: This can only work if you define the major portions of the context for the user, including setting goals for the user, which makes it less of a world and more of a narrated structure. I really hope this approach will be abandoned. I also find the manipulative connotations highly unethical. The grind is bad enough as it is, an adaptive grind will make it worse.
Here's an example of my thinking, as related to a story:
When reading/watching a story, I like backstory and characterization. I don't like too much romance or combat. If I could tell a book this information, and (of course) have the book intelligently listen to me, the book could tailor its content towards my likes without having to change the plot or anything major.
A technical implementation of this approach might be for the author to write the book. Then, for each paragraph (or chapter), write a shortened version, or maybe a longer version. Each paragraph (or chapter) is tagged with the type of content it contains. When I am reading the book, paragraphs with backstory and characterization would display the long version, while paragraphs with romance and combat would show the shortened version. I'd walk away with a better "fit" for the book.
Similarly, the book could use simpler words for young readers, omit sex/violence, etc.
(Yes, I am aware that no AI is necessary for the book example. I suspect that AI will be necessary for a virtual world though.)
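Something like this rough sketch is what I have in mind for the book case (the tags, preferences, and text are all made up for illustration): each passage carries a content tag plus a long and a short variant, and the reader's stated preferences pick which variant is shown.

```python
# Illustrative sketch (hypothetical tags/preferences): pick the long or
# short variant of each passage according to the reader's stated likes.

PREFERENCES = {"backstory": "long", "characterization": "long",
               "romance": "short", "combat": "short"}

book = [
    {"tag": "backstory", "long": "The keep had stood for nine centuries...",
     "short": "The keep was ancient."},
    {"tag": "combat", "long": "Steel rang on steel as the line buckled...",
     "short": "They fought; the line held."},
]

def render(book, prefs):
    # Fall back to the long variant for tags the reader didn't mention.
    return "\n".join(p[prefs.get(p["tag"], "long")] for p in book)

print(render(book, PREFERENCES))
```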
Now, extend this idea to virtual worlds. How, I'm not exactly sure. I have vague ideas floating around my head.
Of course, "level of difficulty" settings are a partial implimentation, and of course, they have problems of their own. It's all a tradeoff.
Richard Bartle's comment: As far as the player not wanting the world to mold itself to the player's needs... fine. The player can turn it off or on.
I somewhat dispute the assertion that the real world doesn't mold itself to a person's needs, perhaps through a technicality of definition. In a virtual world, NPCs are usually considered part of the world. In the real world, if you are sick, people tend to be more compassionate and willing to help you, like go to the store and buy medicine. Would NPCs not do the same in similar circumstances?
If an NPC king knew that you (or rather, your character) enjoyed killing ogres, wouldn't he mention to your character (and hence you) if any ogres were nearby?
Isn't it conceivable that the NPC king would let it be known, in idle conversation, that he knows someone who likes killing ogres? Then couldn't it be generally known by NPC AI's in-the-know that your character likes killing ogres?
I'd call this an instance of the world customizing itself to the player.
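Here's a rough sketch of how that kind of NPC "gossip" might propagate (the names and data structures are purely illustrative): a fact about a player spreads to connected NPCs, who can then volunteer relevant information in conversation.

```python
# Illustrative sketch (hypothetical names): a fact observed by one NPC
# spreads to the NPCs it talks to, who can then bring it up themselves.

from collections import defaultdict

class GossipNetwork:
    def __init__(self):
        self.knowledge = defaultdict(set)  # npc -> set of (player, fact)
        self.contacts = defaultdict(set)   # npc -> npcs it regularly talks to

    def observe(self, npc, player, fact):
        self.knowledge[npc].add((player, fact))

    def gossip_round(self):
        # Each NPC passes everything it knows to its contacts.
        for npc, contacts in list(self.contacts.items()):
            for other in contacts:
                self.knowledge[other] |= self.knowledge[npc]

    def smalltalk(self, npc, player, nearby_opportunities):
        # Mention nearby opportunities matching what this NPC knows the
        # player enjoys.
        likes = {fact for (p, fact) in self.knowledge[npc] if p == player}
        return [opp for opp in nearby_opportunities if opp in likes]

world = GossipNetwork()
world.contacts["king"] = {"innkeeper"}
world.observe("king", "you", "killing_ogres")
world.gossip_round()
# The innkeeper has never seen you fight, but still knows to mention ogres.
print(world.smalltalk("innkeeper", "you", ["killing_ogres", "fishing"]))
```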
Ola Fosheim Grøstad's comment: As far as thinking about it as an adaptive grind... that assumes that a virtual world can only be a grind. I think/hope it's possible to have a virtual world which isn't a grind.
Posted by: Mike Rozak | Nov 15, 2004 at 19:04
In response to Mike Rozak: having the Ogres hate you because you kill Ogres isn't really all that adaptive. Having Ogres pop out of the bushes might be, but it will very quickly get annoying if the system keeps harassing you with Ogres when you want to have cybersex... That is the key issue in adaptive systems: you cannot know what goals the player is pursuing unless you severely narrow down his options... Thus, bye bye world.
There are other possibilities of course, and some are OK, but a lot of them can easily lead to manipulative entrapment designs. Those make me sick, and I think they could give the whole genre a bad name -- well, it already has a bad name, but still... (I am not going to share any ideas as I don't want to see them implemented! :-)
Posted by: Ola Fosheim Grøstad | Nov 17, 2004 at 19:36