
Dec 20, 2003



Shannon Appelcline wrote an article some time back that is spot on regarding this issue...



While I was with UO, I tried out several forms of reputation system. The first was something called the notoriety system, whereby the game system attempted to detect good and bad actions, and adjusted a stat on the character based on their history of actions. It led to all the bad guys having sterling reputations and all the good guys having terrible reps, because the good guys were willing to sacrifice their good stats in order to take down the bad guys (who had great reps through abuse of the system). I suppose that in some ways this is an accurate simulation of real life.

After that failed we moved on to one where transactions were assessed by a human, rather than by the computer. This was explicitly called the "reputation" system as opposed to the notoriety system, and was directly inspired by eBay and Slashdot. Each murder you committed gave the victim the choice to report you, and to submit cash towards a bounty on your head. Other players could kill you, turn in your head, and get the cash.

The result was predictable in these days of mathematical modeling of negative rep systems (see the work of Toshio Yamagishi, for example). Numerous tricks had to be put in place in order to curtail people working off the murder counts over time (we believed that people needed to be able to reform, which led to people "macroing off murder counts" in their homes among other behaviors). UO never tried a straight positive rep system, despite dev flirtations with trust networks and various other methods. After I moved off the team, the UO lands were split and the experiment in reputation-based social controls ended in favor of a PK switch.

Guilds in muds were almost always software mediated. UO did not launch with guilds, but had them in place with code before the first expansion came out, as I recall (I wrote the guild system, but I am hazy on the date). EQ's system was somewhat less robust, and DAoC's is somewhat more robust, with explicit support for ranks.


Dan Hunter>Anyway, this got me thinking about reputation in virtual worlds...

Congratulations. You have now reached 1997 in the MUD-DEV archives.



Well thanks, Richard, for pointing this out in your usual welcoming style. I've always been a little slow.

But since you bring it up this way, let me say that part of the reason for posting this was to query whether we have moved past reputation systems circa 1997. The only world I'm aware of that has launched since around then with explicitly coded rep is TSO, and that hasn't been an unmitigated success. And as Raph points out, it's been dropped in UO.

So, as you might put it, Richard: congratulations, your field hasn't progressed in software coding of reputation systems since 1997.


Second Life has an explicit positive/negative/neutral rep system similar to eBay's. You can have an opinion about everyone, either by meeting them in person or through anything that they have built. You can rate their behavior, their avatar, and their builds separately, and rating costs a small amount. You can see both the positive and negative ratings of other people, as well as how they've rated others. Your rating factors into your weekly stipend, with the positive outliers receiving a significantly larger amount of L$ on a weekly basis.

While we've seen the expected rate mining, we've also seen that the positive and negative outliers strongly correlate with actual behavior. When the act of rating acquired a small, non-refundable cost, the rate mining did drop as well.

Perhaps the most interesting behavior has been the high social cost of negative ratings. Not the cost of having received a negative rating, which has some negative effect, but the social cost of giving a negative rating. The world currently is divided into two types of users: those who feel that a negative rating is the right response to a slight and those who won't give negative ratings. The non-negative raters also tend to place a much larger weighting on receiving a negative rating. We have tracked negative ratings over time and have noticed a gentle upswing in both rate and number of users who give them, which we think is healthy as the population continues to grow and diversify.


There's also Advogato, a news-feed sorta site with an experimental trust metric. I don't frequent the place, so I couldn't say anything else about it, but it might be interesting to look at. It's certainly different than any other reputation system I've seen.



Making an on-line reputation system is easy. Making it diddle-proof is hard. I've never thought of a system that would prevent the bad apples from cooperating to get a better rating than they deserve.

I didn't play UO for more than a month, but I remember reading about this sort of thing happening. PKs would (often with permission) kill each other to improve their own reputations, and so on. There are people who exploit eBay in the same way: they make a few sales to get a good rep and then start taking payments without delivering the goods. They can usually make several sales before the negative ratings pile up, after which they can simply make a new account.

Most games these days have an /ignore feature, which I use as a personal reputation system for the worst cases. As for positive reputations, I memorize guild names or individual's names (or use a /friend list if there is one). This works sufficiently well that I'm not convinced a universal reputation system is necessary or even desirable as it could encourage me to place trust in the system rather than the individual.

What I'd like to see instead is simply an expansion of the /buddy and /ignore lists. Perhaps one that allowed comments as a memory aid, e.g. "/ignore Roxordood Sold me an empty bag" or "/friend Cuteypie Tipped me 500k gold for making her a new sword" or "/ignore Trainsrus Repeatedly pulled dozens of swamp dragons to our team" or "/friend Healerman Is a very good cleric, saved the team three times" etc. Color coded names (either above the head or in a list) would show how *I* judged these people without worrying about their game-wide reputation. Another command would bring up all the comments I've made about a person.

If the comments were searchable by the staff, it could be of use to CS as well.


From my playing of UO, I'd have to agree with AFFA that the best system is one where I get to judge. My pipe dream has always been to be able to add a tag to anyone's name. So, I could add a tag [Griefer] to someone, and any time they show up, I get the tag. These tags would be stored client side rather than server side, so as not to be a source of lag. Further, I'd like to see simple tools for guilds to be able to share their tag lists. Thus, all who trust me can import my tag database and see the proper [Griefer] tags.
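A minimal sketch of this kind of client-side tag store, with the guild-sharing import (all names and behavior here are invented for illustration, not any shipped system):

```python
# Toy client-side tag list: character name -> tag, kept locally.
# Importing a trusted friend's list never overwrites first-hand judgements.

class TagList:
    def __init__(self):
        self.tags = {}  # character name -> tag label

    def tag(self, name, label):
        self.tags[name] = label

    def lookup(self, name):
        # Returns the stored tag, or None if we have no opinion yet.
        return self.tags.get(name)

    def import_from(self, other):
        # Merge another player's tags; our own entries take priority.
        for name, label in other.tags.items():
            self.tags.setdefault(name, label)
```

So a guildmate's [Griefer] tags flow in wholesale, but your own ratings stay yours.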

- Brask Mumei


Oh, Brask and AFFA's comments reminded me of something that I meant to mention in the post. The other significant software reputation coding system that works (kinda) is P3P -- Platform for Privacy Preferences. Indeed any of the W3C platforms--especially their porn rating mechanism, PICS--are explicit attempts to code for client-side "reputational" metrics of untrustworthy third parties. (Yes, I know they differ from this, but bear with me for a tick.)

Brask's idea of "sharing" the tag base is just like PICS's choice of rating mechanism. If you want to choose the Christian Coalition's rating schema, then you can. Alternatively, if you want the ACLU schema, pick that. If there were some schema instituted for reputation by competing reputational raters, then you might well see competition in rating agencies, à la Moody's, S&P, etc. Guild X creates a rating by collectively polling members, and you can choose this if you happen to think highly of Guild X.

Of course, the trick is in the defaults here, since if you default to the PKing guild's schema then you're gonna get a particular outcome.

But defaults and this type of rep system seem to me to be waay more tractable in VWs than elsewhere.
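Here's a toy sketch of the competing-rating-agencies idea, with the default passed in as a parameter (agencies and ratings are made up):

```python
# Each "agency" publishes its own ratings; the client simply chooses
# whose schema to consult, PICS-style.

AGENCIES = {
    "GuildX":  {"Roxordood": "avoid", "Healerman": "trusted"},
    "PKGuild": {"Roxordood": "hero"},
}

def rating(agency, name, default="unrated"):
    # The default matters: an unknown name falls back to whatever the
    # chooser considers neutral, which is where the "defaults" problem bites.
    return AGENCIES.get(agency, {}).get(name, default)
```

Note the same character gets opposite ratings depending on which agency you subscribe to.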


If you could overcome obstacles with processing and storage, you could get much more than simple reputation data from an avatar. Avatars *are* data, so you can know them in toto theoretically (again, ignoring storage and processing).

Which is interesting, because no one seems to have gone very far with this. While one can learn certain worthwhile things with a "look" or a right-click, this rarely goes beyond level, health, affiliations, etc.

I wonder if this is about privacy/conforming to RL expectations/limited imaginations/limited resources. Probably a mix of the four.

Oh, and fwiw, Richard mentions privacy issues at 681-82, and MUD-Dev circa 1997 touched on it in a thread at: http://www.kanga.nu/archives/MUD-Dev-L/1997Q4/msg01047.php



2-player laboratory games have shown that people who get ripped off will tag the offender, with glee, even at great cost to themselves, and yes even when the odds that they will ever meet that person again are nil. I should go look up the cites in the experimental game theory literature, but, well, it's the holidays.

If people will spend money to tag baddies, then it seems to me that having a simple tagging system with a high cost of tagging should get the job done.

Want to make the system ironclad? Make all trades asynchronous. Nobody's going to trade with someone they don't trust. Hence, having a bad tag really hurts. And having no feedbacks at all would be a problem too. That's why eBay works. The earlier comment, that people can 'game' eBay's rep systems, misses the mark, it seems to me. eBay has successfully become the largest market in the history of humankind, even though all its trades are asynchronous and anonymous. VWs should just port in the eBay system, in my view.

Of course, beyond that, I really really wish I could get more info on people. Example: I'm guild hunting in Horizons right now. How I wish I could get a listing of guilds by size, average age of members, number of kids per member family, degrees earned, fraction self-reporting a strong interest in role-playing, and so on. I know, I know - the devs don't have that info. At least not perfectly. But they don't need it to be perfect. If you had players fill out a survey, sure, lots of people would lie. But aggregated over a guild, it's unlikely that, say, a guild where the median self-reported age was 30 actually had younger members than another guild with median self-reported ages of 20. You can have noise in the data and still preserve aggregate information (which is all we need for things like grouping and guilding).


Dan Hunter>Well thanks, Richard, for pointing this out in your usual welcoming style. I've always been a little slow.

It's not the slowness, it's the fact that you come at everything like it was majorly new. Reputation systems have been beaten to a pulp on MUD-DEV, but you talk as if you didn't know about it, or even care to look.

Some time ago, I berated you about this practice of ignoring what's happened before, and you basically told me that you'd made an active decision not to look at olde ideas from olde virtual worlds by olde developers whose olde-fashioned concepts were no longer relevant. Fair enough: so long as the readers of the blog know what they're getting. At that point, I stopped posting. Then you engaged Dave Rickey as an author, so I started again: I thought you'd recognised that there is value in experience. Now I see you back to your old ways, so I'm berating you again.

>part of the reason for posting this was to query whether we have moved past reputation systems circa 1997.

This obviously wasn't a big enough part of the reason for asking that you thought you should mention it - well, not until after it was pointed out that there's a ton of stuff about this out there already.

>your field hasn't progressed in software coding of reputation systems since 1997.

And you're the man to save it from its folly?

Why don't I just list all the other big issues that have been discussed over the years in hundreds of publicly-accessible postings so you can recycle those here, too?

Greg Lastowka>Oh, and fwiw, Richard mentions privacy issues at 681-82

There's a bit buried on page 419, too, but I don't give the concept a great deal of coverage. I should have given it more.

>MUD-Dev circa 1997 touched on it in a thread at: http://www.kanga.nu/archives/MUD-Dev-L/1997Q4/msg01047.php

There's an earlier one at http://www.kanga.nu/archives/MUD-Dev-L/1997Q3/msg00402.php, but most of the big discussions came later. The most recent one began with http://www.kanga.nu/archives/MUD-Dev-L/2003Q3/msg00367.php and contains some interesting links to research in the area.



Seems like one critical difference in the way TSO implemented its reputation system is that it governs access to game content rather than moderating player interactions. The difference and its profound implications are so astoundingly obvious (to me) that I'm not sure what others might be missing.

The /. and eBay reputation systems do not in any way limit the content accessible to any participant. Period. Those rep systems specifically provide a means of collecting information but leave ALL decisions on how to use that information in the hands of the users, according to how they set their filters and what weight they choose to give to others' opinions. As mentioned above, this is still gameable, but only marginally.

From what I understand (as a non-player) TSO has applied their reputation system in a different way by using it to directly govern some of the game content accessible to the players? (If I'm wrong about this please educate me. ;-) If this is correct, then it's no wonder griefers are using it to, well, grief others. Is this not glaringly obvious to anyone else?

eBay and /. style rep systems collect and distribute information (which can be gamed) but leave all action on that information in the hands of the user. TSO takes that information and uses it automatically to change the user's game experience. The PERFECT griefer tool.


Richard> Why don't I just list all the other big issues that have been discussed over the years in hundreds of publicly-accessible postings so you can recycle those here, too?

This sounds like a wonderful idea, Richard, and I say that without irony.

If this blog is useful at all, its usefulness lies partly in a certain tension between the areas of expertise of its participants. Some of us are experts in law or economics or psychology or sociology who are interested in what's going on in virtual worlds. Others are experts in virtual worlds who are interested in how these other fields might apply to their specialty. It might be nice if each set of experts were fully versed in the literature and concepts of each other's field, but it also might not be very interesting, and it's never going to happen in any case.

Instead, this blog exists, ideally, so that all these various experts can enlighten and incite each other with their varying perspectives and stores of knowledge. So if there are previously-discussed "big issues" that are not being discussed here, then ideally those of us who are in a position to know this should be posting some entries on them, issue by issue. And where others post about them, those of us who are in a position to *summarize* the relevant prior art -- whether it's in economics, law, sociology, or MUD design -- should be doing so in their comments.

In other words, yes, there should be lots of recycling going on here. And everybody who can helpfully add to the mulch is more than welcome to pitch in.


AFFA said: What I'd like to see instead is simply an expansion of the /buddy and /ignore lists. Perhaps one that allowed comments as a memory aid

I agree, and these comments/tags should be available to other players within the game. I know in some games I end up tagging a lot of people with /ignore if they are being pests in large areas. Would it help people if they could see how many people are ignoring a specific player through the command? For instance, you have c00ld00d running around a common area in a game, being a typical nuisance to everyone. You would be able to bring up his profile, and see that 234 people have him on their ignore list. That may help other players get an idea of this person's actions in the past. Maybe an additional rating on scams or something could be useful.

All the UO-related posts bring back some fond memories ;) I remember the days of the 'notoriety player killers' (NPKs). You basically had all the jerks with the best notoriety, and all the good guys with the worst notoriety b/c they had to kill the jerks all the time...ah, those were the days!


Every few years, I re-read the entire MUD-Dev archives (well, selectively, I skip posts as I go). I urge all of you to do likewise. Seems like Richard already does, or else is blessed with a phenomenal memory. ;)


Raph>Every few years, I re-read the entire MUD-Dev archives.

That's great. I try to keep up with game theory, economics, positive political economy, and cultural evolution theory. I guess Dan reads things about law and Julian keeps his finger on the pulse of new technology. Raph, Richard, and Dave know quite a lot about MUD issues. That's great! It means we all can learn a lot by collaborating, so, let's collaborate. You collaborate by sharing expertise when it offers a fruitful perspective on somebody else's questions.

Saying 'You dummies obviously haven't spent your life in my area of expertise,' is not collaborating. It hurts the process, griefs the community, and turns what might have been a fruitful exchange of unusual ideas into the same kind of squabbling hen-house of prestige-obsessed geeks that most academic departments are.

So look, I don't care if this was talked about ten million times on MUD-Dev. All I care about is: What was the conclusion? What did you guys come up with? I can (and did) share with you what the game theorists came up with, and I didn't do it by rolling my eyes and proclaiming your incompetence for never having read Nash, von Neumann, or Morgenstern, let alone Fudenberg and Tirole, Kreps, Binmore, Schelling or countless others who have done this issue to DEATH in the areas I have studied.

Richard> It's not the slowness, it's the fact that you come at everything like it was majorly new.

Do you have any idea how many times I have gasped when reading commentary from game developers about how their societies work? At times, they seem shockingly ignorant of literally centuries of thought in political philosophy that speaks with the precision of a laser beam to the issues they're addressing. But it makes sense: their gig is game design, not political philosophy.

Again: I firmly believe that developers have some perspectives here that Hobbes and Hume could not possibly have achieved. For the first time in history, we have people who don't just study social worlds, they build them from scratch. That's really exciting! Devs undoubtedly come up with some really cool thoughts. I'd just like to know what those thoughts are, without having to become a developer myself. Look, I'm doing you all a favor, trying to save you from the Hell that is contemporary economic research. Leave that to me, and I'll leave developer expertise to you, and we'll all be happier!


Summaries sound great. I read through some of the discussion on the link Richard provided and it didn't seem to me like the issue had been discussed and conclusions drawn. So -- what do experts and research in every field say about this? On the other side, what ideas occur to people (that may already have been suggested and/or discredited, but maybe not)?

The discussion of the ranking (like a chess ranking) from ATITD sounded interesting, though I couldn't quite figure out how the reserve bits worked.

The comments above about an expanded buddy / ignore system with world viewable comments sounded like what my own imagination comes up with when I ponder what kind of reputation guide would be ideal for an Everquest-like, massively-multiplayer game with no PvP, which is the kind I like best.

Here's what I came up with (discl - no research done to see prior work)

1) Allow users to buy a main account and discounted secondary accounts. These would allow a second or additional simultaneous login, and would allow the game's manager to track which people belong to the same family/group/identity. For the sake of reputation, these people's rankings of one another could be factored out if desired. Main accounts should be more expensive than currently, say 50% more per month, and additional accounts should be about half what an account costs now. That way two accounts would cost about the same as now, and more than two would be less. Cell phones already have similar plans to this, as does AAA membership, etc.

2) Allow players to assign some number of levels of trust/distrust to other players. A radial menu on the person similar to the radial menus on the Sims would be a nice interface, but there are other options including a purely text based one or a window that pops up when you /buddy or /ignore someone.

At all trust and distrust levels you could give a comment on the person if you wished. This comment could be edited, but edit history on the comments should be retained. You could also give the person various numerical ratings. I suggest rating them on honesty, manners/friendliness, and skills. The scale could be 3 or 5 point, with the middle being default/average.

The levels of trust would allow you to:

- put a person on your buddy list

- allow a person to send you tells in a special color or to open in a new window

- allow a person to log onto your character(s) when you weren't on at one of three levels:
a) cannot touch the character's inventory, or cross zone lines, or communicate. Shows the character with a special flag "bot" meaning the real player is not on. Can memorize and cast spells, and use most of the character's innate abilities.
b) can manipulate the character fully, including communication and zoning. Can loot items, accept trades, add items to inventory, and move them around, but cannot remove, trade, or purchase items. Otherwise like a)
c) full access to character's inventory; this is for people most fully trusted.
The bot flag should be a different color for each level, maybe yellow for a, white for b, green for c. There should always be a way for someone to view who is logged in as the character, maybe a /whobot command that would show the official game name of the person (which might or might not be the name of their character or one of their characters, but would also be viewable from the character). A person could get additional official game names by using a secondary account. Potentially only people at a certain trust level or above would be able to see your game name, but when you were botting, it would be those the owner of the character trusted who could see it.

- allow others access to your bank and/or buildings' storage rooms. This should also have a/b/c levels, where you can give someone access to area a of your bank/storage, areas a & b, or all three.

The levels of distrust would allow you to:

- put someone on ignore
- put someone on a ban from you, where group invites would never be seen if that person was in the group, though you could instead have the invite show a message: "if you wish to join this group, you will have to remove your distrust level 2 on Evildude". Also, if you were in the group, that person could not be invited, and the group leader would receive a similar message ("you can't invite Evildude to the group because he is on Protag's ban list"), letting the group leader decide between the two. Also at this level, their spells would not land on you nor yours on them. If the game can support it, it would also be nice if they could not affect a monster that you have engaged, or vice versa.
- put someone on a raid ban so not only would you never join a group with that person, you would also not raid with them

Notice that these lower levels of distrust carry somewhat of a penalty, not just for the person receiving them, but for the person giving them.
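The graded a/b/c access levels described above could be sketched as a simple permission table (the action names here are invented for illustration):

```python
# Graded character-access ("bot") levels: each level grants a set of
# actions, matching the yellow/white/green flags described above.

PERMISSIONS = {
    "a": {"cast_spells"},                                  # yellow flag
    "b": {"cast_spells", "communicate", "zone", "loot"},   # white flag
    "c": {"cast_spells", "communicate", "zone", "loot",
          "trade", "remove_items"},                        # green flag
}

def allowed(level, action):
    # Unknown levels grant nothing, which is the safe default.
    return action in PERMISSIONS.get(level, set())
```

The server would consult this check before letting a borrowed character act.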

Now to properly analyze what this data gives, you'd need some fancy analysis. I think allowing users to view a small summary with a color (green/white/yellow/red) based on detailed data mining analysis, along with a link to detailed raw data on a web page, would be the best mix of ease of use and full information.

A link, easy to find on the game's web site, should be publicly accessible at least for people with game accounts, and allow you to view average ratings, all comments, and aggregate trust/distrust numbers, and filter them by: guildmate, former guildmate, same account, analyzed friends sphere(s) if any (a bunch of people who all rate one another highly, of which the person is a member), and analyzed enemies sphere(s) if any (a bunch of people who mostly rank one another highly but who all rank this person badly).

You should also be able to click any comment listed and see stats on the interaction between the player rated and the player who wrote the comment: hours grouped together, total hours played, guild membership history, hours raiding together, hours in same zone not grouped, all trades done between the characters. This will give you an idea if someone is rating someone they completely don't know or not.
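A sketch of the aggregate view described above, filtering out ratings from people who barely interacted with the person being rated (field names are hypothetical):

```python
# Average a character's ratings, optionally dropping ratings from raters
# who spent too little time grouped with them to have an informed opinion.

def average_rating(ratings, min_hours_grouped=0):
    relevant = [r["score"] for r in ratings
                if r["hours_grouped"] >= min_hours_grouped]
    return sum(relevant) / len(relevant) if relevant else None
```

Raising the interaction threshold is one cheap way to discount drive-by ratings from complete strangers.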

This would take a LOT of data and analysis.

It would also probably only be gameable in the way that any trust system, even a real-life one, is gameable: be actually trustworthy right up to the point where you cease to be, in a big way... People game eBay that way; they run scams that way. I think there's no way to keep that from working, not with reputation systems of any kind.

Maybe hardware that will support this sort of thing will be around soon. I suspect if it is, and it is implemented, it will be refined and adjusted till it works reasonably well, then left alone and argued over interminably. :)


Well put, Ted!

While the irony of flaming occurring in a thread about reputation (on a site devoted to online worlds and communities) is amusing, it is sad that it has overwhelmed discussion about the original posting.

Dee> Much like TN of late, commented reputation systems seem to rapidly devolve into flame wars. This is why we haven't added it to SL (although it has been discussed).

Brask> I like the personal comment system as well but we haven't had the free time to get to it yet.

Dan, Bryan> The "choose your rating scheme" idea dates back quite a ways. I know that I've heard Randy speak about it, so it may go back to the Palace or Habitat. It seems like an excellent solution, although it requires (in the SL context) a fair amount of extra data to be slogged around.

Greg> What do you mean by "in toto"? Do you mean a larger picture of their actions in world or something else?



Well, I wasn't trying to suggest quite the tone that Ted seems to have taken away from my post. :( I had several contradictory reactions to what he wrote, that are quite off-topic and probably merit a discussion all their own. But I am putting it here anyway!

1) Who's invading whose territory? The question here is who has to do the reading. :) This can be a tricky question for this blog's community, really. Each group will tend to come at things with an assumption of basic knowledge. If you start posting about how GNP is calculated, I would imagine that Ted will say "well, you really need to look up the formula, because you have it completely wrong." Should there be some expectation that when you're discussing someone else's field, that you make some effort to grasp the basics first? And who decides what the basics are?

Rep systems are an oddball one, since there's some degree of writing on their use within the gameworlds, a much larger degree of it within the fields of game theory and economics (the work by Yamagishi I referenced), and a lot of work on trust networks in social software circles (Advogato, lots of technical peer-to-peer work, a bunch of books on moderation systems, a whole blog called "Everything in Moderation" by Tom Coates, etc. etc.)... it's interdisciplinary from the word go. Really, despite Richard's suggestion and mine, reading the MUD-Dev archives barely scratches the surface of the problem.

2) The small, mean, nasty part of me goes, "but I DID have to go read Nash." In fact, I read everything all you gents write, and you're not writing design material. :) The designers do end up going to the source material in economics, anthropology, social network theory, game theory, sociology, and so on, pretty much constantly--we don't really have a choice. So the small, mean, nasty part of me has little sympathy, basically. ;) Virtual world designers (well, lead ones anyway) are generalists through sheer force and through lack of choice...

3) The flip side is, obviously, we're trying to collaborate here. Digesting information is highly valuable (and in fact, it's what most of my personal reputation as a writer in the field rests on--digesting and classification, at any rate). Providing those digests may be an implicit price of entry to the discussion. And probably SHOULD be.

And here's where Ted's complaint makes perfect sense. The challenge is how to provide said summaries. This gets back to a discussion that I have had roughly once a year with J C Lawrence, the moderator of MUD-Dev: is it a research institution or a teaching institution? I think the same question could profitably be asked of this blog. Both are forms of collaboration, but they have very different ends.

OK, so all that said, I'll try to provide my overall take on rep systems in online worlds in the next post.


Technical version:

Rep systems online have traditionally attempted to provide solid guidance to new arrivals in the world. They have primarily been a profile feature tracking past interactions and actions on the part of users (cf. "profile" as a technical term) and have primarily been used in worlds with high degrees of not merely interaction but *imposition* on other users, typically manifesting as various forms of conflict, though not necessarily combat.

Reputation is typically defined even in the social software world as user-entered profiling information, as opposed to server-generated profile data. Each interaction is essentially rated by the other participants.

Sometimes the interactions are defined formally:

- UO used an instance of one player killing another
- eBay uses a completed transaction
- trust networks use exchanges of verified data
- There.com (and other systems) show you how many people have ignored someone

And sometimes they are not, and ratings are derived from a self-refreshing pool of points to be spent (rep points as currency):

- Slashdot's moderation system boils down to this, albeit with many wrinkles
- many systems of "karma points" and the like have been attempted on muds, whereby each avatar--critically, not each player--gets a stipend of points to assign to others
- variants where you purchase points, as done in Second Life, or earn them, perhaps as part of a zero-sum game, are also common and attempted in several muds
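The points-as-currency pattern in that last set of bullets might look something like this sketch (stipend size and reset behavior are assumptions, not any particular mud's design):

```python
# Self-refreshing pool of rep points: each avatar gets a stipend per
# period and spends it rating others. Unspent points are replaced on
# refresh, not accumulated -- one common wrinkle in such systems.

class RaterPool:
    def __init__(self, stipend=5):
        self.stipend = stipend
        self.remaining = stipend
        self.given = {}   # target -> net points received from this rater

    def refresh(self):
        # Pool resets to the stipend; hoarding across periods is impossible.
        self.remaining = self.stipend

    def rate(self, target, points=1):
        if points > self.remaining:
            return False              # out of points this period
        self.remaining -= points
        self.given[target] = self.given.get(target, 0) + points
        return True
```

Making the pool finite is what turns rating into a cost, which is the same lever Second Life pulls by charging L$ per rating.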

There are, broadly speaking, two major use cases for reputations:

- assisting someone in assessing the likely success of a future transaction with a stranger

- assisting someone in assessing the likely success of a future transaction with an individual who is a node in the same network as you

The distinction between these two cases is critical, because in practice, the second is a bounded problem and the first is not. In practical terms, one is a model for integrating new nodes into a network, and the other is a model for detecting damage done to a network.

There has now been sufficient research into the first use case (integrating new nodes) to demonstrate that positive-only feedback systems can result in a high degree of accuracy, but only over time. Negative feedback systems tend to dissolve into chaos.

In 1997, when reputation systems were first really hitting the mainstream in social software and in online worlds (UO drew explicit inspiration from eBay and Slashdot), this was not known. In fact, as recently as last year it was not widely known in design circles (cf. The Sims Online, whose negative reputation feedback is now used as an offensive weapon, and Cory Ondrejka's comments on the use of negative reputations in Second Life).

The second use case depends on second-order trust effects and is primarily intended for identifying compromised nodes within a network; this is the Advogato model as well as many peer-to-peer networking models. This model functions well, and is based on trust through verification of other trusted nodes. It's a second-order case, because it depends on your valuation of the points assigned to a node's reputation based on the source of the points. A likes B, and I like A, so I will trust B (though it is done at greater removes and with much greater complexity).

It regularly manifests informally, as noted in the original article here (e.g. guilds, among other trust networks), but is rarely implemented otherwise:

- in part because the informal channels work well for advanced players, and guilds are practically endemic to the problem space

- and more critically, because the model depends on second-order trust links (e.g., having a known trustworthy third party or group) and is therefore not useful to novices to the world, who are the primary audience for the system in the first place and who lack the additional nodes needed to verify trust.

- lastly, it also depends on the problem being bounded; the difficulties in assessing an overall valuation when you are recursively tracing point valuations across multiple link distances become unwieldy very quickly. Speaking practically, more than one or two degrees of separation quickly becomes impossible to manage, and given the sizes of the communities we tend to have and the frequency of "islands" in the network map, the model may not provide a reputation for a given node in all cases.
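The second-order model ("A likes B, and I like A, so I will trust B") and its depth problem can be sketched as a bounded breadth-first walk over a trust graph. The halving of trust per hop and the depth cap are illustrative assumptions; real trust metrics like Advogato's are considerably more complex.

```python
from collections import deque

def trust(graph, source, target, max_depth=2):
    """Depth-limited second-order trust: returns a value in (0, 1],
    or None if target is unreachable within max_depth hops.
    Trust decays by half per hop (an arbitrary illustrative rate)."""
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, depth = queue.popleft()
        if node == target:
            return 0.5 ** depth
        if depth < max_depth:
            for neighbor in graph.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, depth + 1))
    return None  # an "island": no path within the bound, so no reputation

# "A likes B, and I like A, so I will trust B"
graph = {"me": ["A"], "A": ["B"], "B": ["C"], "stranger": []}
print(trust(graph, "me", "B"))         # -> 0.25 (two hops)
print(trust(graph, "me", "C"))         # -> None (beyond the depth bound)
print(trust(graph, "me", "stranger"))  # -> None (island)
```

The two None cases illustrate the bullet points above: raising max_depth makes the search explode combinatorially, and a true newcomer has no inbound links at all, so the metric simply has nothing to say about them.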

Lastly, it has been very evident that all forms of trust metrics are easily gamed. It has not been settled to my knowledge whether there is a critical threshold of size for a network within which the gaming factor becomes statistically insignificant. In most virtual worlds at the time that the experimentation was going on, the gaming factor was sufficient to completely distort the dataset, rendering the results useless. Slashdot has continued to refine the "karma whoring" prevention code, for example, and the statistics on the frequency of dispute resolution on eBay are frankly frightening. As in the real world, it may be that reputation systems have a "sweet spot" in terms of population size.

The problem here is that at their core, the main use for formal reputation systems is to allow users to take actions against other users -- run away, trade, ignore, ban, group with, permit entry to a house, etc. Informal memory-based systems do just fine for all non-critical interactions. Brask's comment is essentially moot; there's basically no use for a reputation system in a virtual world except to curtail access -- often with said curtailing done by other players.

Because of these factors, the discussion remains open on the game design front. Simply put, there have only been a handful of designs to even tackle the problem, and to my knowledge, every single one of them ignored some or all of the information presented above.

Caveat: the last time I researched this was two years ago, with some minor touchups when I dug into network theory last year.

Bottom line: we had a reputation system in the design of SWG, used in multiple places. We ultimately decided that word of mouth would work better than anything we could code.


Raph> Well, I wasn't trying to suggest quite the tone that Ted seems to have taken away from my post. :(

No, no, I didn't get my tone from your post, it just preceded mine, so I grabbed your opening quote. An earlier draft of my post (which was nastier in a number of respects, but nicer in this one at least) specifically validated Raph for trying to do some sharing. Holy Matilda, Raph, you have THE rep for being intellectual about design issues. You're certainly not the guy I would choose as the representative of MUD-Dev-er-than-thou designers.

And actually I am not trying to single anyone out (although in this particular thread I think everybody knows what I am talking about). I was trying to make a point about practitioner / theorist issues in general and re-focus the community, to the extent a post can, on the gains to be made here. There's just no point to any of us criticizing others for not having read something. None of us has enough time, and it would destroy the benefits of the diverse backgrounds here. So let's all read whatever we read and just bring the thoughts in here. Crikey, the last thing in the world we would want to do here is replicate MUD-Dev. It's outstanding as it is. What would be cool is if the academic and theoretic focus here could co-exist with MUD-Dev's practitioner group, to the benefit of everyone. That's my point.



I find that the effects of reputation in-game are most demonstrable in the compressed experiences of the *non*-persistent worlds of first-person shooters. These avatars are nothing more than mere shells, and there is little in the way of character to act as a filter between the user and any observers (nor, one might add, any roleplaying, though I have my arguments about the lack of that in even the most persistent of worlds. But another time for that...)

In the short lifespan (often measured in mere minutes) of a given instantiation of a single avatar, you learn a lot about the person on the other side of the screen. You understand their sophistication as conveyed by movement dynamics. You can usually peg an age range based on the mere fragments of conversation you might exchange, and other intangibles such as aggression and risk taking. Most importantly, you understand immediately if that person is capable of understanding an emerging team dynamic, and if he is willing to cooperate in that dynamic or will deliberately eschew it.

And the effects of negative reputation are clear. In a compressed time frame, with little investment even possible in the avatar, there is almost no barrier (other than a few minutes' connection time) to switching servers. For those in control, kicking and banning players is common (for offenses such as cheating or team killing), but this is not an option for casual players. Thus, when a particularly grievous offender begins to hassle a set of players (say, a spawn camper who through a combination of equipment and position is difficult to kill even after multiple lives spent in the attempt), more often than not they simply drop off and go somewhere else. Encountering these persons in a clan of like-minded individuals is often fascinating in a morbid sort of way -- for example, the experience of a server configured to give absolute and near-unchallengeable advantage to the hosting clan, turning what is normally a contest into a mechanical killing field.

What they get out of it – from stats manipulation to the juvenile pleasures of tormenting those unable to resist – is no matter. How the community of gamers behaves in their presence is more interesting. There are those that succumb after n number of point deaths and leave for more balanced ground. But among those that will not surrender, and continue to attempt to best near impossible odds, a fascinating dynamic sometimes emerges, clearly visible in the compressed lifespan of the avatar. They feed off of the cycle of death, and seek to use the slaughter of those around them to gain what little advantage they can. In short, they are lifeboat cannibals, eking a few moments more distraction and the pause between reloads. The reputation of these individuals is not explicit in any sense – neither software coded values nor even mere commentary is possible when the shell they inhabit may change again – in both face and name – a dozen times before one might come across them again. But you know them, even in servers configured as more even competition. It is an intangible, almost incapable of being defined. But you most certainly know them.

I do not find room for this in the philosophy of reputation as a carefully coded and ranked commodity, one that might be exchanged at will between accounts or husbanded for momentous occasions (events, one must note, that even the most persistent of worlds are so often sadly lacking, as others more insightful than I have commented with phraseology such as “the lack of history”). Until you provide me with a reputation system that takes into account these sorts of intangibles, the momentary flashes of dark and all too human psychology in a simulacrum of extremis, I shall place no faith in it.


How to avoid flame wars in reputation comments.

1 - make it a bit of work to go view the comments. In my example they were on a web page. They should not be visible until someone asks to see them, especially in a raw "all comments" format as that would likely be extremely long.

2 - make it easy for people to pick and choose which comments they will see. Let readers sort the comments lowest first or highest first, or view just those from their own friends (what do the people I know have to say about this person?). In particular, make it easy to ensure that comments from anyone on a player's ignore list don't show up by default. Eventually you could get most of the most egregious flamers onto your ignore list (or an ignore-comments list, if you didn't want to ignore them in game).

3 - Each person only gets to make one comment per other person. So there couldn't really be a flame war as it would be just a single flame. They could keep editing the comment to say additional inflammatory things if they wanted. But you'd be able to click the link of the flamer and see whether they flamed everyone or just this person -- whether they were perhaps distrusted by this person's associative group, or vice versa -- the entire feud could be really compelling reading. I suppose you'd have to put in your EULA that you were not responsible for the contents of others' comments.
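Points 2 and 3 above could be combined in a small comment store: one editable comment per (author, subject) pair, with reader-side filtering against an ignore list. A minimal sketch, with all names invented for illustration:

```python
class CommentBook:
    """One editable comment per (author, subject) pair, with
    reader-side filtering against an ignore list."""

    def __init__(self):
        self.comments = {}  # (author, subject) -> text

    def post(self, author, subject, text):
        # A second post from the same author replaces the first,
        # so there is only ever "a single flame" per person.
        self.comments[(author, subject)] = text

    def view(self, subject, ignore=frozenset()):
        """Comments about `subject`, hiding ignored authors."""
        return {a: t for (a, s), t in self.comments.items()
                if s == subject and a not in ignore}

book = CommentBook()
book.post("Troll", "Hero", "you stink")
book.post("Troll", "Hero", "you REALLY stink")  # an edit, not a new comment
book.post("Friend", "Hero", "great healer")
print(book.view("Hero", ignore={"Troll"}))  # -> {'Friend': 'great healer'}
```

Because the dictionary key is the (author, subject) pair, the one-comment rule is enforced structurally rather than by moderation.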

Another idea...
could comments be subject to rating (vaguely like slashdot metamoderation)? Without an identification of /who/ the comment was commenting on, list two comments each time someone logs on and ask them to rate the comment for how appropriate it is. This way the worst flames and bashes could be downgraded aggregately. You'd wind up with some kind of community standards, anyway, for what was acceptable in a comment vs what was not.

In addition, allow users to report comments if they are against the EULA, for example if the comment contains racial slurs, in a similar way as they report the same thing said in game, with the same consequences.
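The metamoderation idea above could look something like this sketch: at login a player is shown a couple of comments (with the subject stripped out) and rates their appropriateness, and heavily flamed comments sink in the aggregate. The thresholds and vote counts are invented for illustration.

```python
import random

class CommentMetamod:
    """Slashdot-metamoderation-style community rating of comments."""

    def __init__(self):
        self.ratings = {}  # comment_id -> list of 0/1 appropriateness votes

    def add_comment(self, comment_id):
        self.ratings[comment_id] = []

    def sample_for_review(self, k=2, rng=random):
        # Pick comments to show at login, without identifying who
        # the comment was about.
        ids = list(self.ratings)
        return rng.sample(ids, min(k, len(ids)))

    def rate(self, comment_id, appropriate):
        self.ratings[comment_id].append(1 if appropriate else 0)

    def is_buried(self, comment_id, threshold=0.5, min_votes=3):
        votes = self.ratings[comment_id]
        if len(votes) < min_votes:
            return False  # not enough community input yet
        return sum(votes) / len(votes) < threshold

mm = CommentMetamod()
mm.add_comment("c1")
for ok in (False, False, True):
    mm.rate("c1", ok)
print(mm.is_buried("c1"))  # -> True (1/3 appropriate, below 0.5)
```

The aggregate threshold is what produces the "community standards" the comment describes: no single rater buries a comment, but a consistent pattern of downvotes does.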

Any comments on my suggested system for letting users log onto others' characters? It is something that annoys me in EverQuest: you can't tell if it is really the person. Most people have conflicting desires: to let their friends use their characters' necessary or very important abilities, and to keep it clear when it is really them vs. when someone else is playing their character. For example, it would be a great improvement if you could not see guild chat when you had no characters of your own in the guild of the character you'd logged into. I know the game designers don't want people to share their passwords, and would prefer it was always the same person playing the same character, but there are other pressures preventing that from always being the case -- and it would help with the problems that ensue if the code would simply support sharing character access *with logging of who accessed at what times* (and, if possible, what they did), as well as letting others know with a tag that it is not the actual character owner *and who it actually is, in some cases*. Have any MUDs ever had a system like that?


Ted Castronova>Saying 'You dummies obviously haven't spent your life in my area of expertise,' is not collaborating.

Neither is saying 'You dummies know nothing of your own area of expertise. Let me help'.

>What did you guys come up with? I can (and did) share with you what the game theorists came up with, and I didn't do it by rolling my eyes and proclaiming your incompetence for never having read Nash, von Neumann, or Morgenstern, let alone Fudenberg and Tirole, Kreps, Binmore, Schelling or countless others who have done this issue to DEATH in the areas I have studied.

You, the expert, see a bunch of non-experts stumbling around blindly, and help them out. This is good for both you and the non-experts.

However, had the non-experts decided to help YOU out with their thoughts on the subject, maybe then you WOULD have rolled your eyes?

Is Dan Hunter the expert or the non-expert when it comes to reputation systems in virtual worlds?

>Do you have any idea how many times I have gasped when reading commentary from game developers about how their societies work?

I think we all gasp at the ignorance of game developers from time to time..!

>But it makes sense: their gig is game design, not political philosophy.

So if they were to tell an audience that included political philosophers what they thought, as if what they thought was at the forefront of current political philosophical thinking, might they not expect those political philosophers to respond less than enthusiastically?

>For the first time in history, we have people who don't just study social worlds, they build them from scratch. That's really exciting!

I know - and from the developers' point of view the fact that all this body of knowledge is out there to help them is also exciting. It's one of the reasons this blog is compelling reading.

>Leave that to me, and I'll leave developer expertise to you, and we'll all be happier!

This is precisely why I was sarcastic to Dan. He isn't leaving the developer expertise to the developers; he's writing like he had the expertise himself.

Maybe it's just my misreading of the tone of his post...



Is it just me, or has the average temperature of this blog's environment gone up by a power of ten in the past week/day?

Putting a reputation system in is going to flag any achiever's attention, as UO demonstrated. If there are numbers on a scale, someone usually gets around to trying to max their value. (I don't know UO well or its history, but I somewhat suspect that some of the labelled griefers were actually achievers that happened to be involved in the same activity.) It's not possible to have a meaningful (computer-mediated) reputation system without someone trying to max it out.

That said, it's possible to minimize that, I suppose. Perhaps an off-center bell-curve, where those who DO max out the reputation number actually lose repute? (It would take one more notch of dedication to stay in the actual max range, rather than above it. That might make the difference in filtering out the worst of them. On the other hand, it becomes a question of whether they're maxing out the number or their popularity.)
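The off-center bell-curve could be sketched as a Gaussian transform of the raw number: displayed repute peaks at a "sweet spot" below the raw maximum, so players who max the number out actually lose standing. The curve shape and constants here are illustrative assumptions, not from any shipped game.

```python
import math

def effective_repute(raw, sweet_spot=80.0, width=15.0):
    """Displayed repute in (0, 1]: a Gaussian centered below the raw
    maximum, so overshooting the sweet spot reduces standing."""
    return math.exp(-((raw - sweet_spot) ** 2) / (2 * width ** 2))

# Staying near the sweet spot beats maxing the stat out:
print(effective_repute(80) > effective_repute(100))  # -> True
```

This is the "one more notch of dedication" property: holding a value inside the band is harder than blindly maximizing it, which may filter the most mechanical achievers.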

Personally, I would agree with the eBay/Amazon/Slashdot method of giving the data to the players to interpret it how they wish. That makes it so that the players' gut feeling, seconded by data, can be the prime determinant. Short of National Identity Cards, I see no way for computer-mediated reputation systems to work.

-Michael Chui
Thinking up the Wild and the Crazy


Raph> Cory Ondrejka's comments on the use of negative reputations in Second Life

I apologize for the lack of clarity, but SL doesn't serve as an example that negative rating schemes devolve into chaos. As I said, outliers in both directions have reputations that pretty accurately reflect behavior. However, whether or not to use negative ratings is currently a point of debate within the community (i.e., are positive-only ratings sufficient?), with newer users more comfortable giving negative ratings. So we may serve as a proof point if the increase in negative ratings causes a downward spiral within the reputation system, but currently that is not the case.



Cory, I wasn't actually offering SL as evidence of spiraling into chaos, since you have both positive AND negative ratings in the game (in fact, you have quite a nice implementation of positive reputations on multiple levels).

However, based on the mathematical models, you should expect to see negative ratings being used as offensive weapons, abandonment of identities to clear negative reps, and a host of other such side effects. Whether those issues become a make or break issue for having the negative reps is really up to you guys. My personal take is that it's probably not, based on experience & what I have read.

In addition, I'd note a common pattern: participants in a positive rep system with a rating scale come to define low ratings as "negative" ratings. On eBay, SWG's forums, or Slashdot, for example, where you can rate from 1-5, ratings of 1 are commonly assumed to be bad, even though it's a positive-only system and 1 is supposed to mean "mildly good." So a rating scale will devolve to a positive/negative system even in the absence of explicit negatives, rendering the need for explicit negatives even less apparent.


Sure, so we fight back in the usual way of tracking users, credit cards, IP/MAC combinations, etc etc. I think that we'll ultimately make a very interesting test case as the early adopters become more and more of a minority.

What is interesting to me in a purely social sense is the desire of many of the users to make the system positive-only by negatively incenting negative ratings. Is this common to early-adopter populations?



Well, come to think of it, I have tended to see it in early adopters, but not in more mature populations. But I think it's a size-based thing as well. Peer pressure works so well in smaller groups that there's not much call for explicit punishments as they end up seeming excessively harsh.


As a common man with interests in VW I ask the following question:

Can current real-world reputation best practices be coded in VWs?

Example #1: Financial networks such as banks and exchanges

Some of the mechanisms I can think of are: unique IDs, trusted-node mechanisms, external third-party verification, insurance, brand identity, etc.

Not sure how banks work, but they sure have a good system for minimizing counterparty risk. However, I think external third-party verification/insurance seems to be a key factor.

Example #2: Licensed Professionals such as lawyers (aka modern guilds)

Modern professional bodies view their “guilds” as a brand with unique identities that they work very hard to maintain. You have to be certified to earn a license, and you have to re-certify regularly to maintain it. There are well-established, government-enforced or self-regulated complaint resolution systems and other practices to uphold the reputation of the “guild.”

So based on Raph Koster's excellent technical overview of the issue, I think it is best to just provide as much data as possible for players to create reputation maps/guides/reviews. Let the players decide how best to use them. If players have the ability to quickly compare and contrast the different maps without going into Excel, so much the better.

Word of mouth is good. Ways to increase its velocity are better.

Applying this idea to Second Life, you would get something like a Lonely Planet Guide to Second Life or one of PCMag's great feature comparison tables.

Happy Holidays,

Frank H. Lin


Michael Chui wrote, "Is it just me, or has the average temperature of this blog's environment gone up by a power of ten in the past week/day?"

They said people were too friendly at the "State of Play" conference....

Anyway, I'll try to take a non-flaming position on the flaming. :)

Part of the frustration Richard undoubtedly feels is that of deja vu. Every so often you have a new group of people (MUD admins, MMORPG developers, outside researchers) coming along and striking up the exact same topic. I'm sure if someone wanted to dig through Usenet you'd find a reputation system discussion that pre-dates the MUD-Dev discussion.

The veterans are simply getting sick of every new group coming along and acting like their discoveries and ponderings are some shocking new revelation when in reality it's old hat that's been discussed many times over the years. Richard, being the most experienced of us, is showing his frustration at seeing practically nothing new under the sun for the past 30 years. Jessica Mulligan expressed the same thing in her last entry (for a while) in her "Biting the Hand" column (http://www.skotos.net/articles/bth.html). I'm beginning to feel the same way even with my relatively meager decade of experience with online game development. These are the same topics discussed in the same ways, only by different people. The names change, but the topics stay the same. This is a terrible shame, because it means people feel the need to reinvent the wheel before we can talk about more advanced topics and advance the state of the art.

Further, virtual world designers work in an industry that is proud of its ignorance of history. I've heard more than one development team working on an upcoming title proudly declare that they've never played a text MUD, and that text MUDs are irrelevant to the development of "real MMORPGs". Such ignorance is just devastating, especially since it comes from within our own ranks. We're also the only medium I know of that takes almost no steps to preserve its history. Ancient games that delighted previous generations are being lost because of the inaction of developers. This is true for single-player games as well as online games; I can't accurately go back and play the original NWN on AOL, just as it's not easy to load up some of the Apple II games I enjoyed as a kid. (Ironically, it's the oft-maligned emulator community that's ensuring these games don't fall into complete oblivion.)

As Raph pointed out, it's especially frustrating because the virtual world designers HAVE to read a lot of basic stuff in many areas in order to do our jobs competently. I was blessed with a Liberal Arts education, so I have a reasonable foundation for a lot of virtual world issues, myself. I took classes in economics, history, mathematics, statistics, etc, etc, etc. Hell, I'm *still* trying to expand my knowledge on the history of things I have to deal with on a day-to-day basis.

So, when an experienced developer tells you to "RTFM" in response to a topic, they don't mean it out of spite. They're just trying to spare themselves the same discussion they've had seemingly countless times over the course of their career.

Oh, and I'm putting this next bit in bold for emphasis:

Anyone on here that hasn't read Richard Bartle's book Designing Virtual Worlds should stop posting, stop even reading this blog and go buy it, read it, and learn it. This book describes the minimum level of competency you should have when discussing design issues for virtual worlds. It's also an excellent book and very readable.

(And, no, I don't get kickbacks from the sales. I think it's really that good.)

Richard wrote, "I think we all gasp at the ignorance of game developers from time to time..!"

Hell, I gasp at my own ignorance when I go back and read some of my earlier notes on various topics. As I grow more informed and more experienced, my views change. But, that's a good thing; it shows that I'm still learning and that I'm not dead yet. ;)

In short, I read this blog because it strives to show a level of intelligence above and beyond most normal online chat boards. I think that the people on the design side of the fence would just appreciate the researchers making a little bit of effort to catch up to what we've already done. Especially since we have resources such as MUD-Dev and Richard's book, it's not hard to get up-to-speed on things.

Okay, now to attempt to be marginally on-topic:

One interesting thing about informal reputation systems is the default reputation people initially assign to others. In Meridian 59, the PvP focus of the game often encourages people *not* to trust other people. If the last few "newbies" you helped were just alternates of sworn enemies, building characters to later attack you, you're going to be a lot more jaded when it comes to helping other newbies. This ties into Raph's post about the use of reputation systems by newbies. I think it's also interesting to look at how experienced players use these systems to rate new arrivals, even an informal word-of-mouth system in a smaller game like M59.

My thoughts,


The ‘temperature’ here has been pretty high, and no amount of backpedaling or explanation can change that. This post:

Richard Bartle> Congratulations. You have now reached 1997 in the MUD-DEV archives.

...was simply rude. Nothing else can or need be said about it. Several posters in another thread have repeatedly referred to one person’s work as “crap”. Again, simple rudeness. The rules at public fora like this are quite simple, and they apply to expert and novice alike. If you have something to contribute, then contribute it. If you don’t like what you’re reading, then don’t read it. And unless “1997” is meant to constitute a citation, the above post can hardly be considered a ‘contribution’.

I’m going to invite more criticism by saying that I’ve read many hundreds of mud-dev posts, and to be honest, I don’t think it’s that great. It’s not that there isn’t lots of good information there; it’s that the information is ‘presented’ in an extremely awkward format, spread diffusely across hundreds of posts and buried amongst material of much lower quality. (One can reasonably argue that this misinformation has value of its own, but that doesn’t make the process of assimilating it any easier.) What’s been missing are summaries, position papers, and books. Newsgroup posts and blog entries (including the ones here) can’t even begin to replace formal, comprehensive presentations of the material. That is perhaps starting to change (cf. the recent New Riders releases, including Bartle’s book, and the academic work of those behind this site), but there’s still a lot of work to be done. So I suggest that if Richard Bartle is concerned about the fate of the reputation knowledge on the mud-dev list, he should organize and summarize it himself (as he has done in the past). If the work is as good as his famous ‘Players Who Suit MUDs’, the subject will benefit greatly.

So that *I* can be marginally on topic, let me point out that the reputation systems discussed here seem mostly to focus on solving the reputation problem for veteran players -- but what about the newbies? I imagine the unattainable ‘perfect’ reputation system would work for all players, including those who just started. I realize this is a bigger problem than game developers perhaps care to address, as its implications and certainly its solution extend well beyond the scope of MMORPGs, but wouldn’t solving it greatly benefit player retention? (Or at least, trustworthy-player retention?)

Raph Koster> ...a rating scale will devolve to a positive/negative system even in the absence of explicit negatives, rendering the need for explciit negatives even less apparent.

I think this phenomenon makes explicit negatives all the more necessary. Think about it: if the game community comes to think of a low score as a ‘bad’ score (even though it is meant to convey something else), then won’t all newbies be perceived as untrustworthy?


Looping back to Dan's original musings about software coded ratings versus guild monitored ones, the issue is whether guild-based evaluations would protect game players any more than professional societies protect consumers of medical, legal, or accounting services in the r.w. Consider the notorious cases of nurses and doctors who were passed from hospital to hospital and allowed to continue their crimes and/or negligence because their professions "circled the wagons" to protect them. If guilds cross over from one game to another, which is certainly feasible, would we not run into the same type of problem? What if the healers formed a guild which encompassed all the virtual worlds in which such characters exist? Would membership in the guild necessarily confer a greater sense of trust than a software-mediated rating system using input from "patients"? Is instead the answer to have both a guild system and a software-mediated rating system? Are there some synergies between the rw and the vw, so that each can learn from the other?

Have a Merry Christmas, Happy Hanukkah, Kwanzaa or Bertrand Russell Day etc. etc.


Jeremy> Newbies are untrustworthy by definition. :) Also, I think I addressed the issue of "can it be useful to newbies" as one of the chief flaws with trust metric systems... this was the main reason why we abandoned implementing a system on SWG.

On the point you make about MUD-Dev... It's a discussion list, not a book, of course, so yes, it is disorganized. The Laws were a direct attempt to collect just some of the one-liners from the archives, but there's a ton of material that has been too large to collect in that manner. Similarly, my entire website is nothing more than gathering up posts from MUD-Dev and elsewhere where I felt that I was actually saying something moderately smart.

There's actually no shortage of academic papers, and hasn't been for at least a decade. I have links to some of the repositories on my website. The main difference is that now the papers aren't being written solely by undergrads or grad students, but by professors; the second big difference is that some of the papers are tackling fresher subjects these days, often with empirical data from samples larger than one mud's worth.

Magicback> Those ideas are great if someone is going to do research. But what's really needed by users is a distillation where they don't HAVE to do research. People don't do enough research before buying a car or house, much less a virtual sword. And you don't have time to do research if someone walks up and says "wanna group?"


Raph Koster> Newbies are untrustworthy by definition. :)

Hmm... I could take that either of two ways, and -- well, they’re probably both correct.

Raph Koster> Also, I think I addressed the issue of "can it be useful to newbies" as one of the chief flaws with trust metric systems... this was the main reason why we abandoned implementing a system on SWG.

You mentioned it, I thought, as a limitation of the networking model, but none of the other ideas I’ve read about here seem to address the problem either. (Please, won’t somebody think of the newbies?) Perhaps it’s not even a good idea. I can imagine some people (Richard Bartle, perhaps) arguing that such a system would injure the fiction of the game world. Though I can imagine others (Jessica Mulligan) saying ‘too bad’ and ‘good riddance!’

Raph Koster> There's actually no shortage of academic papers, and hasn't been for at least a decade.

I suppose I shouldn’t be too quick to discount the existing literature, since I’m particularly familiar with only one part of it. However, in my area (economics), I know of just four or five serious analyses (as opposed to commentaries, or even intriguing but ultimately incomplete musings), and most of these were written in the last two years. Perhaps I’ve set my standards too high or too narrow, or maybe this is what a new science looks like: discursive, nebulous, and uncertain. And you’re right; mud-dev is not and cannot be a book. But that’s exactly what makes it so hard to use, and what ultimately limits its value. So please, let’s have more books!


(There should be a 'smile' pseudo-emote at the end of my first paragraph there. I put it in angle brackets, not realizing this was an HTML system. [sad face])


Raph Koster> Those ideas are great if someone is going to do research. But what's really needed by users is a distillation where they don't HAVE to do research. People don't do enough research before buying a car or house, much less a virtual sword. And you don't have time to do research if someone walks up and says "wanna group?"


I don’t have The Answer, but I have a few ideas borrowed from the real world (a dime a dozen). These ideas are not new; they are just a view from a different angle. So be gentle in reading the musings of a layman with only LARP experience.

1. Support the supernode
In White Wolf’s Vampire universe, the role is called The Harpies. In Hollywood film development, there are influential bulletin boards that kill or hype a film early in development.

In-game design: Formalize the informal role of “information agent” with a profession or a craft. Mastery of this profession or craft grants increasing access to server-collected, aggregated player-behavior stats. That access lets these agents offer second-level analysis, for good or for ill. “We’ll group with you if you get the seal of approval from Talon Karrde. He’ll do a background check on you.” The role is in-character, but the design is not efficient for most VWs.
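To make the supernode idea concrete, here is a minimal sketch of tier-gated stat access: higher mastery in the (hypothetical) information-agent profession unlocks more of the server’s aggregated counters. All names, tiers, and numbers here are illustrative, not from any actual game.

```python
# Server-side aggregated counters per character (illustrative data).
PLAYER_STATS = {
    "Bob": {"kills": 120, "deaths": 30, "dropped_links": 2,
            "groups_joined": 45, "positive_feedback": 38},
}

# Which stats each mastery tier of the information-agent craft may see.
TIERS = {
    0: [],                                    # untrained: nothing
    1: ["groups_joined"],                     # apprentice
    2: ["groups_joined", "positive_feedback"],
    3: ["groups_joined", "positive_feedback",
        "kills", "deaths", "dropped_links"],  # master information agent
}

def background_check(agent_mastery: int, target: str) -> dict:
    """Return only the stats the agent's mastery tier unlocks."""
    visible = TIERS.get(min(agent_mastery, max(TIERS)), [])
    stats = PLAYER_STATS.get(target, {})
    return {k: v for k, v in stats.items() if k in visible}
```

A master agent (`background_check(3, "Bob")`) sees the full dossier; an untrained character sees nothing, which is what gives the in-character role its value.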

2. Increase the velocity of network node statistics
Just make more character stats easily available: number of kills, deaths, dropped links, groups joined, unique characters grouped with, unique characters giving positive feedback, and so on. Players will pick their own metrics and make their own judgments. “Bob wants to group? Let me check his profile. Hmm, I like his stats. OK, he’s in.”
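The point of idea 2 is that the game only has to expose raw counters; each player invents their own screening metric on top. A minimal sketch of one such home-grown metric, with illustrative field names:

```python
def my_metric(profile: dict) -> float:
    """One player's personal trust score: reward a broad grouping
    history backed by positive feedback, penalize dropped links."""
    grouped = profile.get("unique_grouped_with", 0)
    praised = profile.get("unique_positive_feedback", 0)
    dropped = profile.get("dropped_links", 0)
    if grouped == 0:
        return 0.0  # a true newbie: no history to judge either way
    return praised / grouped - 0.5 * (dropped / grouped)

bob = {"unique_grouped_with": 40, "unique_positive_feedback": 30,
       "dropped_links": 2}
print(my_metric(bob))  # 0.725 -- good enough, he's in
```

Another player might weight the same counters entirely differently, which is exactly the design intent: the server stays neutral and the judgment stays with the players.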

3. Facilitate the exchange of personal network node statistics
Make tools available for players to document their personal experiences with each node (AFFA, Brask) and allow groups of players to aggregate their collective experiences, such as an “our guild’s view of other characters and guilds” schema (Dan Hunter).
With the first idea, supernodes do the distillation for the VW. With the second, players have the info to do their own distillation. The third gives players and guilds tools to exchange their network maps.
I think the second idea is the easiest to implement. With some nudging, players will make use of the information to create a niche for themselves in-game and out-of-game. Issues raised by Greg Lastowka may be mitigated by making the exchange of personal behavior stats a reciprocal exchange. Then Edward Castronova will have stats of the character’s “history” to help him make his judgment in Horizons.
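For idea 3, the aggregation step can be sketched very simply: each member records personal ratings of the characters they have dealt with, and the guild publishes the pooled view it can then swap with allied guilds. This is a minimal sketch under assumed conventions (ratings on a −1 to +1 scale, names illustrative):

```python
from collections import defaultdict
from statistics import mean

def guild_view(member_reports: dict) -> dict:
    """member_reports maps member -> {target: rating in [-1, +1]}.
    Returns the guild's mean rating per target plus sample size,
    so readers can weigh one grudge against many endorsements."""
    ratings = defaultdict(list)
    for reports in member_reports.values():
        for target, score in reports.items():
            ratings[target].append(score)
    return {t: {"mean": mean(s), "reports": len(s)}
            for t, s in ratings.items()}

reports = {
    "alice": {"Bob": 1.0, "Eve": -1.0},
    "carol": {"Bob": 0.5},
}
view = guild_view(reports)
# view["Bob"] -> {"mean": 0.75, "reports": 2}
```

Exposing the sample size alongside the mean matters: a −1.0 built on one report is a very different signal from one built on fifty, and reciprocal exchange (you only see my guild’s map if I can see yours) is what keeps the sharing honest.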

One will probably need at least 30 stats to be well diversified and balanced against gaming and other non-systemic behaviors. It's like training a neural network to find the strongest links.

Otherwise, people will have to start paying bail bonds/insurance/escrow to assure their trustworthiness.



Just stumbled across this discussion. Here's a paper I wrote with some folks at Microsoft that deals with a passive reputation system for people in Usenet newsgroups. Take a look if you're interested (warning, PDF!).

The comments to this entry are closed.