I've been playing the hell out of a gem of an online game called "League of Legends" for the past few months (description of the game and my connection to it below).
On Friday, the developers posted an announcement on their forums about a new community policing process. In short, reports of griefing will be automatically forwarded to other players for review, ostensibly at random. Those players will be given briefing materials on the case and then asked to vote on whether to punish or pardon the accused. Reviewers who vote with the majority will be awarded game points.
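If I'm reading the announcement right, the loop is: report filed, random panel assembled, briefing shown, votes tallied, majority rewarded. Here's a minimal sketch of that loop in Python; to be clear, the panel size, the point payout, and every name in it are my own inventions, since the announcement doesn't give implementation details.

```python
import random

PANEL_SIZE = 5   # invented; the announcement doesn't say how many reviewers see a case
REWARD = 10      # invented point payout for siding with the majority

class Reviewer:
    def __init__(self, strictness):
        self.strictness = strictness   # chance this player votes "punish"
        self.points = 0

    def vote(self, case):
        return "punish" if random.random() < self.strictness else "pardon"

def review_case(case, pool):
    """Forward one griefing report to a random panel, tally, and pay the majority."""
    panel = random.sample(pool, PANEL_SIZE)
    votes = [(r, r.vote(case)) for r in panel]
    punish = sum(1 for _, v in votes if v == "punish")
    verdict = "punish" if punish > PANEL_SIZE // 2 else "pardon"
    for r, v in votes:
        if v == verdict:
            r.points += REWARD   # only majority voters get the game points
    return verdict

pool = [Reviewer(random.random()) for _ in range(1000)]
print(review_case({"reported": "SomeSummoner", "chat_log": "..."}, pool))
```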
My mind boggles at this on several levels.
Has anyone ever seen automated policing like this? Will it work?
Full disclosure: I've been working with the developer on a research project in which my students deployed a large-scale (n=20k) survey of LoL players, matched with anonymized play data. NDA in place. Papers coming out later this year. Major kudos from me to the studio for supporting the work and embracing user research.
For those who haven't played it, LoL is a hybrid title described by some as a 30-minute MMO. It combines elements of leveling in the RPG sense, tower defense, PvP, and team arenas, and is a direct descendant of the crazy-popular Warcraft III map "DOTA."
"gesellschaft", not "gesselschaft"
Posted by: unwesen | Jan 17, 2011 at 14:49
Well... user forums have been "solving" customer service for more than a decade now. I can count the number of times I've received help on a Microsoft product from Microsoft itself on one finger (guess which one?). Most tutorials I've used on a variety of products have come from the community.
This is more like solving community policing. I think it sounds like a great idea. Not sure how you'd "game" this system, though, if the players on the jury are random. You grief, someone complains, details are provided to the random panel, they adjudicate, you get thumbs-up or down.
Seems like you could also use this system to discourage petty or frivolous griefing reports. If you accuse someone and the panel says, "No... t'weren't griefing," you get hit with some minor negative stimulus.
I love it. The game of justice!
Posted by: Andy Havens | Jan 17, 2011 at 15:54
I like the idea a lot too. I'm just super curious how it'll play out given the details.
The "gaming the system" part would be if adjudicators (is that the right name?) would be trying to guess the majority vote rather than deciding based on the merits of the case. At its worst, it's a test of perceptions of others rather than substance. Then it becomes "how tough are people?" Maybe it'll be fine, but that crowd-based mechanic stuck out to me. In jury duty, we get paid (ha-ha) for coming and doing the right thing, not agreeing with others. When we get lazy and agree to get things over with you get "12 Angry Men."
IANAL or a jury-selection expert, so I'd love to hear one weigh in with how this will play out.
Posted by: Dmitri Williams | Jan 17, 2011 at 16:00
I believe a system along these lines has been used at BoardGameGeek to vet postings for a number of years, with good results. The incentive for panel members is to align with perceived general opinion. The weak point comes if that perception points somewhere the site's or game's owners did not intend.
Posted by: JamesS | Jan 17, 2011 at 18:26
"A Tale in the Desert" (ATITD - http://www.atitd.com/) is a niche game that exists since 2001 (I think) and has a kind of player policing.
Every once in a while, there is a vote in three rounds. Players are split into groups of 7 that vote among themselves after 48 hours of debate. The winner of each group moves on to the next phase; the second round repeats the process, and the third round is an open global vote on the remaining 7 candidates. The winner becomes "Demi-Pharaoh," which grants him (amongst other things) the right to PERMA-BAN 7 other players.
Demi-Pharaohs end up being used as game police. If there is a major problem, players complain to them and ask them to intervene. But there is a taboo on actually using the ban, and as such I believe it has only been used 3 or 4 times in all these years. The voting ends up picking the people with the best in-game reputation, who give assurance that they will "never" use the ban.
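In rough code terms, the bracket looks something like this; it's my own sketch, with a made-up reputation score standing in for the actual debates and in-group votes:

```python
import random

random.seed(1)

GROUP = 7

def group_round(candidates, reputation):
    """Split candidates into groups of 7; each group elects one winner."""
    random.shuffle(candidates)
    winners = []
    for i in range(0, len(candidates) - len(candidates) % GROUP, GROUP):
        group = candidates[i:i + GROUP]
        # Stand-in for 48 hours of debate plus an in-group vote: here the
        # member with the best reputation score simply wins.
        winners.append(max(group, key=reputation.get))
    return winners

players = ["player%d" % i for i in range(343)]        # 7^3 keeps the rounds even
reputation = {p: random.random() for p in players}    # made-up community standing

finalists = group_round(group_round(players, reputation), reputation)  # rounds 1 and 2
demi_pharaoh = max(finalists, key=reputation.get)                      # round 3: global vote
print(demi_pharaoh, "may now perma-ban up to 7 players")
```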
Just to show that there is "automated policing" around ;)
By the way, that game also allows player-written laws to be voted on, which the devs then implement.
Disclosure: I played the game from 2004 to 2007.
Posted by: HappyFather | Jan 17, 2011 at 19:17
I think Richard Bartle raised this point during one of his recent online lectures. Wasn't Everquest sued for using this exact strategy, because it constituted an indirect source of income for the players? I can see how small niche communities like ATITD and Achaea, which even outsourced content creation to volunteers, can get away with this, but LoL, being a huge player, is bound to fall under the watchful eye of some lawyer.
Posted by: Sir Knight | Jan 18, 2011 at 14:38
One of the problems with the idea is that if the community has low standards for acceptable behavior, the judges will simply uphold those low standards. For example, if the community believes it's okay to call people "bitches" and "retards," the player-judges probably won't issue reprimands in those cases, because those terms are acceptable in the community. The trouble with the majority-verdict notion is that if the majority's standards are poor, it does nothing to make the community less toxic, and it becomes difficult for the game's atmosphere to ever rise above that baseline. There are probably minimum standards that community judges have to adhere to (e.g., players shouldn't call each other "n*****"), but there are nuances in human interaction that judges may let slide because a majority of them simply don't think that, say, casual sexism ("You play like a girl!") is all that bad.
Posted by: Account Deleted | Jan 20, 2011 at 16:15
Here's a community-based approach detailed in my book Building Web Reputation Systems from O'Reilly: http://oreilly.com/catalog/9780596159801
From Chapter 10 - Case Study: Yahoo! Answers Community Content Moderation: [Wiki version here -> http://buildingreputation.com/doku.php?id=chapter_10 ]
"In summer 2007, Yahoo! tried to address some moderation challenges with one of its flagship community products: Yahoo! Answers (answers.yahoo.com). The service had fallen victim to its own success and drawn the attention of trolls and spammers in a big way. The Yahoo! Answers team was struggling to keep up with harmful, abusive content that flooded the service, most of which originated with a small number of bad actors on the site.
Ultimately, the answer to these woes was provided by a clever (but simple) system that was rich in reputation: it was designed to identify bad actors, indemnify honest contributors, and take the overwhelming load off of the customer care team. Here's how that system came about."
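Stripped to its bones, the pattern looks something like the following Python; this is an illustration of the idea, not Yahoo!'s actual code, and the threshold and trust updates are invented for the example:

```python
HIDE_THRESHOLD = 2.0   # invented: trusted report weight needed to auto-hide an item

reporter_trust = {}    # reporter id -> trust score; everyone starts at 1.0

def report(item, reporter_id):
    """Weight each abuse report by the reporter's track record."""
    weight = reporter_trust.setdefault(reporter_id, 1.0)
    item["abuse_weight"] = item.get("abuse_weight", 0.0) + weight
    if item["abuse_weight"] >= HIDE_THRESHOLD:
        item["hidden"] = True          # hidden with no human in the loop

def resolve_appeal(item, reporter_ids, staff_upholds_hide):
    """Customer care only sees appeals; verdicts feed back into reporter trust."""
    for rid in reporter_ids:
        if staff_upholds_hide:
            reporter_trust[rid] += 0.2     # the reporters called it right
        else:
            reporter_trust[rid] = max(0.1, reporter_trust[rid] - 0.5)  # bad report
    item["hidden"] = staff_upholds_hide

item = {"id": 42, "text": "a spammy answer"}
report(item, "alice")
report(item, "bob")
print(item.get("hidden", False))   # True: two trusted reports crossed the bar
```

Trusted reporters end up doing almost all of the moderation, and staff time is reserved for the contested cases.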
Posted by: Frandallfarmer | Jan 21, 2011 at 19:55
I think that this will be a great change, 'cause so many people on LoL are just plain jerks.
Hopefully people are responsible with their "griefing decisions," or else there will be a new way to grief people (by getting them flagged for griefing!)
Posted by: Cam | Jan 26, 2011 at 16:38