There's no solid evidence that violence in media causes violence in society, certainly not at the level that would warrant any kind of policy response. Here at Terra Nova, this has been discussed again and again and again and again and again. Yet the issue will not die, or, more accurately, a misguided conversation continues and at times certain points need to be reiterated. The immediate spurs to this post include a) getting an email about videogame violence effects from an undergraduate at another school, b) seeing one of Indiana's PhD students give a talk on videogame violence, and c) seeing media effects being debated at the International Communications Association meeting in Chicago this past weekend. Researchers continue to pursue evidence for a causal link between violence in media and real-world violence, and important people in the real world still think there's some sort of emergency.
Common sense objections to the agenda and the urgency are legion, best summed up here and here. Yet there are deeper issues, of a scholarly nature, that need to be addressed as well. Research in the field of media violence effects is generally ill-conceived, poorly executed, and result-driven. I have seen few things that I would describe as findings - results that become a permanent part of my view of the world and how it works. Before any more PhD students waste their careers on bad science, let's once again put the cards on the table.
To begin at the end: Scientific research should not be framed as the pursuit of evidence for something. To do so violates the important norm of disinterestedness. You are not supposed to care how the numbers turn out. The proper way to think of things is "What causes Y?" not "Can I find evidence that X might affect Y?" The Y here is violence in society. We know that the main causes of violence in society are parents and peers. A disinterested scholar would stop there. Yet in media violence research, the norm is to go looking for a link. One senses that in most papers, nothing would be sent to the journal until some evidence for the link was found.
How does one get that sense? This is the second major issue: significance. In scholarly writing, the term significance refers to a very specialized statistical feature known in most fields as statistical significance. It is a measure of the accuracy of a finding. It is also widely misunderstood and improperly applied. (How do I know? Training under econometrician Arthur Goldberger.) Look at it this way: You are the captain of the ship. The engineer comes and says that some rivets in the hull are weakening and are about to pop. Yet you can only fix them one at a time. Your first question is, which rivet is weakest? That is where the engineer should start. Oddly, in psychology and the social sciences, the convention insists that the engineer start working instead on the rivet whose weakness is most accurately measured. "We think rivet 12 is weakest, but we know more about rivet 34, so let's start there. By the way, rivet 34 seems to be pretty strong. [glub glub glub]"
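The rivet problem can be illustrated with a toy regression. Everything below is synthetic and invented purely to make the point; nothing comes from the actual media violence literature:

```python
# Toy illustration (all data synthetic): a tiny effect measured very
# precisely gets the asterisk, while a large effect measured noisily
# does not. Statistical significance tracks precision, not importance.
import numpy as np

rng = np.random.default_rng(0)

def slope_and_t(x, y):
    """OLS slope and its t-statistic for centered data (no intercept)."""
    x = x - x.mean()
    y = y - y.mean()
    b = (x @ y) / (x @ x)
    resid = y - b * x
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (x @ x))
    return b, b / se

# "Rivet 34": tiny true effect (0.01), huge sample, little noise.
x1 = rng.normal(size=10_000)
y1 = 0.01 * x1 + rng.normal(scale=0.1, size=10_000)
b1, t1 = slope_and_t(x1, y1)

# "Rivet 12": large true effect (0.5), small sample, lots of noise.
x2 = rng.normal(size=30)
y2 = 0.5 * x2 + rng.normal(scale=5.0, size=30)
b2, t2 = slope_and_t(x2, y2)

print(f"tiny effect:  slope={b1:.3f}, t={t1:.1f}")  # precisely measured, trivial size
print(f"large effect: slope={b2:.3f}, t={t2:.1f}")  # large size, noisy estimate
```

The first rivet gets the asterisk; the second is the one sinking the ship.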
In media violence research, it appears to be a universal practice that the accuracy of an effect's measurement is always presented first, and often exclusively. The size of the effect is considered secondary, if it is considered at all. In my experience of articles and presentations in this field, I have yet to see a sentence in the following form: "All else equal, a 10 percent increase in this measure of media violence leads to an X% increase in this measure of social violence." This is a very simple simulation of the effect, and it seems never to be done.
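For what it's worth, that missing sentence is mechanically easy to produce once an effect size is in hand. A minimal sketch on synthetic data, where the true elasticity (0.03) is an invented number, not a finding:

```python
# Sketch on synthetic data: the OLS slope in a log-log model is an
# elasticity, which translates directly into the plain-English sentence
# the post asks for. The true elasticity (0.03) is assumed, not real.
import numpy as np

rng = np.random.default_rng(1)
n = 500
log_media = rng.normal(size=n)
log_violence = 0.03 * log_media + rng.normal(scale=0.5, size=n)

x = log_media - log_media.mean()
y = log_violence - log_violence.mean()
elasticity = (x @ y) / (x @ x)  # OLS slope = estimated elasticity

print(f"All else equal, a 10% increase in this measure of media violence "
      f"is associated with a {10 * elasticity:.2f}% change in this "
      f"measure of social violence.")
```

Whether the resulting percentage is worth anyone's attention is exactly the real-world-significance question the field avoids.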
Here's how the first two issues are related: If the research paradigm is to hunt for effects, and the standard of a "finding" is based on statistical significance, it is usually easy to produce the desired result. The nature of statistical significance is such that if you mess around with the data set enough, eventually some set of controls and procedures causes the computer to pop out an asterisk indicating statistical significance on the media violence variable. This is why the paper says "Although no overall media-aggression link was found, a link was found among children who identify with a violent character." Meaning, if you split the data into those-who-identify and those-who-don't, you find the desired link in the former group. In any reasonably complex data set, there will be some sub-group or some tweak that generates statistical significance. It's a mechanical thing in the end. And thus, when a researcher produces an entire career of papers showing the same result over and over, you get the sense that the disinterestedness norm is being violated. This scholar is not in the least disinterested. He knows what he is after and he is going to find it. The only way that disinterestedness could be restored in this field would be for scholars to forget about statistical significance and examine instead the real-world significance of findings, by means of these simple simulation sentences. Let's talk about the rivets that seem weakest. Assertions of real-world significance are not popped out by SPSS. They cannot be cooked. If media effects researchers want to be trusted, they should abandon statistical significance as the measure of truth.
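The mechanical nature of subgroup fishing is easy to demonstrate. In the toy simulation below there is no media-aggression link at all by construction, yet splitting the sample enough arbitrary ways almost always turns up a "significant" subgroup somewhere:

```python
# Toy simulation: media exposure and aggression are independent noise,
# yet with 25 arbitrary ways to split the sample (50 subgroup tests at
# p < .05), most "studies" find a significant subgroup anyway.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def p_value(x, y):
    """Two-sided p-value for a correlation via the Fisher z approximation."""
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r) * sqrt(len(x) - 3)  # approximately N(0,1) under the null
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def one_study(n=400, n_splits=25):
    media = rng.normal(size=n)       # "media violence" exposure
    aggression = rng.normal(size=n)  # unrelated by construction
    traits = rng.integers(0, 2, size=(n, n_splits))
    hits = 0
    for j in range(n_splits):
        for grp in (0, 1):
            mask = traits[:, j] == grp
            if p_value(media[mask], aggression[mask]) < 0.05:
                hits += 1
    return hits

results = [one_study() for _ in range(100)]
print("studies finding at least one 'significant' subgroup:",
      sum(h > 0 for h in results), "out of 100")
```

With fifty tests per study at the .05 level, roughly nine studies in ten will find something to report, even though there is nothing there.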
The issue of significance goes beyond statistical significance, however, into the realm of policy significance. The media violence field gets its energy from its ostensible policy relevance. Yet the research questions are not framed in a way that is helpful for policy. The policy question is simple: If we regulate media violence, will social violence fall? But the research asks: If we expose this person to violent media, how will he act in the next hour? The latter is not relevant to the former. Or, there does not seem to be a good theory explaining why the latter is relevant. Yes, there are diagrams of boxes and arrows known as theories, but they are really just conceptual overviews, informal and heuristic, and cannot be used to measure or explain how a social effect emerges from a lab effect. As an example, suppose we use an Aggressometer to measure a person's aggressive mental state, and find that viewing Star Wars increases the Aggressometer reading by 20 percent. Now suppose we show Star Wars to the general public, so that Aggressometer readings generally rise by 20 percent. What theory tells me how this is specifically going to change the crime rate? I need to know that, because I need to evaluate policy in a common sense way. Keeping a million kids from watching Star Wars costs society $7m in lost entertainment value. Is the purported value of crime reduction more or less than that? A box-and-arrow diagram does not help. If the research is going to stay focused on the mind, we need a good theory to connect mind outcomes to policy outcomes - otherwise the research isn't relevant for policy and should be labeled as such: "Warning: Not For Use By Legislators."
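The common-sense evaluation called for here is just arithmetic. A back-of-envelope sketch, where the $7m figure comes from the example above and every crime-side number is a pure placeholder assumption:

```python
# Back-of-envelope policy arithmetic. The $7m entertainment-value figure
# is from the post; the crime numbers below are hypothetical placeholders
# standing in for the theory the post says is missing.
kids = 1_000_000
lost_value_per_kid = 7.0            # dollars, implied by the post's $7m
cost = kids * lost_value_per_kid    # cost of the ban: $7,000,000

prevented_crimes = 50               # pure assumption
social_cost_per_crime = 20_000      # pure assumption, dollars
benefit = prevented_crimes * social_cost_per_crime

print(f"cost of ban:    ${cost:,.0f}")
print(f"benefit of ban: ${benefit:,.0f}")
print("ban passes a cost-benefit test:", benefit > cost)
```

The point is not the invented numbers; it is that without a theory supplying the crime-side inputs, this comparison cannot be made at all.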
Of course, the research could move away from the mind and frame itself where the policy questions live. There are some papers doing that: one piece by some economists and the longitudinal study by Huesmann, Eron, and colleagues. These papers are worthy of examination, because they state their findings in terms of the issues that motivate the research. But, of course, the findings conflict.
Why would they conflict? Why is it so hard to find answers in this area? Fuzziness. The media violence - aggression field has chosen to study two things that do not admit accurate observation. What is media violence? What kind of a thing is it? The policy debate seems to assume it is a continuous variable that acts as a gloss on a piece of media. Thus, you can apparently make a movie less violent by taking away an explosion. Similarly, what is aggression? It appears to be taken as some sort of negative gloss on a person, such that if you make them more aggressive you make the world a worse place. Needless to say, taking aggression and violence as separable from the whole entities in which they are observed is a fuzzy and probably fundamentally wrong-headed way to approach things. You could, if you wanted, study the relationship of dogs' ears to the sounds of motors, but you'll never find solid evidence that dogs get happy when their people drive into the garage. You need to study dogs and people, not ears and motors. In fact the only reason you might study ears and motors separately is that you had some agenda to promote the motor industry by showing that it makes dogs happy. But of course, that wouldn't be disinterested.
I cooked up a silly example. Consider the following report:
"Textiles scholars have studied the effect of softness in cloth on affection. Children rubbed with soft cloth as opposed to scratchy cloth self-report significantly higher levels of affection and exhibit more affectionate behaviors (hugging teddy bears, for example). Responding to these findings, and acting out of a concern about the dramatic declines in affection in recent decades, the American Academy of Pediatrics recommends that children's exposure to soft cloth be maximized. The State of California has mandated that all cloth sold to minors must meet a minimum standard of SS+ (from the industry's cloth softness self-rating system). Unfortunately, the laws have been struck down as an improper extension of government authority, as stated in the 28th amendment ("Congress shall make no law abridging the freedom of the textile manufacturer"). Nonetheless, pressure continues for some sort of government response to the softness-affection crisis."
Ridiculous, of course. The PTA's insistence that school kids wear velvet boots would last one rainy day, and that would be it. But to be more specific about what's wrong here:
- The research deals with vague value-laden concepts, not objective observables.
- The findings are not disinterested. Somebody's looking for something.
- There is no evidence of a crisis at the social level.
- The pediatricians' recommendation to parents assumes thoroughly incompetent parents.
- So does the policy.
- The policy asserts an unrealistic level of measurement and control.
- The relevance of the findings for the policy is nowhere demonstrated.
- "Significantly" refers to statistical significance, not real-world significance.
There's not much difference between the cloth-softness debate and the media violence debate, unfortunately.
People and their Art are certainly worthy of study. But if we are going to be scientific about it, there are certain rules that must be followed. Following those rules might mean that some questions simply elude us. They cannot be answered in the way that Science-Capital-S demands. In such cases it is better to pursue other rhetorical strategies.
If you want me to believe that regulating violence in media would make our world a better place, you'll have to walk me around the world and through history, and help me to imaginatively experience a culture in which control of expression led to more happiness. I wander around in history a lot - it's been a hobby for decades - and I don't know of any such culture. Even fantasizing about the future, I am not seeing anything good.
In the end, I suspect that media violence research has been motivated primarily by aesthetic concerns. The Three Stooges are disgusting and vulgar, whereas King Lear is sublime. Why are we watching so much crap? Back in the day, you could make the aesthetic plea directly: Look here, you are watching bad art, and you shouldn't - just because it is bad. Today, aesthetic disgust gets channeled into sciency-sounding condemnations of entire media forms for their "effects." In our free-thinking age, no one can effectively change anyone's mind by asserting that Grand Theft Auto is simply adolescent, an 1812 Overture of bullying and nastiness, of low appeal. But because the age is also utilitarian, you can make the case that Grand Theft Auto has "bad effects": like cigarettes, you say, its use harms others.
Comments on Media Violence, Aggression, and Policy:
"Research in the field of media violence effects is generally ill-conceived, poorly executed, and result-driven."
Tempest in a teacup.
This is true in any field, including, very prominently, economics. (One might argue, for instance, that economics-oriented research has driven -- and is driving -- more misguided social policy than media effects research ever has or will.)
I blame funding.
"I have seen few things that I would describe as findings - results that become a permanent part of my view of the world and how it works."
Upon reflection: This may be a good thing. Findings that conform to our "view of the world" are less often enlightening than comforting.
Human behavior -- along with our rudimentary notions of it -- apparently simply does not scale very well. The models of social relationships and processes that we carry around in our heads begin to fall radically apart not too far beyond the Dunbar number. Some push bravely on into the mass with non-Dunbar-number-limited maths and statistics, but sooner or later the bugaboos of irrationality, incomprehensibility, and other such incongruities come calling.
And, honestly, who needs causation anyway?
Correlation is more than enough to do the deed. Meteors fall. Banks collapse. Swine flu kills. LHC produces black hole. North Korea launches.
Doctor, doctor give me the news.
"Before any more PhD students waste their careers on bad science..."
Rather than trot out all these old and dreary arguments about this and that, here's an idea that "science" and "policy" seem loath to consider: Burn, baby, burn.
Posted May 25, 2009 11:32:23 AM | link
I don't think this is a scientific issue, per se. Or at least it isn't an issue that the academe can tackle internally. The issue primarily isn't academics looking at data and making erroneous conclusions WRT significance or assuming causation when convenient rather than prudent.
The issue is threefold.

First, and most importantly, there is an established and fairly compelling narrative at work among the press, policymakers and many individuals. It goes something like this: "Violent videogames display violence graphically and glorify or downplay the consequences. Movies and TV do this as well, but the interactivity and gratuitousness of videogames is unique. Children subjected to these games will be more likely to have violent tendencies or engage in violent behavior." There are a lot of holes in that narrative and a lot of unspoken assumptions. I don't need to belabor the point here. What needs to be said is that despite those flaws and critiques, the narrative holds up quite well in public discourse. As "gamers" get older and become parents and grandparents (for the respective generations of gamers), this will eventually disappear, but we have to tackle the problem from this framework.

Second, actually showing causation will be fairly difficult. As a gamer and a prospective researcher, the idea of a shared causal factor behind proclivity for violence and violent video games is compelling. Data are very sparse; relying on school shootings or threats of school shootings to track back videogame use is unappetizing. Survey data have more problems, especially given the known narrative around videogames, which prompts subjects to underreport play time. This gets worse with the fact that multiple disciplines with multiple methodologies study the same problem. Etc. etc.

Third, and drawing mainly from the second point, refereed studies are likely to result in low values for correlation and low levels of confidence in those values. Their results will be, in a word, boring (you guys might prefer "cautious"). The scientists will likewise be boring (again, "cautious"). Dr. Phil isn't boring. Policy think tanks and politicians aren't boring (YMMV). Now this isn't the story of the pure as the driven snow PhD and the mean, mean newspaperman.
There is (as Ted notes) a surfeit of academics publishing misleading findings (predominantly through errors of omission). I'll pick on Stephen Kirsh. He wrote this in 1997, but it is still fair game. Everything seems to be in order. There is a control (of sorts). There is an experiment (of sorts) and attempts are made to draw out some distinction on the subject of his hypothesis. He concludes (fairly reasonably) that "violent" videogames increase a "hostile attribution bias" (a state where someone will attribute a neutral or ambiguous act as negative or hostile) in children. We only see Mortal Kombat compared to NBA Jam, not Mortal Kombat compared to pee-wee football or Mortal Kombat compared to 13 minutes of being teased in the hallways. More importantly (though he notes that this is out of the scope of his paper), we don't see any indication that the hostile attribution bias lasts or that repeated play impacts the hostile attribution bias. I would argue that these two critiques would be out of bounds save for the fact that Kirsh titles the paper rather too provocatively "Seeing the world through Mortal Kombat-colored glasses..."
Combined with these three problems is the issue of information and opinion search. We run into this on Wikipedia fairly regularly. The "best" sources are usually peer-reviewed publications with as neutral a focus as possible. Trouble is, most of these peer-reviewed sources (on top of being boring, as noted above) have a very narrow scope, make incremental assertions on an overall topic and hedge their findings considerably. So editors faced with building an article on (say...) Video game controversies have trouble integrating the "best" sources into the picture. We try to look for bold (or at least substantive) claims to anchor paragraphs, so we are left to source material from MSNBC or CNN or policy papers. When placed on the page like that, the arcane discussions about methodology, narrative and bias disappear. The same thing happens when journalists, policymakers and other folks outside the academe stitch together conflicting information about the subject.
I've given up hope for fixing this problem in the short term. When the Attorney General of the US is someone who played Halo in college, we will be fine.
Posted May 25, 2009 2:24:51 PM | link
Right on, Adam Hyland.
"We know that the main causes of violence in society are parents and peers. A disinterested scholar would stop there."
I've heard this complaint from a lot of people criticizing effects research and have never understood it. Those who maintain that there is a connection between violent media and aggressive behavior aren't claiming that there aren't other factors that contribute to aggressive behavior to a much greater extent, nor do they oppose measures to address those other areas of concern (e.g. social inequity). Why shouldn't researchers seek out as many contributing factors to anti-social aggressive behavior as possible? Why does seeking out one link and offering people the option of applying the findings to their own lives preclude the possibility of trying to remedy other social ills?
I agree with your critiques of policy implementations based on findings and think that it would be better if people were left to make of the findings what they will. Still, I have found compelling evidence in the research that exposure to certain kinds of extreme violence for certain populations results not necessarily in violent behavior but in callousness towards violence and desensitization (Linz, Bushman, Huesmann, etc). To echo Hyland, opposition to such studies seems like a reaction to misinterpretation of findings by mainstream media or politicians (who are looking for causes for Columbine). Critics may be trying to decide whether the probability claims of researchers are warranted, which is laudable, but they appear to also be motivated by discomfort with the censorship that the findings might lead to, a career investment in other methods of media research, and a personal attachment to some violent media.
Ideally, violence effects research shouldn't demand that violent movies be banned, nor should it make claims that the benefits of media categorized as violent cannot outweigh the possible anti-social effects (plenty of great art would be considered too violent by today's standards). Still, presenting compelling evidence (of which there is plenty, and it has been moving steadily towards greater external validity) that exposure to violent media increases anti-social aggression or tolerance towards aggression is worth doing. When you've got millions of young people who accept violence in the form of war as normal or natural, shouldn't we do everything in our power to find out what all of the contributing causes of that mentality are?
Posted May 25, 2009 10:51:23 PM | link
But surely in these times of economic woe, this is a good thing? It means we get more money for research!
Here's how it works:
1) Government really, really wants proof that X is true when it's not.
2) Government instructs funding agencies to divert funds towards proving X.
3) Attempts by scientists to prove X fail.
4) Government increases budget of funding agency, in order to encourage more resources be thrown at the subject.
5) Go to 2).
This means that scientists can milk funding agencies up until the point when someone breaks ranks for the glory of it, or there's a statistical anomaly at stage 3) that produces the "right" result.
Interestingly, were scientists to co-operate, the above would also work if the government wanted to prove something that was, in fact, true.
By the way, your example of finding the "right" result in subgroups was recently covered in The Guardian's Bad Science column, here. If you want to keep up with wacked-out media interpretations of scientific data (albeit from a UK perspective) it's pretty good. The author has even had a pop at the head of the Royal Institution for her views on computer games and social networking sites (views passed off as "science").
Posted May 26, 2009 4:26:08 AM | link
Just a small point, but is any scientific research "disinterested"? This to me is a myth of the ideal of "Science".
Academic research is motivated by a need to secure further funding. Commercial research is motivated by commercial concerns. The extent that the bias expresses itself in the research may vary, but these days little research exists without some motivating concern. So where is this disinterest supposed to be found?
(I mean this as a rhetorical question - at least for now!)
Posted May 26, 2009 8:10:50 AM | link
I think Chris' point is very important. There is no such thing as objective social science. The only difference is between people who recognize their own agendas and those who don't.
And I think the more intriguing issue is, as Richard Bartle pointed out a while back, that there is a fundamental disconnect between the praise we have for "good" media effects (exergames, educational worlds, VW training, eye-hand coordination) and knee-jerk disdain we have for "bad" media effects (violence, addiction).
I'm more inclined to believe that there are important behavioral levers afforded by VWs, and it's all about how they're harnessed. This isn't to say that I agree with the methods and conclusions of the media violence gang, but it is to say that if we believe that it is possible to create a game that promotes cancer treatment compliance (i.e., Re-Mission), then it's logical to also assume that it's possible to create a game that promotes physical violence (although not necessarily because of the rationale and theoretical framework that pro-violence researchers currently use).
Posted May 26, 2009 8:17:04 PM | link
“Scientific research should not be framed as the pursuit of evidence for something. To do so violates the important norm of disinterestedness.”
Well, I guess that's it for CERN and the Large Hadron Collider. Damn physicists...always going off and doing shoddy science with faulty philosophical/methodological foundations.
Posted May 26, 2009 10:13:20 PM | link
Nick Yee: There is no such thing as objective social science. The only difference is between people who recognize their own agendas and those who don't.
I think this attitude is dangerous for both researchers and the public. It encourages the notion that social science isn't really science. (My wife, a physicist, sums this up by saying, "Economics is a social science, kind of like astrology.") Is social science different than physical science in this respect? Obviously, a researcher is usually interested in implications of their research, whether for engineering possibilities, policy implications, future funding, or just because they're geeks fascinated by the stuff.
So, whether in physical or social science, there's always a kind of bias in how researchers decide what questions to ask. But there needn't be any bias in how results are interpreted or presented. A good scientist who finds results contradicting their expectations will write them up just as the data lead, and wonder why their expectations were wrong.
Posted May 27, 2009 9:46:04 AM | link
The Bobo doll experiment showed that children observing adult behaviour are influenced to think that this type of behaviour is acceptable, thus weakening the child's aggressive inhibitions. Children with reduced aggressive inhibitions are more likely to respond to future situations in a more aggressive manner. Also important in this experiment is the finding that males are drastically more inclined to physically aggressive behaviours than females.
Posted May 27, 2009 11:11:29 AM | link
Eh. I don't think we need to spin our wheels over the "disinterested scholar" debate. In my opinion it is just a false dialectic between two non-mutually-exclusive worldviews, where exponents of each view treat them as mutually exclusive.
Posted May 27, 2009 3:35:19 PM | link
Science, technically, is a process not of confirming but of DISconfirming assertions about the observable universe. If, over and over, evidence does not dissuade us from a hypothesis, then we start to call it a theory.
Despite the truthiness of that definition, however, it does not reflect the reality that scientists face, especially social scientists. In economics, you won't see an American Economic Review article where the author reports that she was completely wrong. Published results get that way by being sexy and, one hopes, fairly robust. The incentives in social science clearly point to doctoring the data, lies of omission, and accentuation of the positive to the diminution of the negative.
The only line of defense against the publication of inaccurate or shoddy research consists of the reviewers for the journal. With the proliferation of journals and a relatively short supply of good, circumspect reviewers on hand, it is no longer a question of IF bad research will be published but a question of WHEN.
Posted May 28, 2009 10:56:52 AM | link
I still think the main problem with this whole 'issue' of violence in the media leading to real violence is that it is based on people wanting to find that effect. Without any rigorous experimental studies having been done, there's just as much evidence to suggest that at least some forms of violent media actually reduce violent tendencies by providing a harmless outlet. But people want something to blame, they want an enemy they can rally against, and furthermore they want an enemy they can rally against without feeling bad about it, and for the most part they don't give a damn about scientific studies regardless of what conclusions they draw. It's just so easy to blame violent movies and computer games, it doesn't take any further thought or bring up any confusing moral issue. Ban violent movies and computer games and society will become perfect. That's what people want to believe, because it doesn't take any intellectual effort to believe it, and giant Machiavellian organizations have an interest in them believing it. The government and the corporations have a huge interest in not dealing with the actual causes of violence, such as drug criminalization, mental health issues, and ultimately a social philosophy of ignorance, greed and prejudice. So instead, they encourage us all to blame something they don't care about, and violent media turns out to be a good scapegoat. And like the sheeple we are, we just follow along with it.
Sorry to be so blunt and cynical, but yeah.
Posted May 28, 2009 11:42:16 AM | link
Was just talking about media violence today, actually.
The most cogent point that came up is that it is virtually impossible to isolate media violence in the relationship to later aggression. There's so much more at play.
The Bobo doll experiment was interesting, but at the same time, did the researchers go back two weeks, two years, twenty years later and look at the same issue?
That experiment reminded me very much of when I saw the Fast and the Furious several years ago in a high-quality theater. Immediately after the show let out, the teenagers were jumping in their trucks/cars and doing doughnuts in the parking lot. (and one notable crash into a lightpole) And yet no one - and rightfully so - suggested that they'll be doing the same thing in a week's time. There was a stimulus, and an immediate reaction, and that's where it was left.
Now, what I *do* think could be interesting is research into what have been considered side variables, like telepresence or addictive tendencies. There may be some grist for the mill in these areas, but first academia's going to have to agree on a common language and on the idea that no social science theory exists in a disciplinary vacuum.
Posted May 28, 2009 4:46:04 PM | link
My greatest issue when being asked about computer games in relation to violence and/or addiction is giving a straight answer. I would really like to say there is no relation, but in my role as a representative of a scientific community, I cannot ignore that these things are being debated.
It's not right to say "All research shows, without doubt, that there is no correlation", because it is not true.
Thus in one way I am perpetuating the ongoing debate; on the other hand, if I ignore it I am just as populist or as disregarding of the scientific community as those I argue against.
Posted May 29, 2009 5:03:49 AM | link
Ted makes a lot of good points here, but I don't think we should be looking to experiments to assess how large and important effects are. Experiments are designed to show differences across treatments.
Thus, the experimentalist designs two settings that differ in only one way (e.g., subjects in one cell play a game with violence depicted graphically, subjects in the other play the same game without the violent depictions), and then tests whether there are statistically significant differences in behavior across the two settings.
This is a great way to test a theory of how violent depictions affect attitudes, perceptions, behavior, etc. But it is a terrible way to test how large those effects are, because estimates of effect size will vary with any number of the features of the settings. To the extent that the lab setting differs from the real world, the effect sizes are likely to differ as well.
Evidence of effect sizes needs to come from field/econometric research.
Posted May 29, 2009 1:41:53 PM | link
Timothy: "I think this attitude is dangerous for both researchers and the public. It encourages the notion that social science isn't really science."
Speaking for myself, I don't think the so-called "hard sciences" are any safer against this claim of disinterest. As an ex-astrophysicist I can tell you that very little that goes on in that space (if you'll excuse the pun) is "disinterested".
Take the Large Hadron Collider, that Keith mentions. He jokes:
"Damn physicists...always going off and doing shoddy science with faulty philosophical/methodological foundations."
Well, I'd say they do good science with faulty philosophical foundations. :) The history of science shows that you don't need solid philosophical foundations to conduct useful research. ;)
(Incidentally, I don't believe in the Higgs boson, and never have - it looks like a kludge to me, intended to save a neat but incomplete particle model. But the beauty of the particle accelerators is that whatever you plan to look for, you can find other things - social science has the same advantages).
Finally in this utterly tangential aside, Isaac says:
"Science, technically, is a process not of confirming but of DISconfirming assertions about the observable universe."
From where does one derive a *technical* definition of science? That sounds like Popper's take on the subject, but it's not the only choice on the table.
We keep making this philosophical mistake of thinking that science is a well-grounded, well-defined activity. But bizarrely it is neither. The tendency to mistake the successes in research and technology for the epistemological sanctity of science is quite widespread.
There are many competing conceptions of science, all valid as far as I'm concerned - up to and including Feyerabend's oft-misunderstood "anything goes" of which he noted: "'anything goes' is not a 'principle' I hold... but the terrified exclamation of a rationalist who takes a closer look at history."
Anyway, sorry to have hijacked your post for philosophy of science side discussion, Ted. :)
Posted May 30, 2009 3:08:49 AM | link
One thing that has not come up yet in this discussion is that "violent" stimuli are contextual and difficult to label (like any emotional response) in the real world. Far more difficult than the murky business of "proving" the effects of violence in media is the job of then identifying sufficiently inappropriate content.
Any scientist worth his salt will narrow the violent stimulus used in his experiment to something obvious, but does this then somehow prove that the entire spectrum of violence in media should be removed? After a significant body of evidence has been gathered, and before a policy can even be implemented, there has to be an easy set of rules to identify the unwanted content. To me this seems like the most difficult problem of all. Sure, we can look at Oedipus Rex and say, "contains violent content: patricide, suicide, incest", or maybe we could have a "violence scale" of 1 to 10. Sounds great.
Ultimately what we should be researching is the effectiveness (or even the effect) of government policy on violent behavior. Perhaps then we'll know whether policy is even the solution to the problem.
Posted Jun 1, 2009 12:10:57 PM | link
In this debate I have frequently come down on Ted's side, as in: the side of those disgusted with telling journalists "There is no evidence of a connection" only to read "Media Professor claims: No connection whatsoever," in conjunction with "Psychology Professor claims: The connection is obvious and I have seen it many times!"
Journalists want a yes or a no when they ask That Question, and "sorry, but nobody has really found out what causes people to react with violence" is just not good enough, no matter who they ask. Which means: you know, those people we appear to disagree with? We might not disagree that much, after all.
That said: we need to pause and consider the changes games are causing to our general behaviour, our social networks, our everyday habits and our lifestyles. Not all these changes are equally healthy, and society does not have good ways to deal with them - at least not yet.
Posted Jun 9, 2009 5:01:04 AM | link
Media violence has a direct negative effect on youths. That is the truth, so let's accept it.
Posted Jun 13, 2009 9:39:37 AM | link
Has someone said this already? My committee felt like I needed a balanced perspective. Someone whom I am advising at the moment has been told the same. For me, it meant that I used up a lot of space regurgitating the arguments, as well as the responses to them. I'm honestly up at night sometimes thinking that I didn't do enough work of my own with my data, contributed nothing new, etc. etc. But my supervisors reminded me repeatedly that the diss is only the beginning of an academic career, and that context needs to be set for all of the readers who don't reside in our lovely game studies bubble.
I have lots of new data in my diss, hopefully soon to be distributed! I focused on impacts, I suppose I'd say, but in a pro-social way. So topics like communication, teamwork, cross-cultural issues, etc. However, I spent so much space being balanced that it honestly took away from the rest of the thesis. It appears that this is mandatory, though: you can't say something positive without making it clear that it is in relation to what everyone knows to be true: videogames are bad, bad, bad. Even the super cool Obamas have mostly banned them from the house.
Posted Jun 23, 2009 9:58:15 PM | link
@GreenKey: Yeah, what pisses me off is when people get on radio or tv spouting their expert opinions without ever having played a videogame. I am not making this up.
Posted Jun 23, 2009 10:00:08 PM | link
@Lisa G: Baroness Greenfield is guilty of this.
Posted Jun 24, 2009 9:17:20 AM | link
It really depends on us adults how we train our kids when it comes to video games. When I read this article: http://www.goarticles.com/cgi-bin/showa.cgi?C=1421576
I was given a new outlook on video games. It's really interesting to think that video games have good effects as well.
Posted Jun 26, 2009 1:30:21 PM | link