
Mar 15, 2010

Comments

1.

It's a shame people are still submitting this stuff.

Do you get the feeling the authors seriously believed what they were saying, or were underestimating the intelligence of the review panel and thought such "popular" hooks were going to resonate?

2.

Oh, don't get me riled up, Ted!

By the way, the dissertation is signed off, and is very NON-effects oriented, though I did have to write a section about why I chose to be a techno celebrationist rather than taking the obvious stance that videogames must be bad for us. I do not concur. Don't know why I am such a deviant. Too many trips to Burning Man, perhaps, or the steady diet of Star Trek my mom fed me.

3.

Rule #4: Understand causality, or lack thereof. This from the headlines this week... Internet abuse and depression LINKED!

Just because overuse of the Internet, for instance, is linked to depression - it does not mean that the Internet makes people depressed. It is more likely that depressed people turn to the Internet for comfort, and in fact there is much data to support this point of view.

4.

Chris, I've been daydreaming about writing a book - after I retire - about people in the academy. Just a memoir, you know, just the accumulated experience of a career. So why does it happen? I don't know. Some guesses?

I think there's a really widespread misunderstanding of statistics out there. It's not specific to videogame effects researchers. "It is uniformly agreed that statistical significance is not the only consideration in assessing the importance of research results. Rejecting the null hypothesis is not a sufficient condition for publication...Statistical significance does not necessarily imply practical significance!" My sense is that there are entire fields that don't work very hard on the difference between a finding that some piece of software says is important and one that is truly something that the world needs to know about.
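The significance-vs-importance gap above is easy to demonstrate. Here's a toy simulation (my sketch, not anything from the thread): with a large enough sample, a difference far too small to matter in the real world still sails past the p < 0.05 bar. The numbers (sample size, a true effect of 0.03 standard deviations) are made up for illustration.

```python
import math
import random

random.seed(0)
n = 100_000
# Two groups whose true means differ by a practically meaningless 0.03 SD.
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.03, 1.0) for _ in range(n)]

mean_a, mean_b = sum(a) / n, sum(b) / n
# Pooled variance for the two samples.
var = (sum((x - mean_a) ** 2 for x in a)
       + sum((x - mean_b) ** 2 for x in b)) / (2 * n - 2)

# Two-sample z statistic and two-sided p-value via the normal CDF.
z = (mean_b - mean_a) / math.sqrt(2 * var / n)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

d = (mean_b - mean_a) / math.sqrt(var)   # Cohen's d, the effect size
print(f"effect size (Cohen's d): {d:.3f}")
print(f"p-value: {p:.6f}")   # highly "significant", yet the effect is trivial
```

The software will happily call this finding significant; whether a 0.03-SD difference is something the world needs to know about is the question it can't answer.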

Then there's the natural desire to be important. (I have that one in spades!)

Then there's agendas. "I need to show this is in the data because I know it is true in the real world and we need to change it, now!"

I guess the absence of rigor in the interpretation of results is the key problem, because it would correct people who pursue importance or certain outcomes. It would be better if this was done in-house, that is, by people in the field. As it is, we have to have this external debate, where specialists take sides and ultimately the courts and neutral observers decide (as they have with media effects in general) that there's nothing going on there.

I have an idea about how to improve things, though. Instead of studying media effects in general, why not study the effects of specific policies? OK, so you don't like violence or sexism or nerdity or whatever in a video game. Well, what, *exactly* do you propose we do about it? Give us a policy idea. And then prove to us, using data and cost-benefit analysis, that it makes sense.

It seems to me this would focus media effects research on something really helpful, and at the same time give researchers a better sense of what they need to accomplish in their research. Right now, they are just looking for whether something is "significant." Well, who knows? It's so general. So vague. With this new paradigm, they'd be looking for evidence that a particular policy act will increase or decrease public well-being. That's much more concrete, isn't it?

5.

Ah ha ha!

6.

It's a pity to see so many people trying to characterize a videogame without playing it.

7.

Do you have any suggestions of an academic dept that would be interested in putting a game through real cognitive testing (EEG at the very least)? We're working on a game that we think is producing an alpha/theta EEG state (comparable to deep meditation or neurofeedback).

Any pointers to someone to *validate* our impression would be grand!

8.

On causality vs. correlation, here's a great article from Ars: We're so good at medical studies that most of them are wrong.

Statistical validation of results, as Shaffer described it, simply involves testing the null hypothesis: that the pattern you detect in your data occurs at random. If you can reject the null hypothesis—and science and medicine have settled on rejecting it when there's only a five percent or less chance that it occurred at random—then you accept that your actual finding is significant.

The problem now is that we're rapidly expanding our ability to do tests. Various speakers pointed to data sources as diverse as gene expression chips and the Sloan Digital Sky Survey, which provide tens of thousands of individual data points to analyze. At the same time, the growth of computing power has meant that we can ask many questions of these large data sets at once, and each one of these tests increases the prospects that an error will occur in a study; as Shaffer put it, "every decision increases your error prospects." She pointed out that dividing data into subgroups, which can often identify susceptible subpopulations, is also a decision, and increases the chances of a spurious error. Smaller populations are also more prone to random associations.

In the end, Young noted, by the time you reach 61 tests, there's a 95 percent chance that you'll get a significant result at random. And, let's face it—researchers want to see a significant result, so there's a strong, unintentional bias towards trying different tests until something pops out.

9.

...not to mention causation. For example, I have a thyroid condition. Hypothyroidism is common, and is often associated with obesity and diabetes. However, I've never yet read an obesity study that eliminates thyroid issues from their demographics.

Similarly, liver conditions -- often from alcohol abuse -- can lead to obesity. Haven't seen that eliminated but once.

So when you see that obese people are unhealthy, are you reading that people are obese because of their health issues, or have health issues because they are obese?

Game studies often hit the same brick wall. When I see that kids who play [fill in the blank] tend to be violent, I wonder if the games are making the kids violent, or if they are attracted to whatever game because of their taste in violence? Maybe if they were pacifists, they'd be playing Go or something?
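The self-selection worry in that last paragraph can be made concrete with a toy simulation (my own sketch, with invented numbers, not data from any real study): give people a hidden trait that drives both which game they pick and how aggressive they are, and make the game itself contribute nothing. The naive comparison still shows players looking more aggressive.

```python
import random

random.seed(1)

def simulate(n=50_000):
    """Hidden 'taste' drives both game choice and aggression; the game adds zero."""
    players, others = [], []
    for _ in range(n):
        taste = random.gauss(0, 1)                    # hidden confounder
        plays = taste + random.gauss(0, 1) > 0.5      # self-selection into the game
        aggression = taste + random.gauss(0, 1)       # no causal game effect at all
        (players if plays else others).append(aggression)
    return (sum(players) / len(players), sum(others) / len(others))

mean_players, mean_others = simulate()
print(f"players: {mean_players:+.2f}, non-players: {mean_others:+.2f}")
```

The gap between the two groups is entirely the taste variable showing through; a study that can't measure that trait would happily report it as a "game effect."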


The comments to this entry are closed.