
Mar 31, 2009



Robert, it was a pleasure to participate in the event. There were many great questions from the audience. A point of clarification: I made no statements about the folly of objectivity. I recall Celia, Tom, and I each going out of our way to state that both objectivist and interpretivist approaches will be crucial to social science research going forward.

It is clear from this statement that you no longer hold the view of exploratory research as second-order compared to experimental approaches, and this is welcome (though framing them here as qualitative vs. quantitative seems to miss the mark, as did your original opposition between experimental and qualitative).

However, given how a good deal of the discussion yesterday focused on how many non-academics (policy makers, business people) see in the very complexities and rapid change they confront the limits of model-driven, positivist approaches, I do not at all see this pessimistic picture as persuasive. (The frequent TN commenter Andy Havens, who works in the "real world" in the area of marketing, has made a number of illuminating contributions about this.)

Certainly there is bias against approaches that seek to analyze descriptively the world in all its indeterminacy, but in my conversations with those outside of the academy I see no reason to think it is overwhelming. On the contrary, it seems that the bastions of this bias are primarily certain pockets of the academy. Ironically, it is some of those who are least in touch with the workings of the world as it actually is who are most prone to this kind of narrow thinking. In this light, your comment in your closing remarks for the March 6 event, where you entreated people to encourage their "favorite anthropologists" to get out of their "armchairs" and study the world, is doubly galling. Cultural anthropology, interpretive sociology, social geography, and other empirical fields are the very ones that encounter the warp and weft of human life in all its messiness. The armchairs are occupied by those in other fields, and that is precisely the problem.

But even if your picture of overwhelming bias were true, to proceed from that to characterize spirited defenses of descriptively analytic research within the academic realm as a lost cause and a sign of defensiveness strikes me as the worst kind of surrender to popular views as well as constituting a denigration of such approaches. That hardly seems the mission of the academy, or that of anyone who aims to improve our understanding of the world.


Robert, it was great to have the event yesterday and continue these discussions, but I must say I share Thomas’s sense of disappointment. Your “prediction” makes no sense based on what’s going on out there in the world, and seems to be a normative statement clothed in terms of a hypothetical future. There is no reason why there cannot be more experimental research and more interpretive research. All evidence points to there being ample interest in both: in the differing kinds of topics and questions they address, and in the differing perspectives they offer on common questions too. Your “death knell” narrative is hard for me to understand given our previous conversations, and I see no convincing proof for it.

Despite your caveat that “I am not arguing that this is a desirable outcome, but it isn't a hard prediction to make,” I can’t help but wonder if there’s not an element of wish fulfillment in your post. After all, with your left hand you ring the “death knell” yourself, presuming an apocalyptic future for qualitative research, while with your right hand you disavow your own narrative as “flawed reasoning.” Methinks your right hand doth protest too much, particularly when you state that:

In cases when quantitative and qualitative methods go head to head--where the same research question is amenable to both methods--the quantitative researchers have a real advantage in persuading skeptics, getting funding and influence. While there is room for debate about whether this is the best outcome, it is certainly a defensible one.

Upon what data do you base this claim that this “real advantage in persuading skeptics, getting funding and influence” exists? There are of course ideologies out there claiming that only experimental methods or only quantitative methods are valid (often conflating the two, as Thomas pointed out). But as Thomas, Celia, and I stated repeatedly, this “real advantage” is not self-evidently true.

There is significant interest in interpretive and qualitative methods for understanding online communities in a range of fields, from nonprofits to corporations to the military. Beyond the topic of online communities, anthropologists have, for instance, been engaging in important debates about the ethics of working with the military or with corporations precisely because the interest in qualitative and interpretive approaches is so strong. As Thomas notes, the dismissal of interpretive or qualitative approaches seems to appear most often in the academy itself.

One way this takes place is to do what you do in this post: refer to experimental or quantitative methods as “new,” when in fact all methods have a history and there is much that is very innovative and new in interpretive and qualitative approaches. Note how your “death knell” story sets up a zero-sum, linear narrative where one methodological approach advances at the expense of others:

I am concluding that quantitative research drives out qualitative research in a two-stage process…

…Economics and psychology are already in the final stages of this process, sociologists are losing prestige to psychologists as they resist the trend, and anthropologists are already starting down the path.

But is there really this single “path” or “trend,” with discrete “stages”? Can’t there be multiple paths? Is that not indeed more empirically accurate? Are sociologists really losing prestige to psychologists, for instance? That’s an ideological claim, but I find it hard to believe it is true in all cases. There’s plenty of good sociological research out there on topics that aren’t of interest to psychologists anyway, so I don’t see how more ostensible prestige for psychology entails less ostensible prestige for sociology. Methods can go head-to-head in a zero-sum fashion when there are resources involved—grants, faculty positions, and so on—but that is not inevitable, and it is not the whole story. Often funding structures stimulate a zero-sum mentality that is in fact not necessary.

Your phrasing “quantitative research drives out qualitative research” anthropomorphizes research itself. If we put the agents back in so that we’re talking about quantitative researchers driving out qualitative researchers, we could ask: do quantitative researchers really want that? Are they so hostile to methodological diversity that a language of death is warranted and sensible? I’m sure such researchers might exist, but are they really so dominant?

Enough of the ranking on a path, of the talk of methods “driving out” other methods, of the talk of methodological partisanship—indeed, methodological death! You have been an incredibly generous and thought-provoking interlocutor, but your “death knell” suggests to me that we are at the early stages of what will hopefully be an ongoing conversation. A conversation from which all sides can learn, not least that there may not be “sides.” A conversation that might even set an example for transdisciplinary collaboration beyond the topic of online communities!


Not that it really matters, but I think Thomas has done several rounds on this particular topic starting in 2006 or so. If you search through the archives for "quantitative" and "qualitative," there's a lot of spirited debate to be found.

I'm kind of surprised that it has come up again, because to restate my previous position, I'm for inclusion of both thoughts AND numbers into our edifice of knowledge. While it is correct that certain persons in certain institutional settings today are required to privilege quantitative over qualitative methods, quantitative analysis only gets you so far if what you want to get at is truth.

We can go back to Plato and find that empiricism, as an epistemological method, has certain limits. If the gist of this post is that the non-quantitative humanities (including philosophy, I take it) will soon be passé because they are losing their prestige value, then we should all agree that this is a dismal future for the academy. Any educational institution that abandons thick description and social theory because it is not "prestigious" fully deserves to lose all the insights that they provide. Personally, I really doubt that this is an accurate prediction of the future of our "prestigious" educational institutions.


I'm a little surprised at the idea that properly conceptualized quantitative and qualitative research should -ever- ask the same question.

The ontologies of both are fundamentally different: What the constructivist and what the neo-positivist believe can even be discovered by research shouldn't ever be the same. Indeed, the very function and form of reality for the traditions is in conflict.

On a more practical basis, Dan's right that quantitative research has a PR advantage insofar as decades of neo-positivist supremacy have conditioned the mass audience to accept percentages and bar charts as gospel. The knowledge created by qualitative research is just not as easy to promote in this environment; "83 percent of people believe..." is just much easier to relate than "Themes were parseable at the third participant. Redundancy was reached at the eighth participant and thus the presented themes were believed reflective of the participant population."

And I won't dispute that there could be a death spiral for the non-neo-positivist traditions if positivism as the only method of (truth)making continues to be espoused in undergraduate coursework.

But I really don't think it's in the cards. Let's be honest here - the military complex is pretty good at setting research agendas. The adoption of the Human Terrain initiative in Iraq, while decried by some as a misuse of anthropology, is to me a real sign that, among a decently sized group of people who have a lot of capacity to change the direction of U.S. research, qualitative methods are gaining traction and there are some arenas where neo-positivism just isn't doing the job.


I really feel well out of my element here: I'm not sure what non neo-positivism is, and I have no real understanding of "the academy" or what happens in it. So I hope you'll excuse my wading in here a bit: I frankly feel, as I did at the forum the other day, like a kid who has stumbled in on an adult party who wants to look intelligent but doesn't understand what the heck everyone's talking about.

The experience I can speak from comes from outside the academy. So again, this isn't about tenure or grants - it's about what I, as a business owner, and my clients, who own even larger businesses, might be looking for when they think about studies of virtual worlds, immersive environments, games, etc.

Second, this is based on my own impressions of and experiences in virtual worlds and a number of game environments.

So I'll start by saying that very often, quantitative data is merely a 'cost of entry'. It's junior executive stuff. It's the sort of thing that brand managers may be trained to look at, and parse, and put into fancy spreadsheets and PowerPoint presentations, because the ability to access and interpret quantitative data creates intellectual rigor, a standard for measurement (ROI and all that stuff), and a shared framework for decision making. If every junior executive were running around making decisions based on "instinct", or on "qualitative data", we'd have more anarchy than we already do.

But this idea that studies or ideas need to be quantitative to be taken seriously is bunk - or at the very least it's junior level thinking.

Leadership isn't about being able to parse data really well, or to be able to access studies with predictive qualities, it's about creativity, extrapolation, outliers, pattern variance, communication - and being able to enact change even when the data might "predict" that that type of change is impossible.

In order to guide these types of activities, the leaders I know don't look towards well-constructed formulas or exhaustive studies or whatever - they look to observational data, which is where insights can be found when the sample sizes just aren't big enough yet for the kind of "quantitative study" that Robert hangs his hat on.

If a leader is looking only at quantitative data, he's looking at markets that have already moved on.

I believe that this is as true in other areas of study as it is for enterprise. And I'd propose that non-quantitative research has become MORE important rather than less, regardless of who these people are who won't give out grants unless you end up with a bar graph.

And part of the reason for this, I believe, is that regardless of what domain you're working in, the greatest opportunities, whether for research or for improving the human condition or for building a better virtual world, are in the outliers and in CHANGE. And because the speed of change is increasing, observational methodologies that can soundly provide insight into how change is manifested, and into how people react to and interact with the tools in which change is embedded (technology), are MORE important rather than less.

I have gone from being a fairly linear thinker to someone who has seen that the effect of virtual worlds and games and so on is to open up new patterns of collaboration, imagination and culture. While I find it useful to "prove" the value of these technologies with hard stats and figures, let's face it - that's just going to get someone in the door. The real work starts because this is all about change management.

I'd also like to differentiate a little about the types of research we're talking about. On the one hand we have Robert, who hopes to use virtual worlds specifically to test hypotheses that can be applied to the real world, and then there are others who believe that virtual worlds themselves are worthy of study in their own right, as their own cultures, and the extrapolation to the broader human condition doesn't necessarily happen because the study was designed that way - but rather that insight into human culture should always shed light on our broader sense of self and society.

I still find something flawed in the idea of designing studies in virtual worlds because you want to learn something about the real one - but I'll leave that to the quantitative or experimental accountants or whoever. I'm far more interested in understanding these cultures on their own merits and seeing if this sheds any deeper insight into who we are or where we're going.

But look - quantitative data is great, it's important, it helps to give us some good solid insight into some stuff that's really useful. If I were designing a virtual world I'd WANT to know whether having avatar creation in the world or on the Web is better.

But when you get into the domains of leadership, vision, change and the future I find that the people who are trying to ride these wild waves to the future are increasingly reliant on the types of insights that make sense of the fast-moving, often chaotic mess that is the intersection of culture with technology and that don't rely on a bar graph to provide those deeper visions.


I want to thank everyone who has shone some light into frankly what I think has emerged as a very dark corner within Robert’s imagination. There’s not much more I have to add of any substance. I find it somewhat astonishing that the more “we” “qualitative” people try to broaden the discussion and continue to assert that the two should be viewed synergistically, and as some here have eloquently pointed out, address different types of research questions, the greater the heel-digging assertions that qualitative research is somehow inferior or doomed. Like Tom, I’m not particularly interested in partisan rhetoric. I’m not about to say one method is better than the other, or that one is doomed to extinction; I’ve used both in complementary ways that enhance my findings, and plan to continue doing so, regardless of what’s in or out of fashion. On a more pragmatic note, I’m not particularly concerned. Frankly, far from Robert's prognosis of a decline, interest in the type of research I do seems to be on the rise. My phone is ringing off the hook, people are e-mailing me right and left about grant and consulting opportunities, and I literally have giant corporations competing to include my lab as a partner on their government grant proposals. So if Robert’s so-called prediction is true, I don’t foresee it kicking in until long after I become “emeritus,” which is academese for “retired.”


First, let me clarify that (@Thomas) I am talking explicitly about the academy, and that (@Greg and Tom) by ‘prestige’ I mean the ability to publish in the outlets that lead to tenure for faculty in research institutions, the ability to have their research funded both internally and externally, and the ability for departments to house publication outlets and maintain hiring lines for their faculty. This isn’t entirely a zero-sum game, but there is a lot more variation in how the pie is split than in how big it is. I have assumed throughout our conversations that this is the setting we were focusing on, not whether what we do can translate into consulting gigs or publications in popular outlets.

When I look at this environment, here is what I see: Within many social-science disciplines, particularly economics and psychology, people using mathematical models and statistical techniques look very full of pie, and those who don’t are mostly outside, pressing their noses at the window. When I look across disciplines—funding and hiring allocations between economics departments and anthropology departments—it looks like the same process in slow motion. Do things really look so different to the rest of you?

@Tom, I too hope this is the start of a longer conversation, and that we can find a productive way to proceed. It clearly won’t be easy--simply looking at the comments in this thread, we speak very different languages and are convinced by different types of arguments.

But I can envision one way to proceed. Somewhere in our discussions is an interesting set of research questions for each of us; questions that would identify relationships among research methods, research cultures, individual success, ‘prestige’ (however defined), philosophy of science, and all of the other threads we have been touching on.

I propose that we each describe a research program aimed at publication in enough sufficiently highly regarded outlets to allow tenure in some academic field. How would our questions differ? How would our methods differ? Obviously, we wouldn’t do the research—but we could describe a study that someone could do, and describe what we expect (given what we already know) as a likely outcome. For example, I would probably start with this:

Question: Within and across social sciences, are increased uses of math/stat methods associated with increased share of resources (journal space, tenure-track positions, etc.)?

Method: Code every published article as either math/stat or qualitative, and belonging to a researcher, department, university and discipline. Then test whether time-series and cross-sectional variation in resources allocated to researchers, departments, universities and disciplines is driven by a greater proportion of math/stat research.
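A minimal sketch of what the second step might look like, using entirely synthetic data (the observation counts, effect size, and noise level below are all invented for illustration; a real study would code actual articles and exploit the panel structure with department and year fixed effects):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for coded articles: each observation is a
# department-year, with the share of its articles coded math/stat and a
# resource measure (e.g., funded positions). All numbers are invented,
# and a positive effect is built in purely for illustration.
n_obs = 400
mathstat_share = rng.uniform(0.0, 1.0, size=n_obs)
resources = 2.0 + 1.5 * mathstat_share + rng.normal(0.0, 0.5, size=n_obs)

# Pooled OLS of resources on math/stat share (a real analysis would add
# department and year fixed effects to separate the time-series and
# cross-sectional variation).
X = np.column_stack([np.ones(n_obs), mathstat_share])
beta, *_ = np.linalg.lstsq(X, resources, rcond=None)
print(f"intercept: {beta[0]:.2f}, effect of math/stat share: {beta[1]:.2f}")
```

Here the regression recovers the "effect" only because it was built into the synthetic data; the point is just the shape of the analysis, not the answer.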

Of course, we don’t do this only once, and this is where the real insights come in. What would happen as we researchers, coming from totally different disciplines, actually read one another’s proposals using different methods, and then propose follow-up studies?

I can see a few possible outcomes. I would like to believe that our programs would create synergies that leave all research programs stronger, showing that we can all learn from one another, and by doing so demonstrate the benefits of this cross-discipline collaboration. An alternative outcome would be that we pick such different questions and methods that our research programs exhibit little or no synergy.

But it seems entirely possible that the programs affect one another asymmetrically. For example, what if I were to identify any testable claims in qualitative studies that would be amenable to math/stat methods, and propose ways to test them? Those tests would either suggest that the assertions are wrong (imposing on qualitative methods the discipline that I originally implied in my closing remarks on Metanomics) or they would suggest that the assertions are right (in which case qualitative work serves an important role of generating ideas, but the math/stat work ultimately gets the credit for demonstrating their validity).

I can only conjecture one side of the story. I don’t have enough background to guess how the qualitative researchers would react to the math/stat research program. Would you avoid making generalizable and testable assertions? Would you conduct in-depth interviews to provide richer context for why different variables are statistically associated? Would you criticize the unexamined assumptions underlying my models and methods, or refute my conclusions in some other way?


I'd like to think almost the opposite, that the dark moment the real world as a whole is in right now should show us finally the limits of models for describing the situated, lived experience of human beings as individuals and as social beings. And therefore not just descriptive limits, but prescriptive limits. Not that models and the concepts of experimental testability are without use, without power, but that their limits are not clearly understood, and beyond those limits, something much more qualitative and messy has to take over.

Deirdre McCloskey's long-standing critique of economics homed in on one aspect of this problem quite a while ago: that mainstream economics can identify quite well a statistically significant finding, but that arguing about whether that form of significance translates to general significance, whether that finding actually means something in the world, whether it's actually something that we can or should act upon, what it means in relation to other findings, can't be done within economics as it stands. Because that gap can only be jumped with something like a philosophy, with normative claims that are consciously argued for as normative claims (rather than assumed or dictated by fiat), with a fully worked-out view of what it means to be human and what it ought to mean to be human.

How many times do we have to go around the merry-go-round where a model or testably and usefully reductive hypothesis about human behavior jumps from being something to think about to being considered as a fully-realized empirical description of how people actually act and from there to being a concretized basis for organizing social and economic action and from there to being a major cause of a serious disaster or failure in social and economic institutions before we finally say, "Enough"? The global economic system is teetering because of abstract risk-calculation models that were released into the wild to mutate into risk-management practices without taking account of the messiness of actually-existing human systems and actually-existing individual psychology.

Every single time we come up against this kind of failure, we get told that it was the fault of the model, not the fault of the very idea of applying model-making to lived experience. It's as if one looked at the particle traces from a supercollider and claimed they were a perfect photographic reproduction of biological evolution on the planet Earth, and thus provided a map for the management of zebra mussel infestations in American waterways.

Models and simulationist experiments help us to understand the world we experience and think about the world we'd like to live in. They help us discover relationships between phenomena that are not intuitive to our primate brains, and surprise us with hints of the causalities that our existing cultural and social narratives blind us to. But novels and films can accomplish some of the same purposes, and so too can ethnographies. If I wanted to understand the how, the why and the MEANING of what's happened on Wall Street recently, I might want an economist's models, but I'd also want Michael Lewis' book Liar's Poker, an attempt to represent how it felt and looked to be inside that world.

Insisting that it's just going to be models and testability and experiments from here on out is like insisting that if we cut off our ears, we'd solve the pesky issues of noise pollution and bad Top-40 pop music. Worse, if you want to try a testable proposition, how about this hypothesis: the hubris of this kind of tunnel-vision understanding of reductive social science is an important cause of repeated disasters in social policy over the last six decades in the U.S. and Western Europe. Until we learn to make decisions about what should be with a fully-fleshed appreciation of the messiness of human life and the never-resolved difficulties of questions around meaning, we'll keep blundering in that manner.


Sure this post wasn't a day early?


What he said....(@Timothy) - which was far better articulated than my version of same.


Adam H> Sure this post wasn't a day early?

Though this isn't an empirical or statistically relevant observation, this thread doesn't remind me of jesting so much as bear baiting. Shakespeare, that great qualitative explorer of the human condition, frequently had to compete for his share of markets and minds with bear-baiters.


Just to clarify, I don't mind being a baited bear now and then. And bear baiting, intentional or unintentional, makes for engaged entertainment (see the passionate comments). The greatest example on TN, I think, being the 8 pages of reader comments rejecting the claim that the Horde is Evil. We've got a different set of bears in this thread, though, given the more peculiar sort of bait that Robert provided.


I prefer to have my bear-baiting done in a more playful spirit, I guess. In other places and contexts, I've gotten to the point where I just don't even feel like it's worth it to trot out the usual roars and growls because I don't expect more noise than signal in those contexts. Game studies has mostly struck me as a field where this kind of discussion happens at a more sophisticated point, where we're trying to balance out some divergent ideas about what a healthy synthesis methodology looks like. If we're on to digging and then shoveling dirt on methodological graves, I suppose that either means that there's enough perceived cachet in the field that it's become worth it to try and corner the market, or that synthesis is too boringly sane and reasonable.


@Timothy, McCloskey's oomph test should be one more requirement when interpreting statistical results. @Dusan, or as Muhammad Yunus put it in Banker to the Poor, "But when I emerged from the comfort of the classroom, I was faced with the reality of the city streets."

Real world companies spend billions on elaborate dashboard systems that purport to monitor the companies' health. Virtual world bots can scoop up more fine-grained data than any ten sims' worth of quant analysts could ever handle. It was not a shortage of quant analysis that got us into the current economic mess.

The virtual world research community is constructing a methodology that will eventually determine academic careers. It would be a shame for that process to privilege quant over qual methods, when qualitative insights might well help the virtual worlds survive and thrive.



I'm probably more inclined to trust quantitative measurements than some here. I can even go so far as to say that there are sets of questions about humans which will never be answerable through purely qualitative research (and vice versa, of course).

And...as a result, I'm pretty amenable to some of the points made in this post. The suggestion that quantitative research holds some sort of evolutionary advantage over qualitative research in the struggle for funding is not new, of course, but it is an important one. It may be a penetrating critique of the funding mechanism or of the academe as a whole. I'm struggling, however, to see it as a critique of qualitative analysis. We should remember that an evolutionary struggle rewards those best suited to the environment, not those most fit generally (if there is such a classification).

The social sciences have faced considerable distortion in incentives over the course of the 20th century, driving new students and professors into mathematical and quantitative work. These distortions may range from the unfortunate (probably where I would place "physics envy") to the beneficial (the new avenues of research opened up by computers and numerical methods). We can imagine or recall other distorting effects, namely those imparted through the selection of funding sources. In struggling to survive, both methods have adapted and waxed and waned in terms of their significance to their respective fields. Some fields have almost completely made the switch, moving to a Kuhnian 'normal' where quantitative questions and answers make up the course of a "legitimate" (here in the eyes of the profession, of course) researcher's career. Economics is a good example, as is Political Science post-Rochester school. The paths taken by both of these disciplines have been distinct, but the direction (and the mechanism) has been the same.

Professors are judged based on their publications, and schools are judged based on the placement of their graduates. Also, professors are judged (in a sense) on the fecundity of their PhD students, especially should those students cite the work of the professors--all of these mechanisms point toward a generational model of selection.
I would also argue that none of them hinge upon the existence of virtual worlds.

I want to pick a specific thread up from the OP (haven't read all the responses yet). There is some sense that virtual worlds present some new source of data to be extracted. This is true, obviously. But the conclusion, that this new source of data will cement an advantage held by quantitative methods, is suspect.

Let's take SoE's data dump as an example. This is a font of information which may only be tapped through quantitative methods. There is simply too much information to be grokked through qualitative analysis. Parsing that data will require a skill set which doesn't correspond well to classical studies or cultural anthropology. Those researchers who work on it are going to need to write custom db search programs, create estimators for the data (based presumably on the responses to the survey) and build some fairly sophisticated models to test those estimators. But the ability to pick and choose among data and select from a list of possible questions to answer doesn't come from that skill set. This is DIFFERENT from saying that those who possess that skill set are unable to generate novel or important questions (just as, obviously, I don't mean to say that someone who can quote Durkheim from memory can't write regular expressions). Meaningful insight about behavior doesn't present itself when given enough data; spurious correlations do.
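That last point is easy to demonstrate with a toy example (everything below is synthetic; the player and metric counts are invented): correlate an outcome that is pure noise against hundreds of equally meaningless metrics, and some "findings" emerge by chance alone.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1,000 hypothetical players, one outcome, and 500 unrelated metrics.
# Everything is pure noise by construction, so any correlation with the
# outcome is spurious.
n_players, n_metrics = 1000, 500
outcome = rng.normal(size=n_players)
metrics = rng.normal(size=(n_players, n_metrics))

# Correlate every metric with the outcome and keep the strongest.
corrs = np.array([np.corrcoef(metrics[:, j], outcome)[0, 1]
                  for j in range(n_metrics)])
strongest = float(np.abs(corrs).max())

# Rough 5% significance cutoff for a single correlation at this n.
cutoff = 2.0 / np.sqrt(n_players)
n_sig = int((np.abs(corrs) > cutoff).sum())
print(f"strongest |r| among noise metrics: {strongest:.3f}, "
      f"'significant' metrics: {n_sig} of {n_metrics}")
```

With this many noise metrics, a couple dozen typically clear the conventional significance cutoff, which is exactly the spurious-correlation trap a mountain of data sets for the unwary.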

Importantly, it would be impossible to generate meaningful hypotheses about EQII (even given TBs of data) without making some assumptions about player behavior. Those assumptions, set in the researcher's mind through years of game theory and mathematics, will probably miss the boat. Following up on Tim's commentary, I don't just want to read Liar's Poker, I want to read Stigum's The Money Market before I do quant research in finance. Qualitative methods prepare us to understand mechanisms, choices and motivations in a way that quant methods cannot.

I'm probably recapitulating much of what was already said at the conference, but I get a free pass, because I wasn't there. :)


As a note, it was William Stanley Jevons (one of the 'fathers' of neoclassical economics) who said, back in the 19th century, that the business cycle would be predictable given sufficient data. Keep in mind that he also suspected sunspots were linked to the business cycle. :) I figured The Use of Knowledge in Society would have dismissed that aspiration pretty summarily.

Quantitative methods don't suffer from a lack of data or experimentation, but from the core problems of indeterminacy and endogeneity. This is especially so in econometrics. We cannot observe a counterfactual and we cannot eliminate the question of self-selection. Theoretical work in econometrics treats these two problems seriously and attempts to work around them, but it also attempts to probe the limits of that quantitative work. We will find those limits, and the questions which arise outside those limits will have to be probed through qualitative work.
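A toy simulation makes the self-selection problem concrete. The guild scenario and every effect size below are invented purely for illustration; the point is only that, without observing the counterfactual, a naive group comparison conflates the treatment with the unobserved trait that drove selection into it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# An unobserved trait (call it sociability) drives BOTH joining a guild
# and staying subscribed -- the classic self-selection setup.
sociability = rng.normal(size=n)

# Players self-select: more sociable players are likelier to join a guild
# (logistic selection on the unobserved trait).
joins_guild = rng.random(n) < 1.0 / (1.0 + np.exp(-sociability))

# The true causal effect of guild membership on months retained is +1.0,
# but sociability independently adds to retention as well.
months_retained = 6.0 + 1.0 * joins_guild + 2.0 * sociability + rng.normal(size=n)

# Naive observational estimate: difference in group means.
naive_effect = (months_retained[joins_guild].mean()
                - months_retained[~joins_guild].mean())

print(f"true effect: 1.0, naive estimate: {naive_effect:.2f}")
```

The naive estimate lands well above the true effect of 1.0, because guild members would have retained longer even outside a guild, and no amount of additional observational data on the same players removes that bias.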


@Adam, it is nice to have someone provide a few words of support! I totally agree that the tendency for fields to 'turn quant' is not a good thing; it is just a very strong tendency I have observed as fields get more and more access to data. It may be that the complexity of online communities will slow or stop this process, but given what has happened in economics (also very complex), I am not so sure.

I think the following quote of yours is the way I would like to answer @Timothy Burke on how we can start a synthesis methodology.

"Qualitative methods prepare us to understand mechanisms, choices and motivations in a way that quant methods cannot."

My hope was that my fellow panelists would respond:

"Yes! We could actually work together that way. We will dig into the messiness of contingent circumstances, and in addition to publishing the type of books and articles we do, we can also help you understand our conclusions and theories. We can work with you to translate them into hypotheses that you can use your methods to test in the way you are trained to do. We both learn and grow from this process."

But instead, this type of proposal appears to be read as the insulting claim that qualitative* methods are just the scaffolding we need in place before we can get into the "real" research. And then the discussion ends up focusing on the relative strengths and weaknesses of the two approaches, which is interesting but not particularly productive.

I cannot easily see another way we can work together. Maybe the other commenters have suggested an alternative form of collaboration, but I am so unfamiliar with their language and forms of argument that I may just be missing it.

Until we resolve this specific point of contention, or find another form of synthesis, I don't see any better outcome than Tom's suggestion that we allow many methods to flourish--settling for parallel play instead of actual collaboration.

*My original comment said 'quantitative' here. Whoops!


I am sorry to say, Robert, that *on the whole* such collaborations, when they fail (and in my opinion they fail far less often than you seem to expect), fail not because of a lack of epistemological openness on the part of exploratory or qualitative researchers. Instead -- and this entire episode has been, and to my chagrin continues to be, a perfect example of this -- they fail because of the overweening pride of researchers like yourself, who instead of truly conceding graciously and showing some awareness of their own perhaps problematic assumptions, continue (as in this latest example) to find ways to blame others.

It may be arrogant to say, but I can nonetheless claim confidently that you who have cast yourself as the scold of the qualitative social sciences were in fact schooled on Monday. Not only the panelists but also many of the members of the audience zeroed in on how shallow your understanding of the philosophy of science (and all empirical inquiry) was, as well as objecting to your blithe characterizations of the broad contempt in which the qualitative social sciences are held.

I see no evidence of humility, of doubt, of the characteristics of a true scholar, here. No amount of seemingly cheerful but subtly unmoved rejoinders from your end obscures the fact that you assume you hold the higher ground. What you don't seem to understand is that everyone else involved is aiming for a level playing field. Again, I am sorry to say that nothing about this bear-baiting post and your subsequent comments has led me to be any more sanguine about your willingness to accord all the various approaches to social science the respect they deserve. In that light, your sense of the impossibility of such collaborative work may be read less as a reliable picture of the state of things, and more as a result of your own experience.


This sounds like a good thing to me!


Robert, consider this. Developers of virtual worlds have even more access to the kind of quantitative data that you think intrinsically turns disciplines into quantitative disciplines. Researchers have to beg for some of that data, or find ways to scrape and mine it from publicly available informational streams.

Access to exceptionally rich amounts of quantitative data about player practices, social identities, times and forms of play, has not helped many live management teams accomplish what you would think would be the simplest of all tasks possible using that data: secure retention rates, satisfy players, make a commercial virtual world which reliably returns long-term profits to its developer. I know of at least two MMOs that made extensive use of quantitative data on player behavior which failed rather spectacularly to meet commercial expectations.

You could suggest that this is just a failure of execution--after all, there is no guarantee that a quantitative researcher in the social sciences will be correct in their interpretation of data. I'd suggest, however, that developers who do not *also* have a sophisticated qualitative and aesthetic understanding of their own product are going to fail no matter how accurate their interpretation of the quantitative data they have.

Without a qualitative approach to knowing a virtual world, a developer is not going to know what the quantitative data really means. They're going to miss important dimensions of behavior that don't show up in the numbers: how players speak to each other, what players imagine themselves to be doing, what players find to be fun, etcetera: everything that goes into constituting culture in a given world. I might know exactly how many male players have female characters, and maybe I'd have some kind of survey data about their self-reported reasons for having female characters, correlated against a lot of other information I have about them. But that will get me only part-way to understanding what this means, and why players themselves talk about this aspect of play in a virtual world almost incessantly.

If I don't have an aesthetic understanding of my virtual world as a developer, I won't know why people like some parts of the world and don't like others--and I certainly won't be able to respond usefully if I lack an aesthetic understanding. EQ2 when it first went live struck me as a game without a controlling aesthetic idea: character models that were almost purely fantasy conventions, using generic styles of computer-mediated visual representation. I think a lot of people see that as one reason why that game struggled a lot.

A company like Blizzard can't answer the question, "Should we spend time making an instance that's too challenging for most players?" in a purely quantitative sense. They can't answer, "What draws people to our game and keeps them here?" in that way. They can't answer, "Will a subscriber today be a subscriber tomorrow?" They can't even say, "Why are those players in that place rather than this place?" with any certainty. World of Warcraft wouldn't be the success it is without a confident aesthetic sensibility, without a real sense of style in its visuals and its narrative. That doesn't come from quantitative data.

So the people sitting on top of the biggest pile of data of all fail at understanding a social and cultural domain that they have enormous power over if all they do is scrape numbers. They equally fail, of course, if all they do is try for an embedded, ethnographic or qualitative sense of what's happening in their worlds.

I have no idea why you think that being quantitatively data-rich inevitably drives any field of knowledge towards nothing more than that kind of data. The best live management practices don't reflect that kind of inevitability, so why should academia?


Timothy Burke says:

"The best live management practices don't reflect that kind of inevitability, so why should academia?"

I think the answer lies in the very different goals of Blizzard and academics. Blizzard is going to devote its research resources in ways that they believe will help them make better decisions.

In contrast, academics like to surround themselves with people they can collaborate with. Academics prefer to take jobs where they have a good research fit with the department, and the departments feel likewise. Deans may be a bit pesky in demanding diversity in thought, and coverage of a broad range of courses (which means a broad range of faculty), but they are a far less powerful counter to this positive feedback loop than Blizzard's decision-focused resource allocation would be.

There is also a fair bit of faculty movement across universities with similar levels of prestige--lots of people go from Harvard to Stanford or vice versa, but not so many from Harvard to Noname State or vice versa. In my field of accounting, this has led to the best schools being almost exclusively populated by econometric researchers, and experimental researchers concentrated in the second-tier.

Now it isn't entirely clear why quant methods won the day in business/econ and psychology. Perhaps anthropology will go the other way. But the 'can't we all just get along' argument doesn't work tremendously well in academia, since there is such a strong force toward hiring people who fit within a department's existing paradigm.



Well, in an effort to patch up this thread, can we all agree that what is going on here seems to boil down to something *unfortunate* -- maybe either mutual misunderstanding, mutual distrust, or mutual disrespect between two (perceived) academic factions?

Let me be a little more charitable toward the OP than I was. It seemed a lot like bait to me, but if Robert is despairing over the lost prospect of genuine collaboration, then I take it that he *wasn't* trying to provoke ethnographers for entertainment purposes.

This is about a misunderstanding, then.

Let me try to iron it out.

As Tim noted, the zero-sum game that Robert suggests is playing out has been playing out in universities and other public institutions for many, many years. As Tim also notes, beyond being familiar to those who work in universities, it has played out in broader politics. It's an old epistemological question, not unique to virtual worlds.

Robert's concerns about "qualitative" thought, and the comments of Chip & Randy in a previous thread to the effect that the humanities are weak, find a distant ancestor in Aristophanes, who poked fun at the Sophists, their research interests and their social institutions. Doubts about proper methods of the pursuit of knowledge are very old, even without getting into modernism and positivism and anti-anti-anecdotalism, etc. Though I didn't tune into the discussion and I haven't read the transcript, I'm sure that Celia, Tom, & Thomas said as much.

But still, this whole thing persists. A while back, Nate pointed out that in Eve, the farmer and the cowman can be friends. That's true in this context too. All it takes is a little bit of respect and humility.

That respect exists. Many people in the sciences love the humanities, many people in the humanities love the sciences, and many people manage to contribute and collaborate in ways that cross boundaries. That is the way it should be. I may wear rose-colored glasses, but I find the university and the interdisciplinary activity within it truly wonderful, especially today.

Robert's impression that there are two camps is partly true too, though, just given institutional arrangements and tensions over, as he calls it, a limited pie. I personally am not fascinated by the pie, but I do care about how people get along. I've found that a lack of mutual respect and the sting of injured pride, on both sides, often create the sense of camps and conflict.

E.g., the philosopher says to the scientist, "I don't do the kind of thing you do, but I really value Z, and sometimes some of the things you do actually help me in understanding Z. However, many other things you do really show that you don't really understand Z. You should think about Z more."

The scientist says to the philosopher, "I don't really understand what you're doing, but I use methods A and B to get at C, and knowing C actually makes things work better in the world. Perhaps if you wanted to make a real contribution to productive knowledge, you could lend whatever specialized knowledge you actually have to looking at A, B, and C."

The philosopher replies, "If you really want to get at C, you need to learn about Z, because Z is more important than C. That's why I study it."

The scientist replies, "Z? Are you serious? You should care about C. Z is only useful to the extent it gets at C."

The philosopher thinks: OMG, scientists think philosophy is all worthless word salad.

The scientist thinks: OMG, the philosophers think we're philistines.

Injured pride on both sides. Scientists share their frustration with each other and philosophers close ranks.

It doesn't really play out this way most of the time in my opinion, but some of this happens fairly often and it's rather sad when it does.

But of course, it *really* doesn't have to be that way. Genuine collaboration does occur and it occurs here on Terra Nova regularly. It's wonderful when it does occur. But for that to happen, there needs to be respect and humility, the recognition that in order to be worthwhile, a discipline does not need to completely abandon its methodology.

So Robert, that's why I presumed you understood that when you declared the sound of a "Death Knell for Qualitative Methods," you were really just looking to bait bears. If you really *are* hoping for greater collaboration, you need to be very careful about the messages you're sending by your choice of words.

At the same time, I'm happy to pass around the blame. Some folks in the humanities (I'm looking at ME here) can be pretty prideful and dismissive at times, and if any of your despair over collaboration was motivated by traumatic experiences with folks in the humanities camp (as it seems was the case with Chip), I hope this thread is not making matters worse because of the heat it has generated.


I posted before reading Robert's last comments. Just two more things.

1) I think there's a pretty good reason why quantitative methods will not "win the day" in cultural anthropology but stand a better chance of doing so in a business school.

2) As to the matter of whether a faculty member would transfer from Harvard to Noname State --the real problem with that, as I see it, is that you can't teach a Sneetch.


Very well said, Greg.

I apologize for being somewhat insensitive in my choice of words, and for underestimating how easy it is for people to assume that a prediction is a value judgment. (One more time, I am not saying ‘the death of qualitative research’ would be a good thing, or that it would demonstrate the superiority of quant methods, just that there are strong forces for it. I also understand the arguments against the accuracy of the prediction.)

I think if you look at the original interview with Tom and Celia (http://metanomics.net/archive030209), you will see that I take qualitative work seriously and respectfully. Things went awry when I opined that some of the ideas I learned were so interesting that some enterprising scholars should use virtual worlds as an opportunity to apply experimental methods to test them.

I guess I hoped people would respond with 'you know, here is something I learned when I conducted an in-depth study of X; you might be interested in testing whether this particular result would generalize.' Or more likely, 'hey, you know there are already some people doing that; have you seen the work by so-and-so?' Or perhaps 'knock yourself out, but I don't really find that type of work very interesting.' But instead, my comments are taken as being entirely pejorative—a scolding—as if somehow I declaimed the worthlessness of one field and the perfection of another. That certainly wasn't my intent, as I think that would be a silly sentiment to express.

For me, this entire thread would be worthwhile if I could just get two comments along the lines of what I originally hoped for: one or two specific suggestions of ideas from qualitative research that I might find it interesting to test experimentally. I end up with a chance for a new research program, the researchers who inspired the study get citations and exposure to researchers in other fields, and just maybe they even learn something from the work I do inspired by their prior investigations.


Rob, look, just on your predictive argument, that whether or not you approve, qualitative research is on the chopping block any time a field gets hold of new data: can I just suggest that your account of the intellectual and institutional history of academic disciplines and academic units needs some work? I mean, you're starting your generalization from within accounting and business. Even there, take a gander over in the direction of marketing, advertising and consumer studies, where qualitative work has held its own for a really long time in the face of well-established quantitative methodologies. I just don't think you understand the history of anthropology as a discipline if you think it's just the next domino to fall (again, whether or not you approve of that happening). In economics, arguably, some of the most recent movement is away from modelling and towards behavioral economics, which uses some qualitative methods. Political science is probably the discipline that most closely coheres to your description of a move towards quantitative and experimental work, but even that is a very nuanced and complicated transformation.

You're also writing a massive range of academic institutions out of your view of what's happening in disciplines as a whole, with your focus on one very exclusive type of R1 university. A discipline as it exists in seven or eight institutions is not a discipline as it exists across the whole of academia.

On your last paragraph, you just don't seem interested in the epistemological objections that people have to your demand for "qualitative research that you could test experimentally". There doesn't seem to be much point in suggesting that you take a look at work that you aren't willing to take seriously in terms of its own epistemology--no more than I'd recommend to travellers on the road that they sit back and relax in Procrustes' bed.

A modest suggestion: consider an "experiment" where a virtual world starts with forty servers that each have one minor variation in the initial ruleset. Not because you're trying to test some single-variable hypothesis about player behavior, but because you're interested in how the long-term culture of a given world will be different descriptively and qualitatively given different initial conditions or underlying structures. It's likely that after a sufficiently long time, you might have a working argument about the likely or probable consequences of a particular initial condition on subsequent social formations. If you're interested in working through that argument in qualitative terms, however, the point is to work to that conclusion post-facto. Obviously, you have to have some hypothesis or intuition to start in order to know which condition or rule you want to vary, but you want to avoid truncating your "experiment" in order to reach a highly constrained, reductionist test of that hypothesis.

This is not a view of experiment that crawls up out of some humanistic sub-basement, either. It's pretty much the same argument about social science and experimentation that some complex-systems researchers make, including simulationists with an interest in artificial societies.


The point in Tim's last two paragraphs is critical (and was partially what I was trying to articulate). I find that most good quantitative work I read proceeds from penetrating intuition about the nature of a particular system. Because the language of the discipline (at least in economics) privileges quantitative work and (largely) positivism, the result may be that what is essentially qualitative work drives the insight, and quantitative work (or, more precisely, formal modeling) is added ex post to avoid stricture.

Also, I would suggest that contra Animal Spirits, most of the behavioral work in economics is not a move away from quantitative analysis and modeling so much as it is a move away from a baseline assumption of rationality in actors. And there are still huge sections of that discipline which avoid behavioral models for a number of reasons (ranging from the defensible to the indefensible).


Tim, your proposal is an interesting one. The controlled manipulations could be very simple and focused --for example, a single change in how 'friending' is mediated--but still have incredibly complex, long-range effects.

The simplicity of the manipulations would allow people like me to conduct the type of comparisons I am trained to make--statistical comparisons of simple measures to test a simple hypothesis. At the same time, qualitative researchers could explore the vast complexity of the worlds, and draw their own insights.
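The kind of simple comparison described above can be sketched in a few lines. The scenario and all the numbers are hypothetical (an invented "friend requests per player" measure on two servers with different friending rules); the sketch just shows a Welch two-sample t-test computed directly with numpy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: average daily friend requests per player on servers
# running ruleset A vs. ruleset B. These numbers are invented purely
# for illustration of the comparison, not taken from any real world.
server_a = rng.normal(loc=3.0, scale=1.2, size=400)  # baseline friending rule
server_b = rng.normal(loc=3.4, scale=1.2, size=400)  # modified friending rule

# Welch's two-sample t-statistic (unequal variances allowed).
mean_a, mean_b = server_a.mean(), server_b.mean()
var_a, var_b = server_a.var(ddof=1), server_b.var(ddof=1)
se = np.sqrt(var_a / len(server_a) + var_b / len(server_b))
t_stat = (mean_b - mean_a) / se

print(f"difference = {mean_b - mean_a:.2f}, t = {t_stat:.2f}")
```

A |t| well above 2 would suggest the ruleset change shifted the simple measure; the qualitative question of *why* players friend differently under the new rule is exactly what this test cannot answer on its own.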

The question then becomes: what set of manipulations would be of interest to both quantitative and qualitative researchers? Do you believe that the epistemological differences are so profound that we couldn't find any? That would be a big surprise to me.

Also, @Timothy, I am glad to take qualitative research "seriously in terms of its own epistemology." But what is wrong with ALSO extracting from that work what I can to help me do my own type of work better, using the epistemological framework on which my methods are based? I am also glad to acknowledge the shortcomings of my own methods and the epistemology on which they are based. But those shortcomings mean that my methods are limited, not that they are useless.


By the way, the transcript is now available--see the update to the original post. We haven't cleaned it up yet, though, so there are some misspellings, etc. Reader beware!


@robert and myself

Haha, here I was telling you about behavioral economics and quant research...just looked at your web-page. {sheepish}

As for collaboration....I don't know. I think that the differences run deeper than the decision to place 'preferences' on the left or the right hand side of the equation. The value judgments associated with training in any normal science tend to be exclusive. I'll pick something of a pedestrian example. My Labor Econ. course spent a few days focused on workplace decision differences between France and the U.S. over the last few decades. Differences in wages, hours worked, employment protection, and rates of unionization were all discussed. We talked about possible causes for the differences and historical roots for those proximate causes, and I (in my inexperience) raised my hand and asked how much of the difference in workplace habits between the two countries could be chalked up to culture. Predictably, the professor (whom I like very much) responded as if I had asked if the moon was made from cheese. The entire avenue of thinking stemming from cultural differences as ultimate (or even proximate) causes was rejected as juvenile. Or perhaps as backwards (back to the LHS/RHS question). Cultural differences were the fruit of different earnings profiles and labor market policies, not the impetus for those differences. This is galling to anyone who is convinced that the flow of causation operates the other way, or is at least convinced that feedback occurs.

Even more maligned in this split than the cultural anthropologists are the English PhDs. We can have a chuckle at the XKCD comic on the subject, but it is awfully hard to marry textual interpretation with quantitative research. This is doubly so when some famous "quant" forensic analysis of literature is borderline fraudulent, e.g. SHAXICON.

This parochialism isn't necessarily one way, of course.


@Greg (if you are still reading this thread), I know that there are a growing number of legal scholars who are using statistical methods, which is quite a break from the usual form of legal scholarship. How is this new group being accepted? Are they growing, was it just a fad, have they been able to establish credibility?

@Adam, I am not at ALL surprised that an economist would scoff at the effect of culture on....well, what I would tend to call 'workplace culture.' No wonder the anthropologists are so ready to assume I am not taking them seriously!

We have similar issues in accounting, looking at the tremendous divergence of accounting practices across countries. It is the very rare discussion that takes cultural differences seriously--unless you count the common law/civil law distinction as "cultural."


It seems to me that there is one kind of synthesis we can aim for in the production of social knowledge that argues for a division of epistemological and methodological labor, that every blind man has his part of the elephant to feel and describe. Even in such a division, we might acknowledge that there are areas where separate methodologies converge on one part of the elephant and become rivalrous, but then it strikes me that there are smart and also highly bounded or limited kinds of discussions to be had about those contentious or overlapping domains. Otherwise, we'd talk about methodological pluralism as a good thing--which means in part saying, "If the question we aspire to ask is this kind of question, use this kind of method. If it's another kind of question, use another. If you're not personally comfortable with the method indicated, leave the problem to someone else".

The approach you're asking for is rather more like the way I read E.O. Wilson in Consilience: he desperately seeks a synthesis of knowledge, but entirely on his terms. So looking at literary criticism, he more or less says, "Well, literature itself is interesting for what it tells us about human neurobiology and communication, and we should study literature as evidence about neurobiology, communication and so on." E.g., he's only interested in literary criticism at the moments where he can make use of it in what he sees as the only legitimate methodology and epistemology; the rest isn't doing anything that he thinks matters or belongs in a synthesis form of knowledge.

I think that's the problem with being too quick to strip off the epistemological surroundings of another discipline's mode of inquiry in order to make use of something you find there within your own preferred method. It basically implies that the other epistemology wasn't necessary or required to generate that knowledge, and that the work is reducible to the single finding or proposition that you can make use of. It's ok if you say, "Look, I'm just trawling through qualitative work to find interesting new ideas", e.g., this is just about your own creative methods. Maybe it's even ok if you argue that this is about the limits of your own methodological toolkit, that turning to qualitative work is an acknowledgement that you've hit a boundary and you're having to tentatively reach beyond it. But if that's so, you need to respect whatever it is that allows those methods to produce something different. I know I see very firm boundaries to what qualitative methods can know, and very crucial questions that lie beyond those boundaries. If I personally wanted to deal with those questions, I'd feel obligated to step however tentatively I could into terra incognita, and try to work with some unfamiliar tools and methods.

When you start expecting qualitative methods to generate results in a form that you can then readily use in a different methodological and epistemological practice, you're basically saying that they don't have any epistemological integrity of their own that you're bound to respect: that all that matters is whether they produce something you can strip out of them and use successfully in a different kind of practice.


/agrees with Tim

And my perception is that this problem -- which, borrowing from Adam, we might call epistemological parochialism -- is not uncommon in the academy. But I'm enough of a pluralist or a pragmatist that I'm willing to live pretty cheerfully with it as long as folks are civil with their methodological biases. At the same time, as I said earlier, I think the only way to grasp the truth is to appreciate the whole elephant.

@Robert, legal scholarship is very eclectic. Law schools began as trade schools and law professors don't have a "law PhD" experience that establishes some of the disciplinary solidarity found in other areas. But at the same time, most faculties I know are highly collegial. In fact, recent trends in law school hiring are toward finding folks with PhDs in, e.g., economics, philosophy, history, etc. And we can actually get along. Re whether law even fits in the humanities sphere, Balkin & Levinson have some thoughts on the question here.


Great link, Greg--thanks!

@Timothy, love the characterization of E.O. Wilson. I think I am much more in the trawling camp. I don't want to imply that I am "expecting qualitative methods to generate results in a form that you can then readily use in a different methodological and epistemological practice." But from what I have read of this work, it sure seems pretty easy to pull out such results. I don't view that as the basis for a value judgment--the work should be judged on its own terms.

I'd also like to point out that your '40 servers' research proposal is yet another form of collaboration, in which researchers from different traditions come together to create a setting that all are likely to find interesting, as they analyze it in their different ways.

As a final thought, comment threads like this can make disagreements seem more severe than they actually are. As always, xkcd makes the point gracefully.


I prefer Penny Arcade's take on it. [NSFW]


Can we all shout out either "tastes great" or "less filling"?

So, two cents from a guy with quant data:

a) I don't think that any one approach is going to dominate, nor should it. The interest in quant data is just the shock of the new. We've had a couple of decades of almost entirely qual approaches in online worlds and now here's this new approach. Well, it's not going to take over, but it is going to gain interest and it may take some of the funding pie.

b) The quant data is useless without knowledge of the context that generated it. I think I can absolutely answer a lot of the questions that Tim thinks I can't with these kinds of data, but there's no doubt that they don't answer everything, or with 100% accuracy.

The real gains are when we can combine both approaches, imo. We get the context and depth that qual approaches are good at and the breadth and generalizability that quant approaches are good at. Both can do a little of the other, but I figure why not play to their strengths?

Adam wrote: "Let's take SoE's data dump as an example. this is a font of information which may only be tapped through quantitative methods. There is simply too much information to be grokked through qualitative analysis. Parsing that data will require a skill set which doesn't correspond well to classical studies or cultural anthropology."

Actually, I could use some more anthropologists working on the data. I can chart, predict and analyze a lot of stuff, but without "local" knowledge, I can't always make sense of it. Sometimes I need the peanut butter to go with my chocolate.

So what I'm really saying is that this whole thread needs some Reese's Peanut Butter Cups. :)


Yum. :)

On economists and prediction, by Harvey C. Mansfield.


@Thomas....you know how when you learn a new word, it seems to pop up everywhere. Your link is one of several just over the last week railing against pseudo-objectivity (or the cult of objectivity) in economics, and even blaming such research and researchers for the financial crisis. This great timing shows, first, that the financial pundits must be watching Metanomics, and second, that I have some extra work to do on this topic for my day job.

Go here to see my first step toward starting the discussion in my own research community.


Robert, could you please point to any quantitative research you or your students have done inworld in Second Life?

Metanomics, in fact, is qualitative research.


I see Metanomics as journalism rather than qualitative research. But it does seem that the methods of the two disciplines have some overlap.

Metanomics has given me a number of ideas for quantitative research. A few of those ideas seem strong enough that they are nearing the top of my to-do list. But it will be a while....




How many times do we have to go around this merry-go-round before we finally say, "Enough"? A model, a testably and usefully reductive hypothesis about human behavior, jumps from being something to think about, to being treated as a fully realized empirical description of how people actually act, from there to being a concretized basis for organizing social and economic action, and from there to being a major cause of a serious disaster or failure in social and economic institutions. The global economic system is teetering because abstract risk-calculation models were released into the wild to mutate into risk-management practices without taking account of the messiness of actually existing human systems and actually existing individual psychology.


I'd like to think almost the opposite: that the dark moment the real world as a whole is in right now should finally show us the limits of models for describing the situated, lived experience of human beings as individuals and as social beings. And therefore not just descriptive limits, but prescriptive limits. Not that models and the concepts of experimental testability are without use, without power, but that their limits are not clearly understood, and beyond those limits, something much more qualitative and messy has to take over.



Mr. Bloomfield, come to the dark side. Anthropology. Call me some time if you want to know why.



The comments to this entry are closed.