After reading Max Bazerman’s Inside an Academic Scandal, Rick Hess came away wondering: What if social science is a scam?
I couldn’t help but think Bazerman’s faith is misplaced. To start with, many of the studies he references over the course of the book strike me as unnecessary or simply pointless. A (hugely incomplete) list of the published studies includes those that examine whether counterfeit products make people feel insecure; whether increasing one’s “perceived” height, such as by riding an escalator, leads to more altruistic behavior; whether networking leads people to think of words related to cleanliness; whether messy workplaces are more productive; whether commercials with skinny models are less effective than those with other models; and whether people thinking about death eat more candy. These aren’t studies Bazerman is spotlighting but rather a sampling of the scholarly research he touches upon in the course of his narrative. It’s telling that he seems to see such studies as unexceptional.
To my jaded eye, such research seems less like “science” and more like “academics amusing themselves in polite company.” Indeed, Bazerman relates an almost too-perfect illustration of this dynamic. A doctoral student whose thesis included an extended critique of Gino’s networking/cleanliness study (which was also later found to be fraudulent) was advised by a member of her dissertation committee to delete the section. Why? Because “academic research is like a conversation at a cocktail party,” and her critique would be seen as rude and inappropriate. However inane we might find the research question, remember that Gino’s study was considered “real” social science, published by an esteemed scholar in a prestigious academic journal. And I haven’t even touched on the faddish, data-free, critical-theory argle-bargle that constitutes such a big chunk of academic publishing.
I’m left wondering how many research studies are just a playground for a privileged caste of credentialed scribblers to amuse themselves and build comfortable careers, all with the aid of hefty public subsidies. Scholars certainly don’t think so. They tell us research is a dynamic endeavor and we have to trust that these explorations are how we surface unexpected, important truths. But should we actually buy that? I’m inclined to think that William Proxmire had a point with his “Golden Fleece” awards, and that we’re way overdue for a serious conversation about the kinds of research that merit public support.
Bazerman laments that even the universities don’t seem to take research misconduct all that seriously. It’s hard to take it seriously when you prioritize PR and legal considerations over transparency. For instance, when (ethics scholar!) Ariely’s fraud came to light, Duke University’s only response was to quietly have him complete an eight-week professional ethics course. (Of course, Duke itself had recently been fined $112 million for using falsified data to win $200 million in federal funding.)
I know I sound like a broken record, but it’s hard to ignore the opportunity cost of all this. Gino, for instance, published more than 130 papers between 2007 and 2022—of which dozens appeared to be plagued by falsification and misconduct. Meanwhile, Bazerman recounts, “Gino made little time to meet with doctoral students, often failed to show up for meetings, canceled meetings at the last minute, and sometimes called [her colleague] Julia at the last minute to ask Julia to cover her teaching obligations.”
What exactly was this Harvard professor (and fount of falsified research) doing instead of teaching or mentoring? Bazerman explains that the “division of labor” meant that junior members of her team “directed the work and mentored students, while Gino offered occasional input, paid the bills, and used her resources and connections to promote the work.”
Not only does all this raise major questions about the utility of social science research, it also casts serious doubt on its reliability. Bazerman describes another of this century’s more infamous academic scandals, which unfolded a decade ago in the Netherlands when hotshot Tilburg University social psychologist Diederik Stapel churned out scores of papers with doctored or fabricated data. Stapel had a hypothesis: that looking at pictures of an attractive person would affect self-image negatively. (Why this needed to be researched at all, much less by a publicly subsidized scholar rather than a bored marketing intern at Estée Lauder, isn’t clear to me.) In any event, Stapel was sure he was right, “but the actual data didn’t support it.” Consequently, Bazerman relates, “Stapel sat at his kitchen table and began typing numbers into his computer that would produce the intended effect.” His study was published in the prominent Journal of Personality and Social Psychology in 2004. Before being discovered, Stapel committed fraud in at least 55 papers, and his fictional data was used in ten PhD dissertations.
For Bazerman, Stapel’s folly is a terrible abuse of science. I agree. But, even if Stapel’s numbers had supported his hypothesis, I wouldn’t be all that impressed. I wouldn’t have come away convinced that Stapel surfaced some important, fundamental truth about human nature. More likely, I’d have thought it was a silly question and wondered about the soundness of his research design.
Now, I don’t mean this as some kind of anti-research screed. There are, of course, purposeful, comprehensive, data-conscious research enterprises that are attempting to answer questions of pressing social import. (This is the kind of scholarship that we celebrate at EdNext.) But, in Bazerman’s description of Stapel, I couldn’t help but think of all the thousands and thousands of social scientists who spend hours each day hunched over laptops playing with data files that they didn’t collect, don’t fully understand, and frequently take on faith. They don’t know exactly how the data was obtained, the vagaries of the collection, or how sturdy it is. How confident can we be in the results that get spit out, even when they’re “statistically significant”? I’d argue: A lot less than we typically are.
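To put a number on that skepticism, here’s a minimal sketch of my own (not anything from Bazerman’s book): test enough hypotheses on pure noise and some will clear the conventional p < 0.05 bar by luck alone. The short Python simulation below runs 100 “studies,” each comparing two groups drawn from the same distribution, so every nominally significant result it finds is spurious by construction.

    import math
    import random
    import statistics

    random.seed(42)

    def two_sample_p_value(a, b):
        # Crude two-sample z-test; adequate for illustrating noise at n = 50.
        se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
        z = (statistics.mean(a) - statistics.mean(b)) / se
        # Two-sided p-value from the standard normal CDF.
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    significant = 0
    for _ in range(100):  # 100 "studies," each comparing two groups from the SAME distribution
        group_a = [random.gauss(0, 1) for _ in range(50)]
        group_b = [random.gauss(0, 1) for _ in range(50)]
        if two_sample_p_value(group_a, group_b) < 0.05:
            significant += 1

    print(significant, "of 100 null comparisons were 'significant' at p < 0.05")

Roughly five of the hundred will come back “significant” purely by chance, and each would look like a publishable finding to an analyst who stopped there. That’s just the standard multiple-comparisons arithmetic, and it’s why unexamined “significance” is so cheap.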
And it’s not like the researchers invested in these projects are scrupulously asking, “Is this true?” Rather, as Bazerman notes, the incentives to pump out papers or make a splash can lead to all manner of shortcuts. He points out that even esteemed scholars rarely review their co-authors’ data, because division of labor is a recipe for speed. They delegate much of the data collection to doctoral students because that helps move things along. This blind faith in data files is baked into the academic formula for grants, jobs, influence, and professional success (whether or not the results can be trusted).
In my experience at a major research university, perhaps as many as a fourth of the faculty, or more, engaged in dubious practices. I had a dean who actively tried to cover up two cases of research fraud (one with potentially dangerous consequences). The department’s senior faculty were able to force disciplinary action in one case (dismissal) but not the other. A third offender, who escaped punishment, was senior and had a couple of hundred bogus publications as well as many plagiarized ones. His shenanigans were well known, and even documented, but no college administrator would take him on.
This was at an engineering college.
I recently discussed this phenomenon with a good friend of mine and discovered that we had developed the same practice to insulate ourselves against it.
We assume that anyone in a position of authority obtained by way of credentials is a fraud until they demonstrate competence. The system is so broken that any thoughtful consideration obligates us to adopt a default position of skepticism.
The system isn’t failing to signal; it’s inverting the signal it was supposed to send.
It’s not as if anything visible might be using such odd ideas as scaffolds or buttresses, creating demand for them. The great enigma!
Phileas Frogg says:
The mandarinate design is inherently poisonous, because being given power through wearing the Ring (or even a certificate for an appropriate Ring) eventually turns people into Ringwraiths. Just as theocracy kills faith, turning academia into a leg of the throne was bound to kill scientific institutions. «Oppression and the sword slay fast, thy breath kills slowly, but at last», indeed.
Meanwhile, the specific theology is saturated with a memetic immunodepressant: good old Inner Light, 1.00 per two-legged creature with flat nails and no feathers.
How could this have any other result?