Caveat lector

Update: Here is a link to an interesting article in the Chronicle of Higher Education about the same topic, "Fraud Scandal Fuels Debate Over Practices of Social Psychology."

This is a follow-up to my post earlier this week about the increase in retractions of articles by scientific journals. Today's New York Times carries a story about a Dutch psychologist, Diederik Stapel, who appears to have fabricated a great deal of his work, including data and entire experiments. There is also concern about the students who earned PhDs under his guidance. The Retraction Watch blog has been covering the report the Times mentions, and it all makes pretty discomfiting reading.

As the Times article goes on to discuss, there is a bigger story, especially important for people who use social science data:

The scandal, involving about a decade of work, is the latest in a string of embarrassments in a field that critics and statisticians say badly needs to overhaul how it treats research results. In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.

“The big problem is that the culture is such that researchers spin their work in a way that tells a prettier story than what they really found,” said Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.”
. . . 
In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had admitted, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as predicted from the start, and about 1 percent admitted to falsifying data.

Also common is a self-serving statistical sloppiness. In an analysis published this year, [Jelte] Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.
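The kind of error Wicherts and Bakker went looking for can, to a fair extent, be caught mechanically. Here is a minimal sketch of the basic idea in Python, assuming the scipy library is available; the function name and the reported numbers are my own invention for illustration, not their actual procedure. Given a reported t statistic, its degrees of freedom, and the reported p-value, you recompute the p-value and flag a discrepancy larger than rounding could explain:

    # Minimal sketch of a reported-statistics consistency check
    # (my own illustration, not the Wicherts and Bakker procedure).
    from scipy import stats

    def check_t_test(t, df, reported_p, tol=0.005):
        """Recompute the two-tailed p-value for a reported t statistic
        and say whether it matches the p-value given in the paper,
        up to a rounding tolerance."""
        recomputed_p = 2 * stats.t.sf(abs(t), df)
        return recomputed_p, abs(recomputed_p - reported_p) <= tol

    # Hypothetical reported result: t(28) = 2.10, p = .02
    p, ok = check_t_test(t=2.10, df=28, reported_p=0.02)
    print(f"recomputed p = {p:.3f}, consistent with report: {ok}")
    # Prints roughly: recomputed p = 0.045, consistent with report: False

A check like this only catches internal inconsistencies in what authors report, of course; it says nothing about errors or massaging that happened before the numbers reached the page.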

You can find a copy of the Wicherts and Bakker paper here, in English. The Leslie John paper is not yet published, but I will keep an eye out for it.
