The cover story in the most recent Economist turns a critical eye to the sloppy standards of contemporary scientific scholarship. “Modern scientists are doing too much trusting and not enough verifying,” the article states, “to the detriment of the whole of science and of humanity.”
One problem is that a growing number of experiments cannot be replicated. Researchers at the biotech firm Amgen reported last year that they could replicate only six of 53 “landmark” cancer studies. As the article states, in the quest for tenure and career advancement “replication does little to advance a researcher’s career.” Another problem is fudging the data: one in three researchers apparently knows of a colleague who has misrepresented her results.

And yet another problem lies with the peer-review process itself, which often fails to catch critical errors. We’ve long known about the problems with peer review in the humanities and social sciences (which made Alan Sokal famous), but the article notes that “when a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.” Two weeks ago, the Economist featured a repetition of the Sokal experiment in the “hard” sciences. John Bohannon, a biologist at Harvard, submitted a fabricated article “bursting with clangers in experimental design, analysis and interpretation of results” to 304 peer-reviewed journals. Only 98 rejected it; 157 accepted it, and 49 never responded. All of this, coupled with the enormous number of scholarly articles being published, points to an inadequate process for judging the veracity of data and the quality of scholarship.
The culture of fabrication, misrepresentation, exaggeration, and just plain shoddy scholarship is a consequence of the “publish or perish” mentality that exists at too many institutions. No discipline is immune, we see now. But when a minimum number of scholarly articles per year sets one of the main standards for professional review, we should expect to see bad scholarly results.
It is a shame when careerism takes precedence over quality, but careerism also flies in the face of the telos, the purpose, of research. Scholarly research in all disciplines is meant to advance human knowledge (speculatively) and human well-being (more practically). Fabricated data and misrepresented results do neither. And yet the purpose of scholarship often gets reduced to career advancement or, worse, fame for the researcher. Quantity of research, rather than quality, wins out in a careerist environment. Yet another Economist article, from September, spells out the more extreme consequences of such a culture. A new criminal market has emerged in China devoted to faking research, plagiarizing articles, and publishing fraudulent material. The article notes that “the cost of placing an article in one of the counterfeit journals was put at $650. Purchasing a fake article cost up to $250. Police said the racket had earned several million yuan ($500,000 or more) since 2009. Customers were typically medical researchers angling for promotion.”
It is not inappropriate for research institutions to set certain standards, but those standards must be based on the quality of research, not its quantity. One way to judge quality is by the number of citations a piece of work receives in later scholarly articles. In the Chinese case noted above, the article observes that China ranks 14th in average citations per SCI (Science Citation Index) article, “suggesting that many Chinese papers are rarely quoted by other scholars.” Another remedy is to reduce the sheer amount of scholarship being published, and thus, one hopes, improve the quality of peer review, no easy task given the proliferation of scholarly journals, especially open-access journals, and the possibility of self-publication. A third, offered by the article cited above, is to allow more space in scholarly journals for negative results (which account for only 15% of publications) and “uninteresting studies.” But the best way to improve the quality of scholarly research is to remove the incentives (like minimum numbers of publications) that encourage low-quality research and dishonesty. Research, whether in the natural sciences, social sciences, or humanities, should not be a means of promoting the researcher. Maybe it is time for “publish or perish” to perish.
Beth, I thought this was fascinating! Thanks for bringing it to our attention! I did have a few concerns, though, and the first is the idea that “The culture of fabrication, misrepresentation, exaggeration, and just plain shoddy scholarship is a consequence of the ‘publish or perish’ mentality that exists at too many institutions.” I don’t think we can say that the “publish or perish” mentality CAUSES a culture of fabrication, misrepresentation, exaggeration, and shoddy scholarship. It certainly adds an urgency that may lead some to sacrifice quality for quantity, but I am hesitant to name it as a cause, as though actual people were not involved in making actual decisions…and perhaps sometimes just being lazy in their scholarship because they had other things they wanted to do more than research. I’m also uncertain whether this article concerning the sciences can easily be applied to the humanities. The concerns about replicability, for example, don’t seem to have a parallel in theology.
I do, however, have a concern about the gatekeepers of the publishing world, and here there may be a similarity: too much trusting and not enough verifying for certain scholars, and not enough trusting for others, whose voices the gatekeepers would like to prevent from joining the conversation. But that is a topic for another time.
Another issue at play is probably grant funding. The NIH is reporting funding rates this year of around 14-15%. In my own subfield of physics, funding rates are around 10%. Most grants last about three years, but they only cover perhaps a quarter to a third of an early-career researcher’s time – less for later-career researchers with higher salaries. Those numbers mean a scientist needs to play a major role on 10 to 14 proposals a year, and the number of publications is also likely to play a major role in deciding whether a proposal is funded… So there is less time than ever for actually doing research, but the expectation for the amount of research done, measured in publications, is the same, if not driven higher than ever by the degree of competition. It’s kind of a scientific-research equivalent of the “social limits to growth” effects brought up here: http://catholicmoraltheology.com/dealing-with-inequality-and-social-limits-to-growth-you-should-read-this/
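For anyone who wants to see where that “10 to 14 proposals a year” figure comes from, here is a minimal back-of-envelope sketch, assuming the rough inputs from my comment above (roughly a 10% funding rate, three-year grants, and each grant covering a quarter to a third of a researcher’s time); the function name and exact numbers are illustrative, not real data.

```python
# Back-of-envelope version of the proposal-load estimate above.
# All inputs are my rough figures, not measured data.

def proposals_per_year(success_rate, grant_years, salary_fraction_per_grant):
    """Proposals needed per year to stay fully funded under these assumptions."""
    concurrent_grants = 1.0 / salary_fraction_per_grant    # grants needed at any one time
    new_awards_per_year = concurrent_grants / grant_years  # grants expire, so some must be replaced each year
    return new_awards_per_year / success_rate              # proposals required to win that many awards

# Early-career case: ~10% success rate, 3-year grants,
# each grant covering between a third and a quarter of one's time.
low = proposals_per_year(success_rate=0.10, grant_years=3, salary_fraction_per_grant=1/3)
high = proposals_per_year(success_rate=0.10, grant_years=3, salary_fraction_per_grant=1/4)
print(f"roughly {low:.0f} to {high:.0f} proposals per year")  # roughly 10 to 13
```

Under those assumptions the arithmetic gives roughly 10 to 13 proposals a year, which is the same ballpark as the 10 to 14 figure above.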