The cover story in the most recent Economist turns a critical eye to the sloppy standards of contemporary scientific scholarship. “Modern scientists are doing too much trusting and not enough verifying,” the article states, “to the detriment of the whole of science and of humanity.”

One problem is that a growing number of experiments are not replicable. Researchers at the biotech firm Amgen found last year that they could replicate only six of 53 “landmark” cancer studies. The Economist observes that in the quest for tenure and advancement, “replication does little to advance a researcher’s career.” Another problem is fudging the data: one in three researchers apparently knows of a colleague who has misrepresented her results. And yet another problem is the peer review process itself, which often fails to catch critical errors. We’ve long known about the problems with peer review in the humanities and social sciences (the problems that made Alan Sokal famous), but the article notes that “when a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.” Two weeks ago, the Economist featured a repetition of the Sokal experiment in the “hard” sciences. John Bohannon, a biologist at Harvard, submitted a fabricated article “bursting with clangers in experimental design, analysis and interpretation of results” to 304 peer-reviewed, open-access journals. Only 98 journals rejected the study; 157 accepted it, and 49 did not respond. This failure, coupled with the enormous number of scholarly articles being published, leaves us with an inadequate process for judging the veracity of data and the quality of scholarship.

The culture of fabrication, misrepresentation, exaggeration, and just plain shoddy scholarship is a consequence of the “publish or perish” mentality that prevails at too many institutions. No discipline, we now see, is immune. But when a minimum number of scholarly articles per year is one of the main standards for professional review, we should expect bad scholarship.

It is a shame for careerism to take precedence over quality, but careerism also flies in the face of the telos, the purpose, of research. Scholarly research in all disciplines is meant to advance human knowledge (speculatively) and human well-being (more practically). Fabricated data and misrepresented results do neither. And yet the purpose of scholarship often gets reduced to career advancement or, worse, fame for the researcher. Quantity of research, rather than quality, wins out in a careerist environment. Yet another Economist article, from September, spells out the more extreme consequences of such a culture: a criminal market has emerged in China for faking research, plagiarizing articles, and publishing fraudulent material. The article notes that “the cost of placing an article in one of the counterfeit journals was put at $650. Purchasing a fake article cost up to $250. Police said the racket had earned several million yuan ($500,000 or more) since 2009. Customers were typically medical researchers angling for promotion.”

It is not inappropriate for research institutions to set certain standards, but those standards must be based on the quality of research, not its quantity. One way to judge quality is by the number of citations a piece of work earns in later scholarly articles. In the Chinese case noted above, the article reports that China ranks 14th in average citations per SCI (Science Citation Index) article, “suggesting that many Chinese papers are rarely quoted by other scholars.” Another solution is to reduce the sheer amount of scholarship being produced and thereby, one hopes, improve the quality of peer review; that is no easy task given the proliferation of scholarly journals, especially open-access journals, and the possibility of self-publication. Another solution, offered by the article cited above, is to allow more space in scholarly journals for negative results (which account for only 15% of publications) and “uninteresting studies.” But the best way to improve the quality of scholarly research is to remove the incentives, like minimum publication counts, that encourage low-quality research and dishonesty. Research, whether in the natural sciences, social sciences, or humanities, should not be a means of promoting the researcher. Maybe it is time for “publish or perish” to perish.