I have some serious gripes with peer review. It brought us wonders such as the 1996 Michael Bellesiles article that started the entire Arming America fraud. Peer review means that ideas that conform to the status quo, or that meet the political needs of the elite, are more likely to get published. In the sciences, the problem has been severe for a long time. From this article in the Journal of the Royal Society of Medicine:
There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you'd expect by chance.1
That is why Robbie Fox, the great 20th-century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked 'publish' and 'reject'. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back 'How do you know I haven't already done it?'"...
Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers.3,4 Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which I will return to, is whether peer review and journals should cease to work on trust."...
The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants.5 The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci.6 They took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions."
When I ran an ngram search on books.google.com for "peer review," it became apparent that the phrase was really not used until the 1940s, and even then only rarely. Can anyone point me to other great frauds besides Bellesiles that survived peer review?
O.M.G. you are about to be inundated with papers about Catastrophic Anthropogenic Global Warming... In addition to his analysis of the (badly done) statistics chosen by climatologist Michael E. Mann to develop the "hockey stick graph," statistician Edward Wegman did an analysis of the networking among climatologists during peer review. Worth checking out: the Wegman (Barton) report, 2006 or so.
This is a long article from 2016: "Saving Science." The beginning is a lot of history of the Manhattan Project, but if you scroll down to the section titled "Einstein, We Have a Problem" there is a general discussion of the problems in the scientific literature, and it does include references.
Here is one that they actually quote: "Offline: What is medicine's 5 sigma?" - The Lancet
"The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices."
Published medical research that cannot be reproduced by pharmaceutical companies is one of the major problems.
Do you remember the hoax papers?
"One paper, published in a journal called Sex Roles, said that the author had conducted a two-year study involving “thematic analysis of table dialogue” to uncover the mystery of why heterosexual men like to eat at Hooters.
Another, from a journal of feminist geography, parsed “human reactions to rape culture and queer performativity” at dog parks in Portland, Ore., while a third paper, published in a journal of feminist social work and titled “Our Struggle Is My Struggle,” simply scattered some up-to-date jargon into passages lifted from Hitler’s “Mein Kampf.”"
Source: https://www.nytimes.com/2018/10/04/arts/academic-journals-hoax.html
"Peer review means that ideas that conform to the status quo... are more likely to get published."
Well, yeah, because "status quo" science is generally correct. A lot of people have worked pretty hard for a long time to get it right. It's like the genome of an organism. Dissenting papers, like DNA mutations, are mostly wrong. When they are right, they're extremely valuable, but that's not the usual outcome.
One should note that the most famous "just-so stories" about dissenters who were right when the status quo was wrong are from the long-ago period when a fair amount of "science" was just fine-sounding word-spinning, with no experimental support.
The Peters & Ceci study is intriguing. It suggests that a paper should be stripped of authorship information when sent for peer review. It reminds me of the art museum founded by a wealthy collector who insisted that the paintings be displayed without the artists' names: the work should be judged by itself, not by who made it.