Does science need 'open evaluation' before 'open access'?

Posted by News on November 14, 2012 - 11:30pm

In an editorial accompanying an ebook titled "Beyond open access: visions for open evaluation of scientific papers by post-publication peer review," Nikolaus Kriegeskorte argues that scientists, not publishers, are in the best position to develop a fair evaluation process for scientific papers.

The ebook, published today in Frontiers, compiles 18 peer-reviewed articles that lay out detailed visions of how a transparent, open evaluation (OE) system could work for the benefit of all science. This transparency is paramount because the evaluation process is the central steering mechanism of science and influences public policy as well.

The authors are from a wide variety of disciplines including neuroscience, psychology, computer science, artificial intelligence, medicine, molecular biology, chemistry, and economics.

"Peer reviews should be made public information, like the scientific papers themselves. In a lot of ways, the network of scientific publications is similar to a neural network. Each paper or peer review could be seen as a neuron with excitatory and inhibitory connections, and this information is vital in judging the value of its results," says Kriegeskorte, researcher at the University of Cambridge.
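Kriegeskorte's analogy can be made concrete with a toy sketch (my illustration, not anything from the ebook): treat papers as nodes and public reviews as signed links, where positive weights endorse a paper and negative weights criticize it, much like excitatory and inhibitory inputs to a neuron. The link data below is invented.

```python
from collections import defaultdict

# Hypothetical signed links: (review, paper, weight). Positive weights
# act like excitatory connections, negative ones like inhibitory connections.
links = [
    ("review_A", "paper_1", +1.0),
    ("review_B", "paper_1", -0.5),
    ("review_C", "paper_2", +0.8),
]

def net_endorsement(links):
    """Sum the signed review weights for each paper, analogous to the
    net input a neuron receives from its connections."""
    score = defaultdict(float)
    for _review, paper, weight in links:
        score[paper] += weight
    return dict(score)

print(net_endorsement(links))  # {'paper_1': 0.5, 'paper_2': 0.8}
```

The point of the analogy is that this link structure is only computable if the reviews themselves are public.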

Yet unlike the richly interactive and ongoing activity in a neural network, the current peer review process is typically limited to 2-4 reviewers and remains fossilized in the pre-publication phase. According to Kriegeskorte, secretive and time-limited pre-publication peer review is no longer the optimal system. He writes, "Open evaluation, an ongoing post-publication process of transparent peer review and rating of papers, promises to address the problems of the current system. However, it is unclear how exactly such a system should be designed."

To explore possible design solutions for OE, Kriegeskorte and his student Diana Deca launched a Research Topic at Frontiers—where a researcher chooses a topic and invites his or her peers to contribute an article. And while Kriegeskorte was expecting a divergent series of solutions, he says that the visions turned out to be largely convergent: the evaluation of papers should be completely transparent, post-publication, perpetually ongoing, and backed by modern statistical methods for inferring the quality of papers; and the system should provide a plurality of perspectives on the literature.

According to Kriegeskorte, transparency is the antidote to corruption and bias. "Science will continue to rely on peer review, because it needs explicit expert judgments, rather than media buzz, to evaluate papers." He suggests a two-step process based on a fundamental division of powers. In the first step, after a manuscript is published online, anyone can publicly post a review or rate the paper. In the second step, independent web portals to the literature combine all the evaluations to give a prioritized perspective on the literature.

The scoring system could simply be an average of all of the ratings. But different web portals could weight rating scales and individual reviewers differently. In the end, he believes, "the important thing is that scientists themselves take on the challenge of building the central steering mechanism for science: its evaluation system."
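As a minimal sketch of this idea (my assumptions, not the article's actual system), a portal's score could be a weighted mean of reviewers' ratings, with the plain average as the special case of equal weights. Reviewer names and weights below are hypothetical.

```python
def portal_score(ratings, weights):
    """Weighted mean of ratings, where `weights` encodes how much this
    portal trusts each reviewer (default weight 1.0 for everyone else)."""
    total = sum(weights.get(r, 1.0) * score for r, score in ratings.items())
    norm = sum(weights.get(r, 1.0) for r in ratings)
    return total / norm if norm else 0.0

ratings = {"alice": 8.0, "bob": 6.0, "carol": 9.0}       # hypothetical reviewers
equal = portal_score(ratings, {})                         # plain average, 23/3
expert_weighted = portal_score(ratings, {"carol": 3.0})   # trusts carol 3x more
```

Two portals given the same public ratings but different weight tables would rank the literature differently, which is exactly the "plurality of perspectives" the ebook's authors call for.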

Averaging polls works, as we saw in this year's American presidential election. Regression toward the mean was deemed scientific, so why not use it in science?

Very much agreed: the design of a meaningful post-publication peer review system is a huge step in advancing open science. My team has been researching why so many of these systems fail, and has built an experimental system that has gotten significant traction in the stem cell research community.

The trick, as described above, is capturing the data. Research scientists have historically been extremely hesitant to share opinions about published research online. Once this problem is solved (and our team is fairly close), it's a matter of making the resulting data available in a consistent format through open APIs so that it can be aggregated into a meaningful reputation metric, something that groups like Total Impact are currently leading on.
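The aggregation step the comment describes might look something like the following sketch: several services expose ratings in a consistent record format, and a metric is computed over the merged records. The record fields and DOIs here are invented for illustration.

```python
# Hypothetical rating records as they might arrive from two open APIs,
# already normalized to a shared format.
records_from_api_1 = [{"paper": "doi:10.x/1", "rating": 7.5}]
records_from_api_2 = [{"paper": "doi:10.x/1", "rating": 9.0},
                      {"paper": "doi:10.x/2", "rating": 6.0}]

def reputation(records):
    """Mean rating per paper across all sources; a deliberately simple
    stand-in for a real reputation metric."""
    sums, counts = {}, {}
    for rec in records:
        paper = rec["paper"]
        sums[paper] = sums.get(paper, 0.0) + rec["rating"]
        counts[paper] = counts.get(paper, 0) + 1
    return {paper: sums[paper] / counts[paper] for paper in sums}

merged = records_from_api_1 + records_from_api_2
print(reputation(merged))  # {'doi:10.x/1': 8.25, 'doi:10.x/2': 6.0}
```

The hard part, as the comment notes, is not this arithmetic but getting scientists to produce the ratings and getting services to agree on the shared format.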
