Friday, March 4, 2011

Is liquid publication and peer review the wave of the future in science?

As a professor and researcher, one of my most time-consuming tasks is performing peer review. That is, reading and commenting on articles submitted to conferences and journals, proposals submitted to granting agencies, theses submitted for degrees, and promotion dossiers (which themselves require reading several papers).

In this post I will point out the strengths and weaknesses of the current peer review systems, as I see them, and make some suggestions for change. The suggestions include adopting what is called liquid publication in parallel with the classic gate-keeping style of peer review. Liquid publication focuses on post-publication review and citation; blogs are one form of this.

Peer review is essential to science. It serves several purposes, including:
  • Forcing researchers to adhere to standards of good research, since they don't want to be embarrassed by being caught making mistakes.
  • Slowing down the publication process sufficiently to allow for 'sober second thought'. If publication were always instant, I am sure that overall quality, even from the best scientists, would be lower. When one reads one's own material several months later, one can always make improvements. This is a drawback of blogging, although, as I will discuss later, blogging also has advantages.
  • Providing feedback that will help researchers do better work. It is common for a reviewer to suggest improvements to the way a scientist conducts their research, or to point out useful new sources of information.
  • Helping prevent bad science from getting into the public record. There is a tendency for people to cite scientific papers as 'truth' even if they are wrong or have serious flaws; peer review can help reduce, although not eliminate, the occurrence of this.
  • Ensuring that at least some people read each paper. Quite often I have first been alerted to an innovation through the peer review process. I think this is not a commonly recognized benefit of peer review, but frankly I read more papers for peer-review purposes than I read in the normal course of research. This is, however, a double-edged sword, as I will point out below.
Peer review is not without its drawbacks. Many of these are discussed in this extremely good article.
Here are my thoughts about the downside of the current peer-review system:
  • Not all reviewers have the same standards or belong to the same 'schools of thought', meaning that a paper that would be accepted by one group of researchers would often be rejected by another. One of my papers that ended up being extremely widely cited was initially rejected, and this is quite common. Some of my recent work on Umple has been rejected for not being formal enough for reviewers' taste, or for not yet having been rigorously examined in controlled experiments, forcing the core idea to be suppressed while that process, which can take years, is undertaken.
  • Dogma still reigns, so papers that might be correct, but nonetheless challenge received thinking, will often not see the light of day, or will never be submitted to start with because the researcher doesn't want to risk his or her reputation.
  • Papers are forced to fit a particular style, so a very short paper that just makes a quick but interesting point, without citing detailed background information or providing a full analysis, would not be considered.
  • Papers are often poorly written, so reviewers get bogged down dealing with the writing, not the core scientific ideas. Ideally, there should be a process that encourages editing by a professional editor before submission to scientific peers.
  • Too many conferences limit the number of papers too stringently, so peer review serves to select the best of the best, rather than simply to reject flawed papers. Good-but-not-superb papers therefore re-enter the review cycle over and over, consuming even more reviewer time until they are eventually published.
  • Not all peer review is equal, and it is not always easy to tell whether papers are being given fair or deep reviews or just cursory ones. There are lower-quality venues that promise peer review, but where one suspects one's paper only gets a cursory look-through since the venue is eager to have papers published. One can't tell the quality of the peer review, since one doesn't know who is doing it.
  • The sheer volume of reviews that researchers have to undertake, often in areas that are not central to their expertise, results in overly rapid or shallow reading. Many times, at least one of the reviews I have received has suggested I make a certain point that was in fact already clearly made in the paper, suggesting that the reviewer didn't really have the time to do a good review. And I sympathize: properly reviewing a long paper can take several hours of uninterrupted time, which is a precious resource for a busy professor. I have sometimes been forced to read a paper over a couple of weeks, a few pages at a time. It is very easy to lose one's way and miss important points.
  • The sheer volume of reviews also takes a huge amount of time away from conducting research. Perhaps this is the biggest problem.
What is the solution?

While there will always be a place for peer-reviewed publications as we know them now, I think that there are two ways forward:

The first way forward is to encourage liquid publication, as described in the article I mentioned earlier. Liquid publication moves the review process to follow rather than precede publication. In fact, it is the process I am using right now in writing this blog entry. If you add comments, you are writing a public review; if you link to this entry and comment on it elsewhere, you are also reviewing it and citing it. Many other forms of writing can also be subject to liquid publication, including software and spreadsheets containing data.

Badly written liquid publications will not be read. People will just stop as soon as they find the work poorly written. There will be few or no comments and no citations, forcing the author to improve the publication. A system enabling the reader to see the history of versions, as appears in wikis, but generally not blogs, should be implemented to facilitate such improvements.
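
As an aside, the wiki-style version history mentioned above is simple to sketch. The following Python fragment is purely illustrative (the class and method names are my own invention, and a real system would store revisions in a database rather than in memory), but it shows the essential idea: keep every version and let readers see the differences between consecutive ones.

    import difflib
    from datetime import datetime

    class LiquidPublication:
        """Hypothetical liquid publication that keeps its full revision history."""

        def __init__(self, title: str):
            self.title = title
            self.revisions: list[tuple[datetime, str]] = []

        def revise(self, text: str) -> None:
            # Append a new version instead of overwriting the previous one.
            self.revisions.append((datetime.now(), text))

        def history(self) -> str:
            # Let readers see what changed from one version to the next.
            chunks = []
            for (t1, old), (t2, new) in zip(self.revisions, self.revisions[1:]):
                diff = difflib.unified_diff(
                    old.splitlines(), new.splitlines(),
                    fromfile=t1.isoformat(), tofile=t2.isoformat(), lineterm="")
                chunks.append("\n".join(diff))
            return "\n\n".join(chunks)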

Bad science will receive negative comments, or will simply be ignored.

Good science and good writing will be rewarded by being read, commented on and cited.

A major benefit of liquid publication for the reviewer is that reviewing occurs in the process of normal research, rather than as an add-on activity requested by an editor. Indeed, I make it a point to add critical comments to blogs and other liquid publications where I think the writer has taken care in what they wrote, and where I have a point of view I think worth sharing.

There is nothing to stop conferences from being built around liquid publication that occurs prior to the conference. People could publish their contributions openly on their own blog, on the conference's website, or on both (a summary in one, the full article in the other) and submit a link to the program committee. People interested in the conference could freely comment, and the program committee would still be tasked with ensuring that every paper received at least some comments. Speakers could be selected based on the public comments received.

The other way forward is to work towards fixing the current peer review system, including:
  • Making parts of reviews completely open. I think reviewers should be asked to divide their reviews into two parts: the first part would contain suggestions for cosmetic changes, which would be just for the benefit of the authors and editors; the second part would be a critique of the method and conclusions, and would be published with the paper, although reviewers would have a chance to edit it prior to publication of the final version of the paper.
  • Having the names of reviewers published at least some of the time. Keeping them private has the benefit of encouraging free comment and avoiding animosity, but I think that openly naming reviewers, at least on a limited basis, should be experimented with to encourage higher-quality reviews and to make authors aware of which reviewers are hostile to them.
  • Routinely allowing authors to request another review if they believe they are being discriminated against based on school-of-thought differences or dogma.
  • Enabling in-line comments when reviewing. Currently it is time-consuming to make detailed comments on a paper because one must copy and paste the text one wants to critique. A process that emulates the 'track changes' and commenting-with-pointer-to-text facilities in word processors would make reviewing much more efficient (a small sketch of this idea appears after this list). Incidentally, that style of commenting would be a useful adjunct to blogging software too.
  • Accepting all papers that are not seriously flawed, rather than just the best of the best. As long as the comments are visible, this would prevent bad science from being propagated indiscriminately.
  • Attaching mediated comments to on-line versions of all papers, so readers can review them after the fact and link the papers to other relevant information.
  • Having a system whereby a reviewer can quickly bounce a paper back for professional editing of the language, at the author's expense, before agreeing to review it for its scientific content.
  • Encouraging shorter papers, forcing authors to make their key points in less space, with background information linked on the web so readers can instantly find it if they need a deeper understanding.
  • Publishing comprehensive standards guides for each conference and journal, outlining the expected structure of each paper, and forcing reviewers to acknowledge having read the guide before they download the paper to review.
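
To make the in-line commenting suggestion above concrete, here is a minimal sketch in Python of review comments anchored to character offsets in the manuscript, rather than to pasted quotations. All of the names here are hypothetical; this is an illustration of the idea, not a description of any existing reviewing tool.

    from dataclasses import dataclass

    @dataclass
    class InlineComment:
        start: int      # character offset where the critiqued span begins
        end: int        # character offset just past the end of the span
        reviewer: str
        remark: str

    def render_review(manuscript: str, comments: list[InlineComment]) -> str:
        # Show each remark next to the exact text it refers to, so neither
        # the reviewer nor the author has to copy and paste quotations.
        lines = []
        for c in sorted(comments, key=lambda c: c.start):
            lines.append(f'"{manuscript[c.start:c.end]}" -- {c.reviewer}: {c.remark}')
        return "\n".join(lines)

    text = "Peer review is essential to science."
    print(render_review(text, [InlineComment(0, 11, "Reviewer 2",
                                             "Consider defining the term for a general audience.")]))
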
I think that liquid publication and 'gatekeeping' peer review, as improved by my suggestions above, can co-exist and in fact blend together, although I think that eventually the liquid, post-hoc review approach will come to dominate.

I certainly encourage all scientists to start blogging, and to cite both liquid and classic publications in both styles of publication.

By the way, the reader might be interested in how much time I spend on peer reviewing. In 2010 I reviewed 33 papers, 5 theses (not including those of my students), 3 grant proposals and 2 promotion dossiers. The year before, I reviewed 60 papers, 5 theses, 3 grant proposals and 4 promotion dossiers. Some years I have reviewed 100 items. On average each takes about two hours (some as little as half an hour, some a full day), including the process of writing the review. So overall it takes from 150 to 200 hours a year, or 4-5 weeks of my time if I were able to do it all in one intense period.
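
The arithmetic behind that estimate is easy to sketch. The following Python fragment uses only the item counts and the half-hour-to-full-day range quoted above; everything else is my own illustrative assumption. Theses and promotion dossiers tend to sit at the full-day end of the range, which is what pushes the totals toward the 150 to 200 hours mentioned above.

    def review_hours(items: int, hours_per_item: float) -> float:
        # Total reviewing time is simply item count times time per item.
        return items * hours_per_item

    workload = {"2010": 33 + 5 + 3 + 2,       # papers + theses + proposals + dossiers
                "2009": 60 + 5 + 3 + 4,
                "a heavy year": 100}
    for year, items in workload.items():
        low, avg, high = (review_hours(items, h) for h in (0.5, 2.0, 8.0))
        print(f"{year}: {items} items -> about {avg:.0f} hours at the two-hour average "
              f"(anywhere from {low:.0f} to {high:.0f} hours at the extremes)")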

2 comments:

  1. Great post. Here's the "cosmetic change" part of my review:

    "Routinely allowing reviewers to request another review if they believe they are being discriminated against based on school-of-thought differences or dogma." -> I assume you meant to write "authors" instead of "reviewers"

    -Matthias Hauswirth

  2. Very nice synthesis of why and how peer review could change. Unfortunately, LiquidPub project has finished, but some ideas are still alive in this special issue - http://www.frontiersin.org/computational_neuroscience/researchtopics/Beyond_open_access_visions_for/137 or this group - http://force11.org/

