Monday, July 18, 2011

Do we need an alternative to peer-reviewed journals?


Original Post http://arstechnica.com/science/news/2011/07/do-we-need-an-alternative-to-peer-reviewed-journals.ars

The past week has seen rather lively discussion about the scientific publishing industry and peer review. Peter Murray-Rust has produced a series of posts about his issues with the process (start here then work your way forward), Joe Pickrell described his problems with peer-reviewed journals at Genomes Unzipped, and Stuart Lyman has a letter to the editor in Nature Biotechnology (subscription required). (It's also a topic that we've considered in the past.) As Wired's Dan MacArthur put it, "it is a source of constant wonder to me that so many scientists have come to regard a system [the existing publication process] that actively inhibits the rapid, free exchange of scientific information as an indispensable component of the scientific process." So what's the problem, and what should (or can) we do about it?

Who gets to read the science?


Until the relatively recent advent of open-access publishing, readers have been expected to foot the costs of the publishing process. A year's subscription to a single journal can cost a library thousands of dollars. Noninstitutional users can expect to be charged around $30 for a single article, as can academic users whose library doesn't subscribe to the Journal of Obscure Factoids. If you've spent your life in well-funded research institutes, this might not seem like an issue. But for those at smaller schools or in less-affluent countries, it can be a substantial barrier to participating in the exchange and dissemination of scientific ideas. These paywalls also stand between taxpayers and the research they've supported.

As Stuart Lyman's letter to Nature Biotech points out, the price of access has also become a problem for private-sector research. The large pharmaceutical companies that used to have well-stocked libraries have downsized or shut them down as part of their relentless cost-cutting. For small or even medium-size companies, the costs of institutional subscriptions quickly add up.

The access issue is the one that's seen the most progress, with the creation of open-access journals where the publication costs are met by the authors, not the readers (authors had been paying fees to publish in some journals anyway). The effort to make publicly funded discoveries publicly available has also been gaining ground. From 2008 onwards, recipients of NIH funding have been subject to NIH's Public Access Policy, which requires that any publications arising from its funds either appear in open-access journals or be deposited in PubMed Central within 12 months. Similar policies have been implemented by other national funding bodies and private foundations, as well as individual institutions.

Some publishers have made attempts to have Congress overturn NIH's policy. It's an understandable move; for-profit publishers fear for their bottom line, while other journals are published by scientific societies, many of which depend on subscription revenues for basic operations. But this revenue model may have been dying on its own. The bulk of scientific literature is consumed electronically rather than in hard copy. Sure, it used to be that disseminating new scientific ideas involved printing lots of copies and shipping them, but how true is that in 2011?

Paying for peer review

Part of the price of a journal goes to cover the process of peer review, which has also been the subject of criticism. It costs both time and money, and weeks or months can pass between submitting a paper and having it accepted. Reviewers have to be found, and they are expected to spend hours doing a thorough job without compensation.

Despite all this effort, there are worries that the process doesn't work any better than chance. A common criticism is that peer review is biased towards well-established research groups and the scientific status quo. Reviewers can be reluctant to reject papers from the big names in their field, and they can be hostile to ideas that challenge their own, even when the supporting data are good. Unscrupulous reviewers can also reject papers and then quickly publish similar work themselves.

Alternatives to the current system have been examined, but MIT didn't think much of their experiment with open peer review, and Nature's testing of these waters didn't really pan out either. Nature did, however, find overwhelming support for peer review itself among authors, who felt that the process improved their papers; a more recent study from the Publishing Research Consortium that we reported on found similar things. These responses indicate that abandoning peer review entirely isn't a viable solution. To some extent, abandoning it has already happened with the arXiv, which is filled with all sorts of crap that makes The Daily Mail's science pages seem reputable.

Beyond arranging for peer review, journals act as gatekeepers—they screen submissions for interest or importance as well as for the veracity of the work. That, too, has provoked a response. PLoS ONE attempts to take the first part out of the equation:

Too often a journal's decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership—both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).

The problem with this (from where I'm sitting) is that the sheer volume of publications is already almost impossible to manage, making a degree of selectivity valuable. There's always going to be a place for highly selective publishing outlets for work deemed "important"—that's just human nature.

But highly selective journals feed into a final problem area: metrics, impact, and tracking. For better or worse (and I think there's a very strong case for it being worse), academic career progression and research funding are explicitly tied to where a scientist publishes their work. This is done through the use of impact factors, which we've written about extensively (and critically) in the past.

They're a very imperfect measure. Journals that publish reviews as well as research articles can increase their impact factor, and publishing retractions or corrections does so as well. We have the tools to do a better job now, thanks to the move online. There have been experiments with algorithms like PageRank, and one could easily see something that works like Facebook's "like" or Google's "+1" being used. But as a researcher's funding success and promotion remain tied to their publications, what's to stop them from gaming the system? (I envision researchers organizing teams of undergrads to +1 their bibliography.)

Taking a more holistic view of an individual's career would certainly solve this problem, and it's a solution I'm all in favor of. Until that happens, though, I don't think we're going to see things change much.

Although publishing will remain critical, it's hard to escape the sense that it's increasingly trailing behind the scientific community. Twitter, FriendFeed, Mendeley, and now Google+ have become venues where serious discussion about scientific work takes place. We're already seeing friction at some conferences; not everyone is happy having their talk livetweeted, and the backchannel can be cruel to speakers at times. But social media isn't going anywhere, and neither is academic blogging.

Recognizing the legitimacy of these channels will be a critical challenge for academia, but it might happen organically as a younger cohort replaces the boomers currently running the show. Fixing peer review is something that shouldn't wait, though. Unfortunately, that's easier said than done; I don't have any ready suggestions for how.
