I'm starting to dig a little deeper into Richard Poynder's recent 42-page tome on scholarly publishing today. I'm not sure how much more time I can spend on this, though. This post will mainly cover the first half, up to about page 19-20 or so.

On page 12 Richard writes:
the assumption [...] is that the more papers a journal accepts the lower will be the quality of those that it publishes.
Clearly, the assumption does seem intuitive: after all, the scarcer something is, the more value we attribute to it. This intuitive notion works for new night clubs that create long lines in front of their doors, and it already worked when the French introduced potatoes. The two examples also show how our intuition can fail us: scarcity does confer value, but not necessarily quality. In scholarly publishing, the connection between scarcity and perceived 'quality' is very indirect. Specifically, for the journals with the highest rejection rates, it means the quality perceived by the editor, not by the scientists. Nature editor Henry Gee explains this connection beautifully, as always (and fittingly illustrates it with a picture of James Bond villain Dr. No):
I reject at least four in every five manuscripts straight off the bat, before review; most of the rest perish in review. In the end, barely one in twenty new submissions makes it through to publication.
In less prosaic words: when you submit your manuscript, you have a 5% chance of publishing in Nature (it's actually ~8%, but let's stick with Henry's number for now), but once you get past the editor, it's a whopping 25% (other colleagues cite 60%, but let's stay with Henry here, too). In other words, the peers are much more forgiving than professional editors.
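
For those who want the arithmetic spelled out, here is a quick back-of-the-envelope calculation (the numbers are Henry's, the variable names are mine):

    # Henry Gee's numbers: ~80% of submissions are rejected before review,
    # and "barely one in twenty" of all submissions ends up published.
    submissions = 100                      # imagine 100 submitted manuscripts
    desk_rejected = 80                     # rejected by the editor, never reviewed
    published = 5                          # "barely one in twenty" makes it through

    sent_to_review = submissions - desk_rejected          # 20 go out to reviewers
    acceptance_after_editor = published / sent_to_review  # 5/20 = 0.25, i.e. 25%
    print(acceptance_after_editor)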

Henry Gee goes on to explain why the rejection rate must remain so high:
One reason for maintaining a very high rejection rate is to ensure that the quality of the material we publish is always high (which I maintain it is, even when one accounts for the fact that editors are only human and prone to err). This creates a feedback – if the quality is high, and is perceived to be high, people will tend to submit their best stuff to us. Were we to loosen our belts, lowering the bar (and increasing pagination) the quality would lessen, people would send us their just-about-okay stuff as well as their best stuff, and, well, it would be the end of civilization as we know it.
In other, less prosaic words: scarcity creates the impression of value, which leads to 'perceived quality', which leads to scientists submitting to Nature the work they think (or know) the editor will like. Once the editor likes it, peer-review is the lower hurdle.

One might thus say that the 'letter to the editor' accompanying the submitted manuscript may be more decisive for the acceptance of a paper in a GlamMag than the paper itself.

Richard Poynder mentions the notorious 'wind setdown paper' in PLoS One as an example that a different way of selecting papers - sending virtually all manuscripts out for peer-review - is a less rigorous method, as it leads to the acceptance of 70% of all submissions and not just 8%. In all fairness, the selection process differs in more ways than just this one. Specifically, reviewers for PLoS One are not asked how they feel about the paper or whether they find it interesting, but only whether or not "the science in this paper [has] been done well enough to warrant it being entered into the scientific literature as a whole". From the interviews he conducted, Richard is not quite sure what this last statement means, and indeed, while the instruction works for the large majority of submissions, it gets very tricky in a few borderline cases. In essence, what the instruction means is that the methods have to be understandable and transparent enough for a reviewer to believe the method can work. The statements in the manuscript have to be backed up by the data, which means that alternative explanations have to be ruled out. For virtually all 'regular' research papers from 'regular' institutions this means: do what you've been trained to do and you'll get your paper published, once you fix the few things that you didn't catch initially but the reviewers did. Which is exactly the way scientific publishing should be.

However, with over 6000 published papers per year (meaning ~3k rejected papers a year!), of course there will be the occasional 'rogue' paper that did get published when maybe it shouldn't have been. Richard thinks the wind setdown paper is one of them and cites me, as an Academic Editor of PLoS One, as corroborating evidence:
So was there a failure on the part of the reviewers? Brembs believes so. "I think two mistakes were made," he emailed me last year. "For one, there was no proper reference to the particular mythology the authors referred to. Second, there was no reference to alert the reader that the mythology in question lacks empirical support. Both mistakes should have been caught by the reviewers, or by the academic editor."
In other words: adding two references would have fixed this paper in my eyes, and nobody had any objections to the science in the paper. Apparently, Richard thinks that two missing references are a good example of 'inadequate' peer-review standards. To be fair, he cites other papers, but so can I: the now infamous arsenic paper in Science and, of course, my favorite 'worst paper in the field', published in Nature. No amount of reference-adding could possibly fix these papers, so one could make the argument that the type of editorial selection taking place at the GlamMagz is actually much less adequate than the genuine peer-review at PLoS One.

However, I don't want to make the same mistake I believe Richard is making: arguing from anecdotes. Are there any hard data showing that the selection process (which does include some peer-review, also at the GlamMagz) in some journals is better than in others? This is not an easy task and is fraught with its own difficulties, but surely any attempt at arriving at such data is to be preferred to enumerating personal favorites and other anecdotes?

Unfortunately, even in 2011, there are very few options for quantitatively assessing quality, mainly because what is a great paper for you may be horribly boring, dense, irrelevant or incomprehensible to me. But maybe we can find common ground at the other end of the spectrum? One lowest common denominator could be retractions: papers that have been retracted simply cannot be of high quality - surely everyone would agree with that? So the journals which retract more papers than others have a worse selection process, letting more low-quality papers pass their supposed 'quality filter'? Which journals have the most retractions, then? PNAS, Science and Nature, by a wide margin. Thus, looking at data rather than anecdotes, the GlamMagz fail at their 'perceived quality' control - all that remains is intuitive value created by artificial scarcity, together with deplorable social feedback loops.
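
For anyone who wants to check such numbers for themselves, here is a rough sketch of how one could count retracted papers per journal in PubMed (using Biopython's Entrez module; the journal list and email address are just placeholders, and raw counts would of course still have to be weighed against each journal's total output):

    from Bio import Entrez

    Entrez.email = "you@example.org"   # NCBI asks for a contact address

    # count papers flagged as retracted in PubMed for a few journals
    journals = ["Nature", "Science", "Proc Natl Acad Sci U S A", "PLoS One"]

    for journal in journals:
        query = '"Retracted Publication"[PT] AND "%s"[TA]' % journal
        handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
        record = Entrez.read(handle)
        handle.close()
        print(journal, record["Count"])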

To my knowledge, no factual relationship between rejection rates in scholarly publishing and anything one could construe as 'quality' has so far been offered that would stand up to scrutiny.

Neither the GlamMag editor-selection system, nor genuine, traditional peer-review, nor a journal ranking derived from such flawed approaches is able to provide us with the sort of filter we need to keep up with all the exciting new scientific discoveries. Therefore, all that is required at the stage of publication is a system that forces a few experts to read a submission, make sure that few, if any, plausible alternative explanations remain for the data presented, and then release the work to the public (some would argue that not even that is required anymore, but I'm not - yet? - convinced). Most of the obvious crank papers will flunk in this process, and the few that do get through can be weeded out later (and invariably are, no matter where they have been published).

What is indeed lacking, and in dire need of development, is a modern, information technology-based search, sort and discover (SSD) tool that assists researchers in navigating today's scholarly landscape. Why do people keep trying to fix a modern problem with ancient, ineffective tools instead of using adequate, modern technology? Sony did not see the iPod coming, Borders did not see Amazon coming, and so on. Today's corporate scholarly publishing industry does not see information technology coming. Trying to solve new problems with ancient technology will inevitably lead today's corporate publishers to go the way of all the brick-and-mortar companies that failed to see the internet coming.

Like the iPod or the iPhone, PLoS has its flaws, but at least they attempt to use modern technology to offer an improved user experience.
Posted on Wednesday 09 March 2011 - 21:47:31