Editors of scholarly, peer-reviewed journals often claim that their choosiness is the most important verdict on the quality of a scientific manuscript. Cases in point are Nature Neuroscience's peer-review policy, a recent Nature News article, or a follow-up on the Nature blog "Nascent". However, the data on the 'impact' or quality of papers published in these very choosy journals vary greatly. Therefore, I have a suggestion for how to judge the performance of an editor.

My suggestion requires that all peer-reviewed primary scientific literature be deposited in some database before any subjective editorial choice has been made. An example would be PLoS One, but any such database would do. Editors could then thumb papers up or down after they have been vetted by peers, promoting or demoting them according to their judgement, much like acceptances and rejections in today's so-called high-end journals. Since all choices (including rejections!) are recorded, each editor (or group of editors) would establish a track record. In a way, this is similar to the concept of the Faculty of 1000. Obviously, this would give editors a strong incentive to maximize their reliability as gatekeepers of scientific quality.

How can their performance be measured? By counting downloads, citations, trackbacks, comments, ratings, media coverage, Faculty of 1000 mentions, or any other measure deemed relevant for the papers they accepted or rejected.
That way, everybody could have their cake and eat it too: seemingly objective performance measures for both scientists and editors. Wouldn't that be fair?
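To make the idea concrete, here is a minimal sketch of how such an editorial track record could be scored. Everything in it is hypothetical: the Paper and Decision structures, the impact() measure and its weights are illustrative assumptions, not part of any existing database or service.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    downloads: int = 0
    citations: int = 0
    comments: int = 0

@dataclass
class Decision:
    editor: str
    paper: Paper
    accepted: bool  # True = promoted by the editor, False = demoted/rejected

def impact(paper: Paper) -> float:
    """Combine post-publication signals into one number (weights are arbitrary)."""
    return paper.citations * 3.0 + paper.downloads * 0.01 + paper.comments * 0.5

def editor_score(decisions: list[Decision]) -> float:
    """Reward editors whose accepted papers out-perform the papers they rejected."""
    accepted = [impact(d.paper) for d in decisions if d.accepted]
    rejected = [impact(d.paper) for d in decisions if not d.accepted]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(accepted) - mean(rejected)

if __name__ == "__main__":
    decisions = [
        Decision("editor_a", Paper("A", downloads=900, citations=40, comments=12), accepted=True),
        Decision("editor_a", Paper("B", downloads=150, citations=2, comments=1), accepted=False),
        Decision("editor_a", Paper("C", downloads=300, citations=30, comments=4), accepted=False),
    ]
    print(f"editor_a track-record score: {editor_score(decisions):.2f}")
```

Because rejections are recorded alongside acceptances, a high score only results when an editor's accepted papers actually out-perform the ones they turned away.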
Posted on Saturday 05 July 2008 - 12:42:51 comment: 0