For the better part of this year, Marcus Munafò and I have been working on a manuscript reviewing the empirical literature on journal rank and its impact on science. In early June we received a rejection letter from PLoS Biology, together with three reviews. We are currently revising the manuscript for submission to a different journal. In light of the traffic and discussion on two posts about journal rank (or, to be specific, Thomson Reuters' Impact Factor), one by Stephen Curry and one by DrugMonkey, we have decided to release this submitted, unrevised version (our fifth internal draft) to the public, with all three reviews attached at the end. This version is now several months old and has since seen some revision; in particular, new references have been added. We release this draft manuscript in an attempt to set a standard for the empirical evidence used in any debate on journal rank, even before our manuscript has passed formal peer review.
Because it has yet to pass formal peer review, I'd like to point out that, despite the controversial debate sparked by Stephen's original post (which echoes some of the evidence we cite in our manuscript), the main reason all three reviewers gave for rejecting our article was not that anything was wrong with our review of the current literature, but that we didn't present anything new:
Editor:
We are very sympathetic to the point of view presented here, but unfortunately, as the reviewers note, most of the issues raised in the paper have been covered extensively elsewhere and this article does not add significantly to the contributions of previous publications.
Reviewer #1:
While I am in agreement with the insidious and detrimental influences on scientific publishing identified and discussed in this manuscript, most of what is presented has been covered thoroughly elsewhere.
Reviewer #2:
The authors make sound points, and for doing so can rely on years of solid research that has investigated the pernicious role of journal rank and the impact factor in scholarly publishing.
Overall, I deem this a worthy and valid "perspective" that merits publication, but do want to make the following reservations.
The particular arguments that the authors make with respect to the deficiencies of the journal impact factor (irreproducible, negotiated, and unsound) have already been made extensively in the literature, in online forums, in bibliometric meetings, etc., to the point that very little value is gained by the authors restating them in this perspective.
Most of the points dedicated to the retractions and decline effect, and the relation between journal rank and scientific unreliability are also extensively made in the literature that the authors cite.
In other words, very few new or novel insights are made in this particular perspective, other than to restate that which has already been debated extensively in the relevant literature.
Reviewer #3:
Brembs & Munafò claim that it is bad scientific practice to use journal rank (that is, a scholarly publishing ecosystem in which there's some sort of hierarchy of journals) as an assessment tool. They are particularly concerned with journal rankings based on Thomson Reuters' Impact Factor (IF).
Their four conclusions are:
1) Journal rank is a weak to moderate predictor of scientific impact;
2) Journal rank is a moderate to strong predictor of both intentional and unintentional scientific unreliability;
3) Journal rank is expensive, delays science and frustrates researchers;
4) Journal rank as established by Thomson Reuters' Impact Factor violates even the most basic scientific standards, but predicts subjective judgments of journal quality.
I have problems with at least some of the interpretation of evidence used to support the first three of these (I think it'd be hard to find anybody who disagrees with the last one).
We think all three reviewers offered valuable and competent suggestions and criticisms on some points of our manuscript, and we are currently working on a substantial revision for submission elsewhere. Because of the largely positive tone of the reviews and the very specific criticisms they offered, we thought that publishing the draft manuscript together with the points raised by the reviewers (most of which we tend to agree with) would be valuable for the section of the scientific community that is less familiar with the data at our disposal on journal rank (and the kind of data we still lack). We have quite some work ahead of us and will likely not have a revised version ready for submission before October/November.
We urge all interested parties to pay special attention to the references we cite, not just to our own summary of the published results. The interpretation of data is sometimes controversial, which is precisely why we cite all the data on which our conclusions rest. We would be delighted to receive additional, competent criticism of our reading of the empirical evidence.
We hope that by releasing our draft manuscript early, the questions we currently lack sufficient data to answer will inspire colleagues to collect that data and help us all make more informed decisions about what is arguably one of the most important infrastructures in all the sciences and many of the humanities: our scholarly communication system, or rather the lack thereof.
Posted on Tuesday, 14 August 2012