
In response to my last post, Dwight Kravitz from the NIH alerted me to his paper on a similar topic: Toward a new model of scientific publishing: discussion and a proposal. His paper contains some very interesting data, such as this analysis of citations and journal rank:
[Figure 2 from Kravitz & Baker (2011): distributions of citation counts for papers from six journals of different rank]
The right-skewed form of the data is of course nothing new, but their analysis of how well journal rank predicts actual citations opens a new aspect, I think:
Our evaluation reveals that far from a perfect filter, the distribution of citations largely overlaps across all six journals (Figure 2). We then asked whether the citation count of a paper could predict the tier at which it was published and found that between adjacent tiers this could only be achieved at 66% accuracy and between the top and third tier at 79%. Thus, even given the self-reinforcing confounds, the journal tiers are far from a perfect method of prioritizing the literature.
So even if you compare very distant tiers in the hierarchy, more than 20% of all papers receive more or fewer citations than 'expected' from the journal they were published in.

Whatever way you look at it, journal rank is completely anachronistic and must go.

Their paper contains a number of absolutely lovely quotes, some of which I just have to showcase:
Scientific papers are published through a legacy system that was not designed to meet the needs of contemporary scientists, the demands of modern publishing, or to take advantage of current technology. The system is largely carried forward from one designed for publishers and scientists in 1665.
[...]
In total, each paper was under review for an average of 122 days, but with a minimum of 31 days and a maximum of 321. The average time between the first submission and acceptance, including time for revisions by the authors, was 221 days (range: 31–533). This uncertainty in time makes it difficult to schedule and predict the outcome of large research projects. For example, it is difficult to be certain whether a novel result will be published before a competitor's even if it were submitted first, or to know when follow-up studies can be published. It also makes it difficult for junior researchers to plan their careers, as job applications and tenure are dependent on having published papers.
[...]

Scientific progress is supposed to be largely incremental, with each new result fully contextualized with the extant literature and fully explored with many different analyses and manipulations. Replications, with even the tiniest additional manipulations, are critical to refining our understanding of the implications of any result. Yet, with the focus on the worthiness for publication, especially novelty, rather than on scientific merit, Reviewers look on strong links with previous literature as a weakness rather than a strength. Authors are incentivized to highlight the novelty of a result, often to the detriment of linking it with the previous literature or overarching theoretical frameworks. Worse still, the novelty constraint disincentivizes even performing incremental research or replications, as they cost just as much as running novel studies and will likely not be published in high-tier journals.
[...]
Luckily, these deficiencies are structural and do not arise because of evil Authors, Reviewers, or Editors. Rather, they are largely a symptom of the legacy system of scientific publishing, which grew from a constraint on the amount of physical space available in journals. The advent of the Internet eliminates the need for physical copies of journals and with it any real space restrictions. In fact, none of the researchers in our lab had read a physical copy of a journal in the past year that was not sent to them for free. Without the space constraint there is no need to deny publication for any but the most egregiously unscientific of papers. In fact, we argue that simply guaranteeing publication for any scientifically valid empirical manuscript attenuates all of the intangible and quantifiable costs described above. Functionally, publication is already guaranteed; it is simply accomplished through a very inefficient system. 98.2% of all papers that enter the revision loop are published at that same journal, and few papers are abandoned over the course of the journal loop.

The authors also suggest a system without journals, in which publication is guaranteed after pre-publication peer review and a post-publication peer-review service provides alert functionality for readers. To this I'd add: have, say, current GlamMag journal editors set up competing review services, and after some time let users evaluate which of these services has been the more accurate (e.g., how often were selected/non-selected papers cited, and which review service predicted these outcomes?). Constantly updated performance records would put the editors under pressure to select papers accurately, and users would be able to choose the service that selected the papers most relevant to them.



Kravitz, D., & Baker, C. (2011). Toward a new model of scientific publishing: discussion and a proposal. Frontiers in Computational Neuroscience, 5. DOI: 10.3389/fncom.2011.00055
Posted on Wednesday 14 December 2011 - 11:37:48