I just came across an article in the Journal of Cell Biology (of all places) which could be the first solid nail in the coffin of that dreaded, vile impact factor.

What is the impact factor?

For the uninitiated, the impact factor is a number scientists have to buy from a monopolist company, Thomson Scientific (formerly the Institute for Scientific Information, ISI). The company claims that the number is an objective measure of the scientific impact of articles published in scholarly journals. With more than 20,000 such journals, university administrators are understandably happy to be able to use these numbers and average them, instead of actually having to read and understand any scientific publications when considering candidates for a faculty position, tenure or a promotion. In this way, every scientist gets one number and you don't really have to work too hard to rank them according to their impact, without knowing anything else about them. With so many journals to choose from, scientists themselves are very keen on these numbers, as they help them decide in which journals they should publish their work to have the most impact. Grant review panels are also happy that the impact factor exists, because they can use it to assess the value of previous work on a topic without having to read it at all. Obviously, everybody is happy that Thomson Scientific is doing all this heroic work. And of course they should charge for this service; there ain't no such thing as a free lunch, right?

How is the impact factor calculated?

The following is an example (modified from this excellent source) of how Thomson Scientific would calculate Journal X's impact factor for 2007 (a small numeric sketch follows the list):
  • Citations in 2007 (only in journals indexed by Thomson Scientific) to all articles published by Journal X in 2005–2006
  • divided by the number of articles deemed to be “citable” by Thomson Scientific that were published in Journal X in 2005–2006
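To make the arithmetic concrete, here is a minimal sketch in Python. Journal X and every number in it are invented for illustration, and the definition of "citable" is Thomson Scientific's alone, so this only mirrors the formula quoted above:

```python
# Minimal sketch of the 2007 impact factor formula described above.
# All inputs are hypothetical; the real counts are not public.

def impact_factor(citations: int, citable_items: int) -> float:
    """2007 IF = citations in 2007 to articles published in 2005-2006,
    divided by the number of 'citable' items from 2005-2006."""
    return citations / citable_items

# Hypothetical Journal X: 1500 citations in 2007 to its 2005-2006 output,
# of which 500 items were deemed "citable".
print(impact_factor(1500, 500))  # -> 3.0
```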
So the more your work is cited in other journals, the more impact it has had on the research of other scientists, which is exactly what Thomson Scientific claims. Or, in other words, the impact factor is a popularity contest, which is exactly what administrators and scientists are both interested in: administrators want to hire the stars in their field and scientists want to become the stars in their field (because this is what the people with the positions want), and funding agencies need to justify their spending to politicians who, of course, don't understand any science at all. With all this positive feedback, it is no wonder that the impact factor now has an almost iconic status in the scientific community as a whole and is worshipped as the be-all, end-all of objective science evaluation.

What’s wrong with the impact factor?

The concept has so many flaws that I can't list all of them, and it has come under increasing criticism in recent years. For a quick overview you only need to read three short articles: the JCB article, a PLoS Medicine editorial and an article in the Chronicle of Higher Education. I'll only emphasize three main points:
  1. The trickle-down journal pyramid. Everybody is now pressured to publish in the few high-impact journals (most journals have an impact factor below 1; Nature and Science, for example, are around 30). This forces the editors of these journals to reject most of the submitted manuscripts before real scientists have even had a look at them. You then just submit your manuscript to the next "lower" journal, and so on. Once the editor likes your manuscript, your chances of publication there jump from below 10% to over 60% (in the case of Nature). In other words, you write the manuscript for the editor, not for other scientists (and I know: I have published in these journals and did not particularly like the experience). For most papers, only once you come down to the "low-impact" journals does the regular peer review start. One could say that if your science is simple and sexy enough for a journal editor to become interested and understand the idea behind it, you have a decent chance of publishing it in one of the high-impact "vanity" journals. Conversely, journal editors have every incentive to find ways to increase the impact factor of their journals, which leads me to
  2. The denominator of the impact factor equation is negotiable with the monopoly company. New journals such as PLoS Medicine have reported negotiations in which the projected impact factor ranged from less than 3 to 11, depending on what was counted as "citable". The Cell Press journal Current Biology is reported to have had an impact factor of 7.00 in 2002 and 11.91 in 2003. The denominator somehow dropped from 1032 in 2002 to 634 in 2003, even though the overall number of articles published in the journal increased (see the back-of-the-envelope sketch after this list). I wonder what sorts of negotiations were necessary for this improvement. Nobody knows, which leads me to
  3. Until this new paper in the JCB, nobody was able to verify the calculations made by Thomson Scientific; it was a completely opaque process. As with voting machines in politics, if a community is crucially dependent on a given process, that process had better be of the utmost transparency, or trust in it will break down rapidly.
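To get a feeling for how much leverage point 2 gives an editor, here is a rough back-of-the-envelope sketch in Python. The denominators (1032 and 634) and impact factors (7.00 and 11.91) are the reported Current Biology figures from above; the citation total is back-calculated, and the 2002 and 2003 numerators are of course not identical, so this is illustrative only:

```python
# Back-of-the-envelope check: how far does the shrinking denominator
# alone go towards explaining Current Biology's jump from 7.00 to 11.91?

implied_citations_2002 = 7.00 * 1032   # ~7224 citations implied for 2002

# Holding the citation count constant, the smaller 2003 denominator gives:
print(implied_citations_2002 / 634)    # -> ~11.4, close to the reported 11.91
```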

What's new in this article?

The new JCB article (actually an editorial) appeared three days ago and is a giant leap forward in the process of reforming the way science is evaluated. The authors bought subsets of the Thomson Scientific databases and tested them for accuracy. It turned out that the numbers published by Thomson Scientific, which decide grants, positions and therefore livelihoods, cannot be backed up by the data. When alerted to this discrepancy, Thomson Scientific responded by sending a new database which was supposed to be the one from which the commercially available, published figures had been calculated. Even this database did not yield the published impact factors. The authors (all editors of journals published by Rockefeller University Press) called a monopolist's bluff and won. This is a huge step towards breaking a monopoly that directly harms many people's lives, as compared to merely "stifling innovation", which our most beloved monopolist, Microsoft, was convicted of. As the authors put it:
“If an author is unable to produce original data to verify a figure in one of our papers, we revoke the acceptance of the paper. We hope this account will convince some scientists and funding organizations to revoke their acceptance of impact factors as an accurate representation of the quality —or impact— of a paper published in a given journal. Just as scientists would not accept the findings in a scientific paper without seeing the primary data, so should they not rely on Thomson Scientific's impact factor, which is based on hidden data.”
Finally, a word of caution: the agreement with Thomson Scientific prevented the authors from releasing their data for public scrutiny. Hence, there was also little point in publishing the methods by which they assessed the validity of Thomson's published impact factors. What is required now is an independent audit of the data and a transparent, public analysis of the hidden data and processes leading to the published impact factors. If Thomson Scientific fails to provide such an open account of its practices, it has no business providing any services to the scientific community.

I've found four other blogs covering the JCB article (The Medium is the Message, The Krafty Librarian, Open Access News and Open Access Archivangelism).