Much has been written about the Impact Factor, a bibliometric measure originally developed in the 1960s by Eugene Garfield as a tool to rank scholarly journals according to their mean citation rate, and now published every year by Thomson Reuters. In the proliferating journal landscape of the 1960s, the idea was that such a tool would help librarians and scientists figure out which journals garnered the most attention.
Since then, the number of scientists has increased manifold. Concomitantly, the scientific disciplines have diverged to the point that, for instance, a developmental neurobiologist may no longer be able to understand a publication from a behavioral neurobiologist, and vice versa; let alone a botanist understanding an animal physiologist, or the other way around. Yet research funding and positions are still handed out the old-fashioned way: by evaluating researchers and their projects on the basis of scientific merit. How is this possible if you neither know the scientist nor understand his/her projects? Worse still, this old system is extremely prone to "old boys' networks", which tend to hamper the development of gender equality in the sciences, an issue that has already cost university presidents their jobs.
To sum it up: more people than you could possibly know, more fields than you could ever understand, and you're not even allowed to stick to the people and topics you do know, for fear of being accused of nepotism or chauvinism. All that in a group of people who make a living off of quantifying things. Obviously, you'd expect these people to come up with a clever way of solving their problem!
Au contraire, my friend: scientists are only overworked humans, too, and wouldn't dream of wasting their time on actually solving such a petty problem when they can pick an existing method that gives the impression of solving it (and otherwise go about their business as usual). The idea is so simple, it's like having a free lunch and eating it too: we already have the journals all nicely ranked and people publishing in them. So why not simply look at a person's (or project's) publications and see how often he/she/it landed publications in the high-ranking journals? That's objective, I don't even need to read the titles of the publications, and of course it's completely gender-blind. What could possibly be wrong with that?
Oh boy, where to start? By now, at least the two readers of this obscure blog know the top three reasons why the Impact Factor is flawed:
  1. It's negotiable (check out this screenshot suggesting that Thomson's database is set up for manipulation: it allows for two records for the number of articles published by a single journal in the same year)
  2. It's not reproducible
  3. It's the wrong measure (mathematically): a mean computed over a highly skewed distribution (see the quick simulation below)
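To see why the mean is the wrong summary statistic here, a minimal simulation helps. The numbers and the log-normal shape below are illustrative assumptions, not fitted to real citation data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-article citation counts for one journal. Real citation
# distributions are heavily right-skewed; a log-normal is a rough,
# purely illustrative stand-in.
citations = rng.lognormal(mean=1.0, sigma=1.2, size=1000)

print("mean   (what the Impact Factor reports):", round(citations.mean(), 2))
print("median (a more robust summary):         ", round(np.median(citations), 2))
print("share of articles below the mean:       ",
      round((citations < citations.mean()).mean(), 2))
```

With a skew like this, roughly three quarters of the articles fall below the "average" citation rate, so the mean describes almost no actual article.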
But even if these flaws were all fixed, the Impact Factor would still, as a matter of principle, be entirely unsuited for ranking anything other than journals. Because the distribution of citations to the articles in a journal is so skewed, the correlation between the citations any individual article gathers and the Impact Factor of the journal it was published in is very weak:

[Figures: citation distributions of BMJ articles, and the (weak) correlation between article citations and journal Impact Factor]
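The same point can be made with a toy simulation (journals, counts, and parameters below are all made up): even when journals genuinely differ in their average citation rate, that average predicts very little about any single article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: 50 journals, 200 articles each. Every journal has its own
# underlying citation level, but individual articles are drawn from a
# heavily skewed (log-normal) distribution around that level.
n_journals, n_articles = 50, 200
levels = rng.uniform(0.2, 2.0, size=n_journals)

per_journal = [rng.lognormal(mean=mu, sigma=1.2, size=n_articles)
               for mu in levels]

articles = np.concatenate(per_journal)                     # each article's citations
journal_mean = np.repeat([j.mean() for j in per_journal],  # its journal's "Impact Factor"
                         n_articles)

r = np.corrcoef(journal_mean, articles)[0, 1]
print(f"journal mean vs. article citations: r = {r:.2f}, r^2 = {r * r:.1%}")
```

Under these assumptions, the journal mean explains only on the order of ten percent of the variance in article citations; the rest is within-journal spread.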

Nobody in their right mind would ever dream of using the average weight of someone's family to guess that person's actual weight (and in contrast to citations, weight is at least normally distributed!), but that's precisely what scientists are doing. The mindset of the scientific community is really mind-boggling: when the inadequacy of this procedure is pointed out, rather than developing a device to measure that person's actual weight, many scientists proclaim that the average is the best there is, despite its flaws, and insist on using it:
I agree with all that has been stated about the negative aspects of the IF. However, the REALITY is that the uninitiated, such as a Committee judging tenure, or a Committee deciding on a new Chair appointment, absolutely require Impact Factors for each article.
[...]
Therefore there is no use in arguing against the importance, or lack of same, of the IF... let's just get it.
[...]
Scientists (sad but true) belong to the most conservative professionals in general. Despite anything that is counting against the IF, this will remain the measure for 99% of us for time to come.
[...]
Authors will still care about it, although they will officially say that IFs are ridiculous. But scientists are hypocrites (like most people) and they will continue to send their papers to high-IF journals, whether we like it or not.
[...]
I say this with some sadness, because I myself do not care much about IFs, but I know a terrible lot who do, and I have given up all my attempts to discuss this with people, since it seems hopeless to argue about. That's human nature, I guess.
[...]
People are still quite concerned about IFs, although they admit that they are misleading, but as long as the research finance system favours authors who publish articles in high-IF journals, they will continue to try to get published there, whether we like it or not.
[...]
I couldn't agree more with all that is being said about IFs. Unfortunately, the reality for most academics is that all kinds of evaluation committees use IFs to evaluate a researcher's output.
[...]
I know how flawed IF is, but we just cannot be blind to the reality.
[...]
The reasons you enumerate against the IF system are of course valid. However, IFs are still the most used way of evaluating a researcher's career and value. Even if we find this ridiculous, it's just the way it is.
[...]
we get MONEY for each and every impact factor point. MONEY! And all of us know how important money is for doing research. Thus, independent of all the other pros and cons on impact factors, it is a SURVIVAL factor. [...] So, no matter what the scientific community thinks - politics/politicians love IFs.
These are some of the arguments scientists bring up when the use of the Impact Factor is criticized. I wonder what these people would say to a student who tried to defend his/her flawed measurements with such arguments: "Sorry, professor, but my dog ate the scales, so I had to use the published population average!" Anyway, all this means that in this day and age, it matters more for a scientific career where someone has published than what that person published.
So the excuses for why something so bad is still around are the same lame ones as everywhere else: "that's how it is, that's how it always was, how should we change it?" At least the question at the end is starting to be addressed. There are now different ways to rank scholarly journals, for instance by SCImago. PLoS ONE is also starting to implement article-level metrics. CiteRank can be used to study citation networks, much in the way Google evaluates link networks. In a comment on one of my posts on the topic, Pedro Beltrao wrote:
A year or so ago I had a look at the correlation of the number of times an article is bookmarked and the number of citations, as well as the number of times it is mentioned in blog posts and its citations. Both "social" measures relate positively with the number of citations, so they can be used as a metric.
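As a rough sketch of that kind of check (all counts below are invented for illustration, not Beltrao's actual data), a rank-based correlation is a sensible choice for skewed count data:

```python
from scipy.stats import spearmanr

# Hypothetical per-article counts, made up for illustration only.
bookmarks  = [3, 0, 12, 5, 1, 25, 7, 0, 9, 2]
blog_posts = [1, 0,  4, 2, 0,  6, 3, 0, 2, 1]
citations  = [10, 2, 45, 18, 5, 80, 30, 1, 22, 8]

# Spearman's rho compares ranks, so it is robust to the heavy skew
# typical of citation and bookmark counts.
for name, social in [("bookmarks", bookmarks), ("blog posts", blog_posts)]:
    rho, p = spearmanr(social, citations)
    print(f"{name:>10} vs citations: rho = {rho:.2f} (p = {p:.3f})")
```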
Now there has been an effort to study how 39 measures of scientific impact correlate with each other in a principal component analysis. The authors' conclusion:
Our results indicate that the notion of scientific impact is a multi-dimensional construct that can not be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.
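For the curious, here is a minimal sketch of that kind of analysis, with synthetic data standing in for the 39 real indicators (matrix shape, scaling, and all numbers are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Stand-in data: 500 journals scored on 39 impact measures. In the real
# study the columns would be citation counts, usage statistics, network
# centralities, etc.; here they are random numbers with shared structure.
latent = rng.normal(size=(500, 3))            # a few underlying "impact" dimensions
loadings = rng.normal(size=(3, 39))
X = latent @ loadings + rng.normal(scale=0.5, size=(500, 39))

X_std = StandardScaler().fit_transform(X)     # measures live on very different scales
pca = PCA(n_components=5).fit(X_std)

print("variance explained per component:",
      np.round(pca.explained_variance_ratio_, 3))
# A measure at the "core" of the construct loads strongly on the first
# component(s); one at the periphery, like the IF per the study, does not.
print("loading of measure 0 on PC1:", round(pca.components_[0, 0], 3))
```

If most of the variance concentrates on the first component or two, the measures share a common "impact" core; a measure that loads weakly on those components sits at the periphery, which is where the study places the Impact Factor.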

Formally and informally, more and more people are now starting to discuss replacements for the Impact Factor. Cases in point are the REF (Research Excellence Framework) in the UK, which is bound to go "beyond the Impact Factor", and the recent blog post "On Journal Impact Factors" by Stanford Assistant Professor of Anthropology James Holland Jones. Of course, one of the prerequisites for an efficient and fair replacement is unique contributor IDs. So replacements are coming up: one less reason to even look at Impact Factors.
Posted on Friday 20 February 2009 - 15:38:36