2008 is only about halfway over and it has already been a devastating year for monopolist Thomson Scientific, formerly the Institute for Scientific Information (ISI). First, the company's impact factor flunked a scientific test twice (ok, that was December 07, but still). In January, Thomson tried to fight back, but with little impact. In March, PLoS One published a paper presenting a new way of calculating journal impact. In May, the journal Epidemiology ran several articles arguing eloquently for getting rid of the bibliometric impact factor (BIF, or brain irritability factor). And earlier this month, a special issue of the journal Ethics in Science and Environmental Politics appeared in which authors hammered the use of the impact factor left, right and center.
And now there's a new hole in the IF bucket. How much longer will it be able to stay afloat? This time, the mathematicians have taken a closer look at bibliometrics in general and the impact factor specifically. The International Mathematical Union (IMU) has published a report on citation statistics (PDF). The summary contains some nice passages which I would like to quote here:
- Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics.
- While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations.
- The sole reliance on citation data provides at best an incomplete and often shallow understanding of research, an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.
Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused.
- For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health.
- For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs.
- For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h‐index, which seems to be gaining in popularity. But even a casual inspection of the h‐index and its variants shows that these are naïve attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
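The report's complaints about averages and single-number summaries are easy to make concrete. Below is a quick sketch in Python using made-up citation counts (purely hypothetical numbers, not data from any real journal or scientist): it computes a two-year "impact factor" as a plain mean, compares it with the median, and shows that two very different citation records can end up with the same h-index.

```python
from statistics import mean, median

# Hypothetical citation counts for the items a journal published in
# 2006-2007, counted in 2008 (the two-year window the impact factor
# uses). Invented numbers with a typically skewed shape.
citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 15, 40, 95]

impact_factor = mean(citations)  # this is all the impact factor is: a mean
print(f"impact factor: {impact_factor:.2f}")   # 11.00
print(f"median       : {median(citations)}")   # 3.5
below = sum(c < impact_factor for c in citations)
print(f"{below} of {len(citations)} articles are cited less than the journal's 'average'")

def h_index(counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two invented citation records: a steady one and a one-hit wonder.
# The h-index cannot tell them apart.
steady  = [6, 6, 5, 5, 5, 4, 3]
one_hit = [300, 5, 5, 5, 5, 0, 0]
print(h_index(steady), h_index(one_hit))  # 5 5
```

With these invented numbers the "impact factor" is 11, yet 14 of the 18 articles are cited less often than that; that is exactly the information a single average hides. And both toy citation records come out with h = 5, even though only one of them contains a 300-citation paper.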
Is there anything I could add to that? The IF's dead, baby, the IF's dead.