
2008 is only about halfway over and it's already been a devastating year for monopolist Thomson Scientific, formerly the Institute for Scientific Information (ISI). First, the company's impact factor flunked a scientific test, twice (ok, that was December 07, but still). In January, Thomson tried to fight back, but with little impact. In March, PLoS One published a paper presenting a new way of calculating journal impact. In May, the journal Epidemiology ran several articles arguing eloquently for getting rid of the bibliometric impact factor (BIF, or brain irritability factor). And earlier this month, a special issue of the journal Ethics in Science and Environmental Politics appeared in which authors hammered the use of the impact factor left, right and center.
And now there's a new hole in the IF bucket. How much longer will it be able to stay afloat? This time, the mathematicians have had a closer look at bibliometrics in general and the impact factor specifically. The International Mathematical Union (IMU) has published a report on citation statistics (PDF). The summary contains some nice points which I would like to quote here:
  • Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics.
  • While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations.
  • The sole reliance on citation data provides at best an incomplete and often shallow understanding of research, an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.

Using citation data to assess research ultimately means using citation‐based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused.

  • For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health.
  • For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs.
  • For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h‐index, which seems to be gaining in popularity. But even a casual inspection of the h‐index and its variants shows that these are naïve attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
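
To make the report's point about the impact factor concrete, here is a minimal sketch of my own (not from the IMU report), with made-up citation counts. The impact factor is just the mean of a typically very skewed citation distribution, so a handful of highly cited papers can drag the "average" far away from what a typical article in the journal actually receives:

    # Hypothetical 2008 citation counts for the 20 articles a journal
    # published in 2006-2007 (made-up numbers, for illustration only).
    from statistics import median

    citations = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 8, 10, 25, 120]

    # The impact factor is simply citations received divided by citable items:
    impact_factor = sum(citations) / len(citations)
    print(impact_factor)       # 10.0 - driven almost entirely by two outliers
    print(median(citations))   # 3.0  - what a "typical" article actually gets

Two papers account for nearly three quarters of that "10.0"; judging any individual article or author by it is exactly the misuse the report warns about.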

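The h-index is just as easy to compute, and just as lossy. Another sketch of my own (again, not from the report): it is the largest number h such that h of your papers have at least h citations each, and two very different citation records can collapse onto the same value:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(ranked, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # Two very different careers, same single number (made-up records):
    print(h_index([50, 18, 12, 9, 8, 7, 7, 3, 1, 0]))   # 7
    print(h_index([7, 7, 7, 7, 7, 7, 7]))               # 7

Which is exactly the point of the last quote: the single number throws away the information you would need to tell these records apart.
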
Is there anything I could add to that? The IF's dead, baby, the IF's dead.

Posted on Thursday 26 June 2008 - 11:02:00 comment: 0
bibliometrics   science publishing   citations   citation metrics   impact factor   citation statistics   
