ScienceOnline09

I'm finally back in Berlin and have done some research so I can sum up our final session on open access publishing and Thomson Reuters' Impact Factor. Peter Binfield from PLoS One and I moderated this session together (see coverage of all sessions here). The session started with a brief summary of what the IF is and what its main problems are. I have uploaded the slides of that presentation both at Nature Precedings and at Slideshare. Both sites have mangled the slides pretty badly and it's a tough call which to embed. Nature Precedings is more science-y, so I'd prefer to embed that. But it doesn't play in my Firefox browser, so I'll have to embed the equally bad Slideshare version:
EDIT: Nature Precedings fixed their bug and now the presentation looks perfect!

Anyway, maybe I shouldn't have been surprised that after that presentation, everybody just wanted to affirm that they, too, despised the IF, reaffirming each other's deeply held beliefs. I think it took me three, probably rather rude, interventions (for which I apologize in retrospect) to get the discussion back on track towards the main question of how to get rid of it. Cameron Neylon recorded the whole session via webcam, so you can follow it here:

You can also follow this thread on Friendfeed.
The main gist was that even though many people would probably like to go back to a world without metrics, it's too late for that. Therefore, the most important step in getting rid of the IF is to have some sensible replacement. The ideas for potential replacements included article-level metrics such as those rolled out by PLoS One next month and a Friendfeed-like aggregator collecting all relevant information about a researcher, including, of course, all citations to their publications. A critical component of this movement to replace the IF is a way to disseminate the information about this replacement, as well as easy access to it, such that it becomes more convenient to just look the replacement up than to use Thomson's antiquated IF.
Since the session, Cameron Neylon has written a very pertinent blog post to start this movement (see also on friendfeed) and there are more people coming up with similar solutions to these problems. What is required is a unique ID for each researcher (e.g. OpenID) and a service to aggregate all their information in a way specialized for scientists.
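To make the idea concrete, here is a minimal sketch of what such an aggregator could look like: a researcher record keyed by a unique ID (e.g. an OpenID URL), collecting article-level metrics per publication instead of inheriting a journal-level number. All names, DOIs, and metric fields here are hypothetical illustrations, not any existing service's API:

```python
from dataclasses import dataclass, field

# Hypothetical article record carrying article-level metrics of the kind
# PLoS One planned to expose (downloads, citations, bookmarks).
@dataclass
class Article:
    doi: str
    downloads: int
    citations: int
    bookmarks: int

@dataclass
class ResearcherProfile:
    researcher_id: str  # a unique researcher ID, e.g. an OpenID URL
    articles: list = field(default_factory=list)

    def add_article(self, article: Article) -> None:
        self.articles.append(article)

    def summary(self) -> dict:
        # Aggregate metrics across this researcher's own articles,
        # rather than averaging over a whole journal.
        return {
            "articles": len(self.articles),
            "total_citations": sum(a.citations for a in self.articles),
            "total_downloads": sum(a.downloads for a in self.articles),
            "total_bookmarks": sum(a.bookmarks for a in self.articles),
        }

profile = ResearcherProfile("https://openid.example.org/jdoe")
profile.add_article(Article("10.1371/journal.pone.0000001", 1200, 15, 40))
profile.add_article(Article("10.1371/journal.pone.0000002", 800, 7, 12))
print(profile.summary())
```

The point of the sketch is simply that once each researcher has a stable ID, per-article metrics can be summed over that ID by any third-party service, with no journal in the loop.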
Right after the session, Ivan Oransky suggested contacting ISI founder and IF inventor Eugene Garfield to serve as a figurehead for this movement to seriously upgrade research assessment to current web technology. I think this is a great idea, and I'm open to suggestions on how best to approach him.

There was one final thought that crossed my mind when I reflected on the session and the discussions in and around it. Expert opinion may be better than any metric at assessing the 'quality' (whatever that means) of a particular scientific contribution. But the only thing that can elevate a publication to eternal fame or relegate it to oblivion is time alone: experts can err and metrics can be skewed or manipulated. It took 200 years to show Newton was wrong, Einstein never believed in quantum physics, and Woo Suk Hwang's papers remain highly cited even though they have been retracted.

UPDATE: A whole bunch of things have emerged since this post.
Posted on Wednesday 21 January 2009 - 16:22:08