The discussion at our ScienceOnline09 session "Reputation, authority and incentives. Or: How to get rid of the Impact Factor" had already hinted that people were generally very interested in a service which could eventually aggregate all the contributions of researchers and the reactions of their community (citations, comments, ratings, etc.; see also the Friendfeed liveblog).
In the meantime, this idea has gotten some legs. Cameron Neylon (who also recorded the session on Mogulus) wrote a detailed blog post on how he thought OpenID could be used as a standard to establish such a service:
So what about building an OpenID service specifically for researchers? Imagine a setup screen that asks sensible questions about where you work and what field you are in. Imagine that on the second screen, having done a search through literature databases it presents you with a list of publications to check through, remove any mistakes, allow you to add any that have been missed. And then imagine that the default homepage format is similar to an academic CV.
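To make that concrete, here is a rough sketch of what such a first-run flow could look like. This is purely illustrative: the literature search is stubbed with static data, and all function and field names are my inventions, not part of any proposed standard.

# Illustrative sketch of the setup flow described above: ask who and where
# you are, fetch candidate publications, and let you prune the list.
# A real service would query CrossRef, Scopus, Google Scholar, etc.

def search_literature(name, affiliation, field):
    """Stub standing in for a federated literature-database query."""
    return [
        {"title": "Paper A", "year": 2007, "doi": "10.1000/example.1"},
        {"title": "Paper B", "year": 2008, "doi": "10.1000/example.2"},
        {"title": "A namesake's paper", "year": 2006, "doi": "10.1000/example.3"},
    ]

def setup_profile(name, affiliation, field):
    profile = {"name": name, "affiliation": affiliation, "field": field,
               "publications": []}
    for pub in search_literature(name, affiliation, field):
        answer = input(f"Is '{pub['title']}' ({pub['year']}) yours? [y/n] ")
        if answer.strip().lower().startswith("y"):
            profile["publications"].append(pub)
    return profile

if __name__ == "__main__":
    print(setup_profile("Jane Doe", "Example University", "neurogenetics"))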
However, what would be the incentives motivating researchers to use such a service? After all, there already is Thomson Reuters' ResearcherID, and not too many people are using it. A service like this obviously needs to provide clear and obvious benefits for all involved parties: the researchers themselves, funders, publishers and librarians. Funders and publishers would benefit from a unique person identifier for every person in their database, as people's names may either be common or change, and their addresses sometimes change in rapid succession, given the semi-nomadic lifestyle of scientists. So one incentive could come from these two groups: they could require each researcher to provide a unique ID. In that case, one needs to have a standard in place which is appealing enough to funders and publishers that they trust it and would want to implement it. These kinds of thoughts were quickly picked up by the community and tossed around on Friendfeed and several other places (resource wiki).
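The technical core of that incentive is simply a stable key that survives name and address changes. A minimal sketch of the idea (the ID scheme and field names are made up for illustration, not an existing schema):

# Minimal sketch: one stable researcher ID keys all name variants and the
# affiliation history, so records stay linked when either changes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResearcherRecord:
    researcher_id: str                                      # the stable, unique key
    name_variants: List[str] = field(default_factory=list)
    affiliations: List[str] = field(default_factory=list)   # oldest first

record = ResearcherRecord(
    researcher_id="urn:researcher:0000-0001-2345-6789",     # made-up scheme
    name_variants=["J. Doe", "Jane Doe", "Jane Q. Smith"],  # e.g. after a name change
    affiliations=["University A", "Institute B", "University C"],
)

# A publication record would then store only the ID, never a name string:
publication = {"doi": "10.1000/example.4", "authors": [record.researcher_id]}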
This discussion is where CrossRef came in. CrossRef is a not-for-profit membership association which brought us the digital object identifier (DOI) and which currently sports 654 members. The organization has already discussed a unique researcher identifier (two years ago!) and has confirmed its interest in the Friendfeed discussion:
the author ID problem is "much bigger than publishers". We are talking to researchers, librarians, funding agencies, etc. about what they would require from a service. We were at the CNI meeting and Cliff Lynch is on our advisory board and is aware of our project.
The backing by CrossRef may actually move this discussion from hypothetical to beta-version 'soon-ish', which is why I'm excited about it.
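For a sense of why CrossRef's backing matters: the DOI already solves the analogous problem for documents, providing one persistent identifier that resolves to wherever the document currently lives. A quick sketch (requires network access; 10.1000/182 is the DOI of the DOI Handbook itself):

# Sketch: resolve a DOI to its current URL via the dx.doi.org proxy,
# which answers with an HTTP redirect to the document's present location.
import urllib.request

doi = "10.1000/182"  # substitute any DOI you want to resolve
with urllib.request.urlopen(f"https://dx.doi.org/{doi}") as response:
    print(response.geturl())  # the URL the DOI currently points to

A researcher identifier would presumably follow the same resolution model, with a person rather than a document at the end of the lookup.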
As one of the contributors said:
Imagine a Web where everything you did publicly was linked by the very fact that you were represented by a URL exactly like your blog post, or your photo on Flickr, or your post on Twitter, or your correction to that Wikipedia entry, or your research paper in your institutional repository for that matter…. think of the possibilities.
You could have a completely cross-referenced and full-text searchable literature database the way getCITED was intended (but it never took off, because it had to be edited by hand). Nowadays, as ResearcherID and Harzing's "Publish or Perish" show, the automatic construction of publication lists and their citations is already a reality. However, because these algorithms are not perfect and the services are separate (i.e., mainly Thomson's ISI, Scopus and Google Scholar), one still has to edit such lists by hand. I think Cameron's 5 criteria for new online social services apply very well to the plan to establish a unique researcher identifier, and the potential is there for all of them to be met. Personally, I'm only waiting for a signal to get involved. In case others need a signal, I'm willing to provide it!
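The hand-editing exists because the same paper comes back from different services with slightly different metadata, and matching records across services is exactly what a unique-ID infrastructure would have to automate. A toy sketch of such a merge, with a deliberately crude normalization step:

# Toy sketch: merge publication lists from several sources, keyed on a
# crudely normalized title; real record matching is much harder than this.
import re

def normalize(title):
    return re.sub(r"[^a-z0-9]", "", title.lower())

def merge(*sources):
    merged = {}
    for source in sources:
        for pub in source:
            merged.setdefault(normalize(pub["title"]), pub)
    return list(merged.values())

isi = [{"title": "Operant Learning in Drosophila", "doi": "10.1000/ex.1"}]
scholar = [
    {"title": "Operant learning in Drosophila.", "doi": None},  # same paper
    {"title": "A Second Paper", "doi": "10.1000/ex.2"},
]
print(merge(isi, scholar))  # two entries: the duplicate has collapsed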
Now why should something like that be able to replace the dreaded Impact Factor in research assessment? Because, as is well known, the IF is a very blunt and in most cases completely useless tool for research assessment. Research evaluation needs to be fine-grained but scalable from individual contributions, to researchers, to grants and institutions. Scientific evaluation also has to be tunable for the needs of the evaluator: is the focus more on the research per se, on its scientific or general relevance, or on teaching, reviewing, science communication, or a balance of all these factors? Some of these points have been discussed in a recent PLoS Computational Biology article. Currently, research evaluation is neither scalable nor tunable.

Such a service, if it existed, would allow each researcher to compose their own database of contributions and to fill it automatically. Publications and citations are only the beginning. If more metrics start to be aggregated for each publication, such as the ones PLoS One is rolling out next month, all other kinds of relevant reactions to scientific contributions can be assessed. Publishers and funders could provide feedback on the number and quality of reviews a particular researcher has provided for them. Blog posts, comments and other intellectual contributions could be tracked, database contributions would become immediately visible, teaching evaluations could be incorporated, and so on. The possibilities are endless. This sort of service has to come, and better yesterday.
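To make "tunable" concrete: once contributions and reactions are aggregated per researcher, each evaluator could weight them to taste. A deliberately simple sketch (all metric names, numbers and weights are invented for illustration):

# Sketch of tunable evaluation: one aggregated record, scored under
# different weight profiles depending on what the evaluator cares about.

record = {  # invented numbers for a single researcher
    "citations": 120, "downloads": 3400, "reviews_written": 15,
    "blog_posts": 40, "teaching_score": 4.2, "datasets": 3,
}

profiles = {
    "research_focused": {"citations": 1.0, "downloads": 0.2, "datasets": 0.5},
    "service_focused": {"reviews_written": 1.0, "teaching_score": 1.0,
                        "blog_posts": 0.3},
}

def score(record, weights):
    return sum(weights.get(metric, 0.0) * value
               for metric, value in record.items())

for name, weights in profiles.items():
    print(name, round(score(record, weights), 1))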

UPDATE: The Names Project is now also aware of these most recent developments.
Posted on Friday 23 January 2009 - 17:31:45