ReaderMeter looks into the database of the reference manager Mendeley and checks how many people have bookmarked which papers for later referencing in their own scientific papers. So, for example, you can go and check out the statistics of yours truly. They're not all that impressive compared with my citation statistics, but given the userbase of Mendeley and compared with my peers, they seem about right. These data show you how many people have probably read your papers and might be planning to cite them in a later manuscript.
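At its core, a readership count like this is just the number of distinct users who have bookmarked each paper. Here is a toy sketch in Python with invented bookmark records (not ReaderMeter's actual code; names and data are made up):

```python
from collections import Counter

# Hypothetical bookmark records: (user, paper) pairs, as a reference
# manager like Mendeley might store them. Each pair is assumed unique,
# so counting pairs per paper equals counting distinct readers.
bookmarks = [
    ("alice", "Brembs 2008"),
    ("bob",   "Brembs 2008"),
    ("carol", "Brembs 2008"),
    ("bob",   "Brembs 2010"),
    ("dave",  "Brembs 2010"),
]

# Readership per paper = number of users who bookmarked it.
readers = Counter(paper for _, paper in bookmarks)

print(readers.most_common())  # [('Brembs 2008', 3), ('Brembs 2010', 2)]
```

The interesting part is not the counting, of course, but who is doing the bookmarking, which is exactly where the userbase problem below comes in.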

ReaderMeter depends critically on the number of Mendeley users generating the data, of course. A biologist on Mendeley is more likely to bookmark one of my papers than an engineer or a social scientist is, which is how the userbase skews these statistics, just as citations cannot be compared across disciplines. These sorts of statistics once again show how absurd it is to have different providers offering different services:

  • You have Faculty of 1000 (disclaimer: I'm a faculty member), which provides expert reviews of research publications, but its logo shows up only on evaluated papers in PubMed and not on any of the other search portals (yes, we have 4-6 of these in the sciences). To find the evaluations and/or the evaluated papers, you need to subscribe.
  • You have PLoS One (disclaimer: I'm an Academic Editor), where every published paper can be commented on, downloads and citations are tracked, and your search for papers can be filtered and sorted by some of these criteria. However, this functionality exists only on their site, and user profiles on PLoS reveal nothing about the user: no number of papers published or handled as editor, no citations or downloads, nothing.
  • You have Frontiers in Neuroscience (disclaimer: I'm an Associate Editor), where some, how shall I put it, technically rather obscure process also evaluates readership and leads to the 'promotion' of papers through a journal hierarchy, such that the most widely read papers eventually end up in a very general journal. None of this is visible outside the Frontiers website, so papers can only be compared within the Frontiers system.
  • You have CiteULike, another reference manager, where you get Amazon-like suggestions à la "users who bookmarked the paper you just bookmarked have also bookmarked this paper". Alas, you need to visit their website to get these suggestions.
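Such co-bookmarking suggestions boil down to counting which other papers appear in the libraries of users who hold the paper at hand. A minimal Python sketch with made-up libraries (this is an illustration of the general idea, not CiteULike's actual algorithm):

```python
from collections import defaultdict

# Hypothetical library data: which users bookmarked which papers.
libraries = {
    "alice": {"paper A", "paper B"},
    "bob":   {"paper A", "paper B", "paper C"},
    "carol": {"paper A", "paper B"},
}

def also_bookmarked(paper, libraries):
    """Papers most often co-bookmarked with `paper`, best match first."""
    counts = defaultdict(int)
    for user_papers in libraries.values():
        if paper in user_papers:
            for other in user_papers - {paper}:
                counts[other] += 1
    return sorted(counts, key=counts.get, reverse=True)

print(also_bookmarked("paper A", libraries))  # ['paper B', 'paper C']
```

Real systems weight these counts (e.g. by library size or user overlap), but the co-occurrence counting is the heart of it.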

Most if not all of these bits and pieces of information are important to scientists and administrators. Yet there is no way to get all of these statistics from one source (or for one researcher, for that matter). Thus, all of these great efforts are, at least for now, useless: they are either skewed by a small userbase, locked behind a subscription, or plain impractical. These limitations make them impossible to use for any comparison beyond proofs of principle, and impossible to use as a filtering, sorting or discovery system outside a very limited number of fields and an even more limited number of papers.
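Technically, the "one source" problem is not hard: if every provider exposed its per-paper numbers under a common identifier such as the DOI, merging them would be trivial. A hypothetical Python sketch with a made-up DOI and made-up numbers:

```python
# Hypothetical per-provider metrics, each keyed by DOI. In reality each
# provider keeps these locked into its own site; a shared standard would
# make this merge a one-liner for anyone.
mendeley  = {"10.1371/journal.pone.0000001": {"readers": 57}}
plos      = {"10.1371/journal.pone.0000001": {"downloads": 1200, "comments": 3}}
citations = {"10.1371/journal.pone.0000001": {"citations": 24}}

merged = {}
for source in (mendeley, plos, citations):
    for doi, metrics in source.items():
        merged.setdefault(doi, {}).update(metrics)

print(merged)
# {'10.1371/journal.pone.0000001':
#  {'readers': 57, 'downloads': 1200, 'comments': 3, 'citations': 24}}
```

The obstacle is not the code but the missing agreement on identifiers and open access to the numbers, which is precisely the standards problem discussed below.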

In brief, these great innovations are doomed to fail, because they will die out before even a double-digit fraction of scientists knows about them. In the end, the Betamax vs. VHS battle will play out before our eyes again: the superior technology will lose to the deeper pockets. In our case, that means the Thomson Reuters and the Elseviers will come up with some form of easy-to-understand "Impact Factor 2.0" with which we'll be stuck for another 50 years.

If we want to get a grip on the current pace at which science progresses, there needs to be a movement that unifies these great efforts to create standards. Because of their usefulness, these standards will attract a lot of attention and hence lead to adoption. Right now, what we have are fragments, each of which makes us sigh: oh, if only we had this for all publications!

Why don't we have this for all publications already? Because the widespread frustration with the current publishing system keeps prompting new circles of people to come up with ever new and brilliant ideas. But the opposition is splintered, and the People's Front of Judea doesn't talk to the Popular Front.

So what are these organizations and people waiting for? Get your heads together and develop standards, to make all these really important technologies available to everybody. I want one place where I can find, sort, discover and store the scientific literature for my daily work. I want this place to offer the latest technology to assist me in the stupendous task of doing that with 2.5 million scholarly papers published every year. Obviously, I'm more than willing to pay for a place where I can get that. I've estimated before that such a place would probably save me 5-10 hours of boring title-sifting every week.

Alternatively, you could of course just sell all these goodies to Thomson Reuters or Elsevier...
Posted on Tuesday 31 August 2010 - 06:25:17
ReaderMeter   mendeley   article level metrics   citations   statistics   
