There has been an interesting discussion going on at the message board for editors of PLoS One. I've posted a comment to this discussion that I thought might be interesting to others as well. Here it is (slightly edited):

I may be thinking a little too far ahead here, but so is the entire PLoS One endeavor. I see at least two major reasons why any sort of journal-level impact measure is meaningless:
  1. It's absolutely irrational to assume that where something is published says anything valid about the quality of a particular paper's content.
  2. As most journals are, in the long run, most likely going to die out in their present form anyway, why bother comparing PLoS One to others (other than maybe during a transitional period)?
IMHO, individual paper assessment is the obvious way to steam ahead. No publication currently provides comprehensive paper assessment. Some show citations, others list their most accessed papers, but none of them exhaust the full technical potential. (un)Fortunately, there is no substitute for reading a paper when you want to find out how good it is. "Quality" means something different for everybody. Hell, it means something different for me, depending on what kind of paper it is! The only thing quantitative measures can capture is something along the lines of popularity, attention and fashion. This is not necessarily a bad thing: an attention-grabbing paper usually means something in science; not necessarily something good, but very often. Therefore, I think every single one of the values Peter Binfield mentioned for inclusion in PLoS One papers is important:

  • Number of citations
    AFAIK, only the citations intrinsic to the PLoS system can be counted unambiguously. Ideally, PLoS could use this as an opportunity to develop an open citation standard, such that authors can collect a standard set of citations from multiple sources using a common protocol (a sketch of what a client of such a standard could look like follows this list). This would be transparent, and public pressure would force others (Google, Scopus, etc.) to adopt the standard, making it as transparent and complete as possible. One may then develop an interface to attach all of this to the paper itself, if one so wishes.
  • Number of downloads / views
    Access figures from the PLoS system can only give a lower bound on the true access statistics, but cooperating sites could agree to add their request data to a common database (a solvable technical problem). Of course, a download doesn't mean the paper was read, but neither does a citation. Nobody can get at 'read' data anyway, so why bother? Gaming this system can be reduced by standard IP- or cookie-based flood controls (also sketched after this list).
  • Amount of 'Relevant' Blog Coverage
    If people like or dislike the paper enough to leave a trackback, count the coverage (a sketch of how blog and media trackbacks could be tallied follows this list as well). Just like 'quality', 'relevance' cannot be assessed unambiguously anyway.
  • Amount of News Coverage
    In each press release, encourage media to leave a trackback (perhaps separate from blog trackbacks) by visiting a link which is only accessible to accredited news media. This counter should include the press release itself, how many outlets it went to and how many agencies have picked it up (as far as technically feasible). That way, the press release is attached to the paper itself, along with at least some of the media coverage for at least some period of time (for those who want to check the 'relevance' of the coverage).
  • Number of Times Bookmarked in Social Bookmarking Sites (analogous to citations)
    Great idea! Given the way these sites work, it shouldn't be too difficult to crawl them to get fairly accurate numbers (the same multi-source aggregation pattern as in the citation sketch below would apply). I don't think it will be very easy to game that system, unless you know a few hundred people who are willing to help you, or have a few hundred email addresses with which to sign yourself up.

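To make the citation idea concrete, here is a minimal Python sketch of what a client of such an open citation standard could look like. Everything in it is an assumption: the endpoint URLs, the JSON reply format and the idea that each source exposes a per-DOI count all stand in for whatever the standard would actually specify. The same pull-counts-from-many-sources pattern would work for the social bookmarking numbers, too.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical endpoints: none of these URLs exist, they stand in for
    # whatever PLoS, Google, Scopus etc. would expose under a common protocol.
    SOURCES = {
        "plos":   "https://api.plos.example/citations?doi={doi}",
        "google": "https://api.google.example/citations?doi={doi}",
        "scopus": "https://api.scopus.example/citations?doi={doi}",
    }

    def fetch_citations(doi):
        """Collect per-source citation counts for one paper.

        Assumes each source answers with JSON like {"doi": ..., "count": N}.
        Sources that are down or do not know the DOI are skipped, so the
        result is always a lower bound rather than a fabricated zero.
        """
        counts = {}
        for name, template in SOURCES.items():
            url = template.format(doi=urllib.parse.quote(doi, safe=""))
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    counts[name] = json.load(resp)["count"]
            except OSError:
                continue  # unreachable source: missing data, not zero
        return counts

    # fetch_citations("10.1371/journal.pone.0000000")
    # -> e.g. {"plos": 3, "google": 7, "scopus": 5}
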
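The flood control mentioned under downloads is equally simple to sketch: repeat requests for the same article from the same client (IP address or cookie) within a time window are not counted. The class name, the one-hour window and the in-memory dictionaries are my choices for illustration; a real system would persist this state.

    import time

    class DownloadCounter:
        """Per-article download tally with basic flood control."""

        def __init__(self, window_seconds=3600):  # one hour, chosen arbitrarily
            self.window = window_seconds
            self.counts = {}     # article_id -> downloads counted
            self.last_seen = {}  # (article_id, client_id) -> last counted time

        def record(self, article_id, client_id, now=None):
            now = time.time() if now is None else now
            key = (article_id, client_id)
            last = self.last_seen.get(key)
            if last is not None and now - last < self.window:
                return False  # repeat inside the window: not counted
            self.last_seen[key] = now
            self.counts[article_id] = self.counts.get(article_id, 0) + 1
            return True

    counter = DownloadCounter()
    counter.record("pone.0000000", "134.2.0.1")  # counted
    counter.record("pone.0000000", "134.2.0.1")  # flood: ignored
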
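And for the two trackback-based measures, a sketch of how blog and accredited-media pings could be tallied separately. A TrackBack ping really is just a form-encoded POST carrying url/title/excerpt/blog_name parameters, and the XML replies below follow that convention; the token handed out with press releases, the function name and the in-memory dictionaries are made up here for illustration.

    from urllib.parse import parse_qs

    blog_coverage = {}   # paper_id -> number of blog trackbacks
    media_coverage = {}  # paper_id -> number of accredited-media trackbacks
    MEDIA_TOKEN = "handed-out-with-the-press-release"  # hypothetical secret

    def handle_trackback(paper_id, body, token=None):
        """File one TrackBack ping under blog or media coverage, depending
        on whether it arrived via the token-protected media URL."""
        params = parse_qs(body)
        if "url" not in params:
            return "<response><error>1</error><message>missing url</message></response>"
        bucket = media_coverage if token == MEDIA_TOKEN else blog_coverage
        bucket[paper_id] = bucket.get(paper_id, 0) + 1
        return "<response><error>0</error></response>"

    # A blog pings the paper's public trackback URL:
    handle_trackback("pone.0000000", "url=http://example.org/post&title=Nice+paper")
    # A news outlet pings the accredited URL it got with the press release:
    handle_trackback("pone.0000000", "url=http://news.example.com/story",
                     token=MEDIA_TOKEN)
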
The more variables there are to game, the more difficult gaming becomes. Now we have one variable (the IF) and we all know who is gaming it ad nauseam. In this thread we have five measures; add ratings and comments and you have seven. This should be impossible to game for anyone but a hacker who can get thousands of machines on the net to hype just this one paper :-)

All of these measures are relevant even long after publication. Some papers ignored by the media may later turn out to harbor the most important discovery of the century, while some of those tossed around everywhere turn out to be completely irreproducible. Having these measures in place, if nothing else, would allow us to quantify and study such events.

But again, no matter how many numbers you have, these measures cannot substitute for actually reading the papers! The numbers barely give you a rough idea of where a paper or a scientist can be placed with respect to others in the same field. Yet, these measures would be light-years ahead of any one-dimensional, irreproducible, obviously manipulated and corrupt measure such as the IF.
Posted on Monday 21 July 2008 - 17:55:28