Today, I came across two very different but equally interesting and telling things to read. One was a letter from our funding agency (DFG) explaining why our main grant wasn't renewed, and the other was this article on the evaluation of science. The reasons for the grant rejection were easy to summarize: both reviewers complained that the results from the one project in the grant which we didn't want to renew were not described in the renewal application. It wouldn't have occurred to me that this should be in there. The other reason both reviewers raised was that we hadn't presented sufficient publications or publication-ready data, despite the fact that one crucial machine was broken for one of the 2.5 years of the preceding grant period.

I'm not sure what to make of the first comment, but the second comment made me think that if I ever publish more than 0.5-1 papers a year out of one project, the questions I'm asking have become too easy to answer. Clearly, putting out a lot of papers is not a sign of better science. In fact, it seems more like an indication of the opposite to me.

The article above, containing an eloquent and passionate plea to change the way we do science, joins a long list of similar articles that have been written over the years in virtually every scientific journal, and yet the way we evaluate science has only become worse over the last two decades. Obviously, appealing to honor or professional standards is not an effective way to actually bring about change, at least not in the scientific community. Maybe what is needed is hard, solid evidence? The kind of evidence that convinces scientists in their everyday jobs. This idea prompted Marcus Munafò and me to write a review paper on the empirical evidence about the relation of journal rank to various measures of scientific impact or quality. We have recently published a draft of this manuscript, which we are currently revising. Our review arrives at the following four conclusions:
  1. Journal rank is a weak to moderate predictor of scientific impact;
  2. Journal rank is a moderate to strong predictor of both intentional and unintentional scientific unreliability;
  3. Journal rank is expensive, delays science and frustrates researchers;
  4. Journal rank as established by Thomson Reuters' Impact Factor violates even the most basic scientific standards, but predicts subjective judgments of journal quality.
Thus, the data suggest that our subjective impression that journal rank reflects some measure of scientific quality or impact is a figment of our imagination that cannot be substantiated by the scientific method.
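
To get an intuition for what a "weak to moderate predictor" means quantitatively, here is a minimal simulation sketch in Python. The correlation of r = 0.3 is a hypothetical value chosen purely for illustration (not a figure from our review): even at that strength, journal rank would account for less than a tenth of the variance in scientific impact.

```python
import numpy as np

# Hypothetical numbers for illustration only -- not figures from the review.
# If journal rank correlated with citation impact at r = 0.3 ("weak to
# moderate"), rank would explain just r**2 = 9% of the variance in impact.
rng = np.random.default_rng(0)

r = 0.3                        # assumed rank-impact correlation (illustrative)
n = 10_000                     # number of simulated papers
rank = rng.standard_normal(n)  # standardized journal rank
impact = r * rank + np.sqrt(1 - r**2) * rng.standard_normal(n)

observed_r = np.corrcoef(rank, impact)[0, 1]
print(f"correlation:        {observed_r:.2f}")    # ~0.30
print(f"variance explained: {observed_r**2:.1%}") # ~9%
```

In other words, even a correlation at the upper end of "weak to moderate" leaves more than 90% of the variation in impact unexplained by where a paper was published.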

Looking at the literature, it becomes clear that the only reasonable way to decide about the value of a scientific project, paper or scientist is to understand it. For instance, you can't take the number of papers of a person and conclude that they make difficult problems look easy because they've published many papers - the problems may, in fact, have been easy. These sorts of reflections hold for any kind of metric. However, the scientific enterprise has become so large and diverse that it is practically impossible to understand all the problems, scientists and proposals one has to evaluate - both in terms of number and in terms of diversity. Is the logical solution, then, to shrink science - maybe a particularly attractive option given current international budget crises? While this would eventually solve the problem of having too many scientific entities to evaluate, it would not solve the problem of diversity: metrics would still be required to get at least a rough understanding of the relative merit of work that is just too far from one's own area of expertise to ever be able to understand in sufficient detail.

Thus, I think the rational solution is to begrudgingly accept that metrics will be required for scientific evaluation in the future. This requirement entails that we all become thoroughly familiar with their use and misuse, and with the incentives they provide, and that we use them prudently and with scientific rigor. Anything else would not only be irresponsible, it would also be short-sighted and eventually self-defeating.
Posted on Friday 31 August 2012 - 16:56:45