Now here's a must-read for anyone interested in entering a science career. The journal "Ethics in Science and Environmental Politics" has just published a special issue called "The use and misuse of bibliometric indices in evaluating scholarly performance". The title says it all, and if you think some unknown math nerds have gotten together to publish their new bibliometric formulae, you are sorely mistaken. Big shots in the science industry, such as Nature Editor-in-Chief Philip Campbell and some other people you might know, appear on the author list. I still haven't read all of the papers, but I can already quote some instant classics. Philip Campbell:
Our own internal research demonstrates how a high journal impact factor can be the skewed result of many citations of a few papers rather than the average level of the majority, reducing its value as an objective measure of an individual paper. [...] The majority of our papers received fewer than 20 citations. [per year] [...] the numbers quoted in calculating the impact factor are highly questionable. Try as we might, my colleagues and I cannot reconcile our own counts of citable items in Nature, several other Nature journals and indeed Science, with those used by ISI. [...] the judgement of ‘better’ is best kept independent of the impact factor. [...] for a sure assessment of an individual, there is truly no substitute for reading the papers themselves, regardless of the journal in which they appear.
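To see what Campbell means by "skewed", consider a toy example (my own invented numbers, not Nature's citation data): when a handful of papers collect hundreds of citations and the rest get a couple of dozen, the mean, which is essentially what the impact factor reports, lands far above what the typical paper achieves.

```python
from statistics import mean, median

# Toy citation counts for ten hypothetical papers -- invented numbers,
# only meant to show how a few heavily cited outliers inflate the average.
citations = [3, 5, 8, 10, 12, 14, 15, 18, 250, 600]

print(f"mean citations:   {mean(citations):.1f}")  # 93.5 -- what an impact-factor-style average reports
print(f"median citations: {median(citations)}")    # 13.0 -- what the typical paper actually gets
```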
William Cheung: young scientists may even develop bad writing habits (e.g. exaggerating implications of findings, over-simplifying analyses and conclusions, ignoring caveats) if an excessive desire to publish in high-impact journals skews their scientific judgement or publication ethics. [...] The use of publication counts and number of citations to assess academic performance do affect the publishing strategy of young scientists. Particularly, at the stage of being a post-doc, to compete for the limited number of junior faculty or equivalent research positions, one would try to maximize the number of publications and their impacts in a short time-horizon
Peter Lawrence: It has always been crucial for research scientists to publish their work. There have always been 3 purposes: first, to disseminate new information so that others can learn from it; second, so that other scientists may repeat the studies, or build on them with additional observations or experiments; and only third, so that the support, financial or otherwise, for the scientist can be justified to interested parties. This third reason used to be subsidiary, but no longer; publication has become the main goal because it is the scientist’s lifeline (Lawrence 2003). This enormous change in emphasis has damaged the practice of science, has transformed the motivation of researchers, changed the way results are presented and reduced the accuracy and accessibility of the scientific literature. [...] Since scientists are now assessed, not so much by the validity, interest or quality of the work itself, but by the impact factor of the journal (Steele et al. 2006), many, if not most scientists, spend too much time and effort thinking and worrying about publication strategy. [...] Politics enervates science. [...] I predict that ‘citation-fishing and citation-bartering’ will become common practice [...] scientists will claim superiority over others if they have more citations, and this will be endorsed by bean counters everywhere [...] There are other consequences of the use of numerical measures: given that meeting them rewards aggressive, acquisitive and exploitative behaviour (Lawrence 2002, Lawrence 2003, Montgomerie & Birkhead 2005), their use will select for scientists with these characteristics. [...] grant applications do not describe what you will actually do but are in reality an ingenuity and knowledge test in which honesty is little valued; they amount to an attempt to demonstrate that one knows what one is doing and can divine what the outcomes of experiments will be and assess what might be risky to reveal.
Todd & Ladle: Our paper supports Lawrence’s (2007, p. R583) view that impact factors and citations are ‘dodgy evaluation criteria’, and we strongly advise against a system that wholly relies upon them to evaluate a scientist’s contribution.
Definitely one of the most insightful articles is The Siege of Science. If all of this doesn't whet your appetite to go and read all of them, I don't know what possibly could. Taken together, the articles only reinforce my point of view: we need to get rid of journals. Period. All of them. One single, peer-reviewed, open-access database for primary scientific literature. Let journals publish reviews and the like. A single open-access scientific database provides everyone with the most important assessment resource: the scientific papers themselves, to read and study. If any additional metrics are required, you can sort articles by downloads, citations, comments, ratings, editor's choice, media coverage, trackbacks, links or whatever else strikes your fancy. Of course, on every paper you can click on any author and get all of their papers with all the meta-data. On top of all that, no more ISI, PubMed or any other indices to search for papers: they're all searchable in full text in one place.
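For what it's worth, the kind of re-sorting I have in mind needs nothing fancier than one flat record per paper. Here's a minimal sketch of the idea (the Paper fields, titles and numbers are all my own invention, not any existing database schema):

```python
from dataclasses import dataclass

@dataclass
class Paper:
    """One record in the hypothetical single open-access database."""
    title: str
    authors: list[str]
    downloads: int = 0
    citations: int = 0
    comments: int = 0
    rating: float = 0.0

# Invented placeholder records, purely for illustration.
papers = [
    Paper("Example paper A", ["Author One"], downloads=5400, citations=120, rating=4.6),
    Paper("Example paper B", ["Author Two"], downloads=2100, citations=35, rating=4.1),
    Paper("Example paper C", ["Author One", "Author Three"], downloads=8900, citations=310, rating=4.8),
]

# Sort by whatever metric strikes your fancy -- citations here; downloads or rating work just as well.
for p in sorted(papers, key=lambda p: p.citations, reverse=True):
    print(f"{p.citations:>4}  {p.title}")

# "Click on any author": pull every paper by a given author, with all its metadata.
papers_by_author_one = [p for p in papers if "Author One" in p.authors]
```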