This question is directed at people who think that ranking the ~24,000 scientific journals according to a negotiable, irreproducible and mathematically unsound measure is a practical way of sorting the wheat from the chaff. The specific uselessness of Thomson's IF aside: even if this ranking were done in the best possible way, what would be a consistent way of using it?
Right now, between one and two million articles are published each year in those ~24,000 scholarly journals. Even if you stayed close to your own field, that is far too many for any single person to read. If you sit on a search committee for a tenure-track position, you would need to read at least one paper from each of the 60-300 applicants - unrealistic. Thus, both for your own science and for administrative purposes, the historical way of assessing quality (i.e., reading) has become physically impossible. Once someone else has already done the job of ranking journals for other purposes, it is no surprise that people simply count how many papers a candidate has in which journals and rank them accordingly - or dismiss a paper, without reading it, after seeing that it was published in a journal below a certain rank. After all, this spares the individual researcher from having to read everything, which is impossible anyway.

Such an attitude is inconsistent and revealing. It reveals that people who hold it don't really care about their research - apparently any excuse, no matter how bad, is good enough to avoid reading papers. And it is inconsistent, because the consistent conclusion would be to stop writing papers altogether. If the editors at the journals really are so good at picking 'good' science, why not have them just listen to five-minute presentations by the researchers, pick their favorite work and write a 500-word news-and-views-type article? That would cut down drastically on the two million papers and on their length - a win-win situation.

Why are none of those who defend journal rank honest, bold and visionary enough to argue for the consistent execution of what is currently a completely inadequate, dilettantish practice? If it is all too much anyway, and any selection, no matter how inadequate, is good enough, why not go all the way and stop writing papers altogether? After all, the few people who still have the time to care how the researchers did it and what exactly came out can still send an email and ask for the details. We all know these short, news-type articles are the only ones read in the GlamMagz anyway.

I agree with the need to filter papers, but I want to be in control of the filter. I don't want editors to control my filter, and I definitely don't want a monopolist like Thomson to muck it up. I don't care where something is published: if it's in my direct field, I need to read it, no matter how bad it is. If a paper is in my broader field, I'd apply some light filtering based on signals such as ratings, comments, downloads, the authors' institution or social bookmarks. If the paper is in a related field, I'd only want to read reviews of recent advances. If it's in an unrelated field that I'm nonetheless interested in, I'd only want to see the news-and-views article, because I wouldn't understand anything else anyway. For everything else, titles, headlines or news reports are good enough for browsing. All of this can be done after publishing, and none of it requires any artificial grouping by pseudo-tags (formerly called journals).
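The reader-controlled filter described above can be sketched in a few lines of code. To be clear: this is a hypothetical illustration, not a real tool or API; the proximity categories, the signal score and its threshold are all made up to show the idea of mapping field distance to reading depth.

```python
# Hypothetical sketch of a reader-controlled, post-publication filter.
# All category names and the 0.5 threshold are illustrative assumptions.

def reading_depth(proximity, signal_score=0.0):
    """Return how much of a paper to read, given its distance from my field.

    proximity: "direct", "broader", "related", "interested", or "other"
    signal_score: aggregated community signals (ratings, comments,
                  downloads, social bookmarks), used only for "broader".
    """
    if proximity == "direct":
        return "full paper"            # read it, no matter how bad it is
    if proximity == "broader":
        # light filtering on post-publication signals;
        # 0.5 is an arbitrary illustrative cutoff
        return "full paper" if signal_score >= 0.5 else "abstract only"
    if proximity == "related":
        return "reviews of recent advances"
    if proximity == "interested":
        return "news-and-views summary"
    return "title or headline only"    # everything else: browse only
```

The point of the sketch is that the filter runs after publication and belongs to the reader: changing one threshold or category changes my filter, without any editor or journal ranking involved.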
Posted on Monday 10 August 2009 - 15:37:35
