I spent the past two days in Cologne on a site visit for a large collaborative research center. I sat on a panel of 10 reviewers evaluating a total of 18 grant proposals. The DFG asked us to judge not only the merit of the proposals themselves, but also the track records of all the applicants/PIs.
Perhaps not surprisingly, most of the reviewers, when making a case for a particular track record, regularly referenced "articles in high-impact journals" as a criterion for the quality of the applicant, in some cases exclusively. Conversely, when the quality of an applicant was questioned, even that of distinguished scholars, arguments popped up such as: "published a paper in Nature in 2005 and then nothing even close to Nature since". Consequently, the minutes of the site visit are chock-full of references to papers in high-IF journals. It became very clear to me that applicants without GlamMag publications would have a very tough time in site visits like this one.
I have to mention that there is a silver lining, though: two of us strongly opposed the use of journal rank where it was used against the applicants, in part by circulating the available data on journal rank. This led to interested questions and discussions with the other reviewers, and I think we managed to at least get a few people in positions of considerable power thinking. It's probably wishful thinking on my part, but I'll go to bed now with the fuzzy feeling of having put perhaps one tiny nail in the coffin of journal rank.
My experiences these past two days stand in stark contrast to recent statements by Michael Eisen, who claims that journal rank is not as important a criterion for promotion and funding as everybody thinks. Given that so far we only have anecdotes to argue with, it would be really important to have some data on this question: for which decisions, in which fields, and in which countries is journal rank an important criterion?
Posted on Wednesday 24 April 2013 - 22:16:35