The recent kerfuffle caused by Butler's article in Nature seems to have set quite a number of stones rolling. What I thought would be a rather slow process seems to be speeding up considerably because of Nature's rather unclever attack on PLoS. Cameron Neylon's blog post "What I missed on my holiday or why I like refereeing for PLoS One" points out the real danger PLoS One poses to basically all traditional journals:
From an author’s perspective PLoS ONE cuts out the crap in getting papers published. The traditional approach (send to Nature/Science/Cell, get rejected, send to Nature/Science/Cell baby journal, get rejected, send to top tier specific journal, get rejected, end up eventually going to a journal that no-one subscribes to) takes time and effort and by the time you win someone else has usually published it anyway. It also costs the authors money in staff time to re-format, re-jig, appease referees, re-jig again to appease a different set of referees. I haven’t done the sums but worst case scenario this could probably cost as much as a PLoS ONE publication charge. Save time, save money, still get indexed in PubMed. It starts to sound good, especially for all that material that you are not quite sure where to pitch.
[...]
To me the truly radical thing about PLoS ONE is that it has redefined the nature of peer review and that people have bought into this model. The idea of dropping any assessment of ‘importance’ as a criterion for publication had very serious and very real risks for PLoS. It was entirely possible that the costs wouldn’t be usefully reduced. It was more than possible that authors simply wouldn’t submit to such a journal. PLoS ONE has successfully used a difference in its peer review process as the core of its appeal to its customers. The top tier journals have effectively done this for years at one end of the market. The success of PLoS ONE shows that it can be done in other market segments. What is more it suggests it can be done across existing market segments. That radical shift in the way scientific publishing works that we keep talking about? It’s starting to happen.
Today's system of scientific journals started as a way to make effective use of a scarce resource, printed paper. Soon thereafter, publishers realized there were big bucks to be made and increased the number of journals to today's approx. 24,000. Today, there is no longer any technical reason why you couldn't have all 2.5 million papers science puts out every year in a single database. It doesn't take an Einstein to realize that PLoS One is currently the only contender in the race to provide this database. For all involved, it is equally clear what the many advantages of such a database would be. Consequently, traditional publishers are rightfully concerned that their customer base is slowly disappearing.
I'm no longer alone in believing that we are seeing the beginning of the end of traditional journals. The acceptance of PLoS One is a quantitative marker of this development, and the positive reactions I get from virtually everyone involved (even scientific editors at traditional journals) underscore the numbers.
Precursor to this publishing reform was access reform: scientific papers are the result of publicly funded research and should be publicly accessible. This reform now appears to be well underway and will probably conclude in 2-3 years. Both reform movements have their base in the more general open science movement, whose goal is full public access not only to the published papers, but also to the raw data, ideas and reagents for sharing among scientists. There are still plenty of problems which have to be worked out before open science can become a reality, if it is even feasible. One of the easier problems to solve (one shared with publishing reform) is how to attribute credit: if we all publish in the same database and share ideas online, how can two scientists competing for the same position or grant be assessed objectively? Cameron Neylon just returned from a conference on open science and writes in his summary:
I am sceptical about the value of ‘microcredit’ systems where a person’s diverse and perhaps diffuse contributions are aggregated together to come up with some sort of ‘contribution’ value, a number by which job candidates can be compared. Philosophically I think it’s a great idea, but in practice I can see this turning into multiple different calculations, each of which can be gamed. We already have citation counts, H-factors, publication number, integrated impact factor as ways of measuring and comparing one type of output. What will happen when there are ten or 50 different types of output being aggregated? Especially as no-one will agree on how to weight them. What I do believe is that those of us who mentor staff, or who make hiring decisions should encourage people to describe these contributions, to include them in their CVs. If we value them, then they will value them. We don’t need to compare the number of my blog posts to someone else’s – but we can ask which is the most influential – we can compare, if subjectively, the importance of a set of papers to a set of blog posts. But the bottom line is that we should actively value these contributions – let’s start asking the questions ‘Why don’t you write online? Why don’t you make your data available? Where are your protocols described? Where is your software, your workflows?’
I would disagree here and argue that a multivariate portfolio is exactly what is required. Different universities/employers will focus on different aspects of a researcher and value some of his/her contributions more than others. I don't think there can be too many measures to capture the complexity of scientific output. I'd like to see an aggregating service, maybe based on services like OpenID, where a flexible portfolio can be organized such that employers can easily search for the traits they are looking for and find or compare the people who excel at those traits.
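To make this concrete, here is a minimal sketch (in Python) of what such a portfolio could look like. It is purely illustrative and based on my own assumptions: the contribution kinds, metric names and the naive weighted-sum scoring are invented for the example, not drawn from any existing service, and a real system would need far more careful, gaming-resistant measures.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Contribution:
    kind: str                 # e.g. "paper", "blog_post", "dataset", "software"
    title: str
    metrics: Dict[str, float] = field(default_factory=dict)  # e.g. {"citations": 40}

@dataclass
class Portfolio:
    researcher: str           # in practice, a persistent identifier (e.g. an OpenID URL)
    contributions: List[Contribution] = field(default_factory=list)

def score(portfolio: Portfolio, weights: Dict[str, float]) -> float:
    """Rank a portfolio by an employer-chosen weighting of contribution kinds."""
    return sum(weights.get(c.kind, 0.0) * sum(c.metrics.values())
               for c in portfolio.contributions)

# Two employers can weight the very same portfolio differently:
alice = Portfolio("https://alice.example.org/openid", [
    Contribution("paper", "Operant learning in flies", {"citations": 40.0}),
    Contribution("dataset", "Raw behavioral traces", {"downloads": 300.0}),
    Contribution("blog_post", "Why open data matters", {"links": 25.0}),
])
research_uni = {"paper": 3.0, "dataset": 2.0}      # values papers and raw data
outreach_uni = {"paper": 1.0, "blog_post": 2.0}    # values public communication
print(score(alice, research_uni), score(alice, outreach_uni))

The point of the sketch is only that the portfolio itself stays multivariate; each employer applies their own weights at query time, instead of everyone being collapsed onto a single universal number.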
The analogous problem to comparing researchers is that of comparing papers. I have already written about this problem and I think it is easy to solve. I think most researchers would gladly pay for a service which has a track record of picking the most interesting, groundbreaking and well-done papers from the 2.5 million every year. Today's professional editors would be a great pool from which such services could recruit employees.
Like dino-oil, there's still some use in long-dead structures.
Posted on Saturday 19 July 2008 - 19:13:40