15 November, 2006

Real peer review

Peer review is the sacred core of the self-correcting machinery of science. Before it can be published, a new paper must pass muster with qualified experts in the field it covers, ensuring that dodgy results and poorly supported conclusions do not make it out into the literature to impede the grand progress of the scientific enterprise.

Well, that’s the theory – but what’s it like in practice?

A source of constant amusement (and some trepidation) to me is the way that some of the most vital jobs in your academic career seem to be just dropped into your lap: you are asked, told, expected to do them and do them well, but no-one thinks to provide much in the way of guidance, or even check that you’re vaguely competent, before gladly handing the workload over*. That’s certainly been the case for lecturing (for instance, no-one on the teaching staff has ever bothered to sit in on one of my lectures to see if they are actually coherent), and my introduction to the ‘review’ bit of the peer review process has been suspiciously similar. My two helpings of review fodder to date, the second of which I polished off yesterday, have both come to me in the same way - the editors sent a paper to my supervisor/boss, and he fobbed it off in my direction (so that’s exactly like the teaching, then...). Whether their eagerness to do so reflects confidence in his opinion (scary) or desperation (scarier) is hard to tell.

That’s the first dose of reality – willingness to review others’ papers is a grudging quid pro quo for other people reviewing yours, and the abstract realisation that we have to do it to make the system work isn’t much of an incentive when you have other, more concrete, demands on your time. This is especially true for the people most frequently solicited for reviews - prominent scientists at the forefront of their field, who want to spend their time staying at the forefront rather than carefully checking the work of current or potential competitors. Hence, with one important exception, one of three things will happen:

  1. They review it by skimming the paper and banging off a few lines which quote your abstract back at you and say it’s all fine. A nice ego boost, but hardly constructive.
  2. They stick it in their in-tray and leave it there for 5 months**.
  3. They offload the task onto a lesser minion (sometimes promptly, sometimes only after nagging by the journal editor, if they are still too busy to bang out the cheap and cheerful review described in (1)).

The second dose of reality comes when considering the exception, which is when the submitted paper treads on the toes of one of the reviewer’s pet theories, or (worse) pre-empts something that he is working on himself. Whilst such situations can motivate a prompt review, they can also lead to the concept of objective assessment getting a little…strained, meaning that the review is going to have a strong focus on the negatives. Common examples: a minor problem is talked up as a fundamental flaw, or an unrealistic amount of extra analysis is demanded (a hostile reviewer for one of my papers suggested that to properly establish one of my ‘assumptions’ – in reality the major conclusion of the paper - I would need to undertake elemental analysis of every Miocene outcrop in New Zealand). And as Lab Lemming astutely reminds us in the comments, even where your reviewer is managing to quell the urge to trash your work, you're still liable to an unhealthy dose of cite-napping: it seems that somehow you've missed the key contribution of every paper they've written since 1977, and it's their duty to point this out to you.

In the former case this can be annoying, but it is to a certain extent healthy – who better to test your ideas to destruction than someone not inclined to believe them? The problem is that papers within a given subfield tend to always get sent to the same people, meaning that if those reviewers are hostile to certain ideas or interpretations, it can be very difficult to get papers advancing them published in a top journal. However, even if publication is slowed by such antics, good science will generally out in the end. The latter case is more difficult, because it is not altogether unheard of for a reviewer to hold up someone’s paper while manoeuvring to scoop it, especially in competitive fields like paleoclimatology***. And then, basically, you’re shafted.

All things considered, then, since anyone more than a year into their PhD should have had a healthy amount of practice at critically assessing the scientific merit of a paper (I’ve been scribbling sarcastic comments in the margins since my undergrad days), delegation is perhaps the best outcome; indeed, for lesser lights such as myself, the novelty value of actually being asked for an opinion probably motivates us to be more conscientious, compensating for our relative lack of experience (though I may generalise too much here, given that my pedantic nature forces me in that direction anyway). Now, however, the problem is reversed, because the reviewer finds himself passing judgement on the work of people who are far more important than he is. A glorious victory for egalitarianism, to be sure, but should you identify any serious flaws, bear in mind that in the small world of academia, just because the review process is supposedly anonymous doesn’t mean they won’t find out it was you who trashed their precious paper. You have to choose your words carefully, and even then some people may not be too happy with you.

Hopefully, it won’t come as too much of a surprise that scientists are human beings, and thus the peer review process is witness to as much sloth, incompetence and back-stabbing as any other human activity. The question is, how much does it matter? The scooping issue aside, not as much as some people think. The ‘official’ peer review process is really just a preliminary; the real peer review occurs after publication, when everyone gets to see your opus, read it, write sarcastic comments in the margin, and (if it’s any good) use and cite it in their own work. This is something that ID ‘theorists’ and other pseudoscientists constantly misunderstand: somehow getting a paper ‘peer reviewed’ and published does not automatically convert bad science to good. It might make scientists pay some attention to your ideas, but it doesn’t require them to agree that they are valid; and if they don’t think they are, they will say so. Good ideas are good ideas and bad ideas are bad ideas, whether they’re ‘peer reviewed’ or not, and will gain traction – or not – accordingly. However scientific publishing adapts to the internet age, whether with “open reviews” or some other system, that will always remain the real test.

*This is my experience, anyway – I’d be interested to know whether this is actually fairly common, or whether my department’s unique approach to training is shining through.
** Yes, I’m talking to you, the JGR reviewers who have been sitting on a couple of my papers since June…
*** Which might explain why so many of the paleoclimatologists I’ve met are constantly in a bad mood.

4 comments:

C W Magee said...

My first job outta college involved reviewing papers for a field-leading academic who couldn't be bothered wading through them himself. But sleazebag reviewing isn't limited to papers. I know a PhD candidate who had his thesis sent to a professor for adjudication. This professor, whom I do not have the guts to name, replicated all the analyses on another sample set and published the work as original, before grading and returning the thesis.

Also, it is worth linking to Dr. Shellie's post on cite-napping.

Chris R said...

Ha, yes I missed that one - I can think of a few major offenders in that area. Duly amended.

I'm currently trying to control the hopefully irrational fear that my long-vanished JGR papers are suddenly going to find themselves scooped.

Anonymous said...

Kudos to Lab Lemming for cite-napping your post on cite-napping. Thanks for the publicity!

Dr. Lemming said...

More discussion on reviewers here:
http://lablemminglounge.blogspot.com/2007/04/what-to-do-about-reviews.html