This is a table that records my performance in peer-reviewing duties for my favorite scientific journal. Check it out: 3 out of 3 submitted BEFORE the due date. Now that's public service.
Authors submit their manuscript online to a scientific journal. The choice of journal is a complex decision that takes into account not just the study topic and quality, but also the range of interest the results are expected to generate, and journal impact factors (the higher a journal's reputation, the harder it is to get into -- like really snobby country clubs). The higher you shoot, prestige-wise, the less likely it is that the paper will be accepted; if it's rejected, you have to try all over again elsewhere, and that all takes time. The major conflicting factors here are the authors' degree of confidence in their work and how urgently they need to get published in order to keep a roof over their head, since funding depends pretty much directly on the publication record. The catch-22 being that a publication in a "low-impact" journal will count for less in the rat-race for funding.
Aaaanyway. Once submitted, the manuscript is assigned to an editor, either automatically by field of specialty, or by named request from the author, or by decision of the chief editor, or using an Ouija board for all we care.
The editor usually does a quick "appropriateness" evaluation of the manuscript topic to check that it's at least in the ballpark of the journal's range of topics (not to censor anyone's particular interests, but a paper on tree frog phylogeny for publication in the widely acclaimed* "Microgravity Science and Technology", really?), then sends it out to up to three reviewers for critical evaluation ["date reviewer invited"].
* This is a joke, it's a totally obscure journal as far as life scientists are concerned, and it's kind of funny because I co-authored a tiny paper in it a while back that doesn't even show up in PubMed, that's how obscure it is. The paper had nothing to do with tree frogs, though; that was just a random example. As far as I know no one has yet tried to publish tree frog phylogeny stuff in MST (which are the initials for STD in French, by the way) but what do I know, it's a crazy crazy world and anything could happen.
The reviewers are full-time working research scientists, chosen by the editor on the basis of their knowledge of who's who in the field and of the authors' suggestions (yes, you get to suggest who should review your own paper, and veto who shouldn't -- but it's a tricky game for reasons I'll go into some other time), who will perform this time-consuming task for ZERO MONEY (entirely for free, yes; so considering that authors pay to be published and readers pay to read, guess who's making money in this scheme, I'll give you a hint, it begins with "pub" and ends with "sher") and ZERO RECOGNITION (reviews are single-blind; reviewers know whose work they are evaluating, but authors don't know who reviewed their work; the rules prevent disclosure or questioning on either side). There's a lot of whining (though not by me, obviously) about these various aspects of the peer-review system.
If an invited reviewer accepts the assignment ["date reviewer agreed"] they have two weeks to review the paper ["date review due"]. My acquaintance with a chronically exasperated editor has taught me that reviewers tend to view this due date as a friendly suggestion at most. I'm told it is a very difficult task indeed to compel already overworked people to stick to a deadline when they're doing hard work for you for free. The phrase "herding cats" comes to mind. My acquaintance with a chronically exasperated editor has also taught me to religiously respect the deadline, as illustrated in the table above. *Yeah, I get a gold star*
Reviewing involves going through the manuscript in excruciating detail, evaluating everything from technical correctness to language, along with quality of illustrations, relevance to the field, originality, study design, color of the first author's shoelaces -- wait, no, that one's not right. Everything else is fair game though. Depending on the publisher, the results can get collated in different formats, but the bottom line is always the same: the reviewer is asked to choose a recommendation from the following list.
- Accept immediately [super rare on first submission! occasion of much rejoicing! I have experienced this! I am a shameless braggart!]
- Accept pending minor revisions [= a couple hours' work should do the trick, clarifying a few points, cleaning up spelling and italics, and maybe adding a couple of references to papers that might all have been co-authored by one of the reviewers, what, didja think you were being subtle, mister?]
- Major revisions necessary [ahh, see, that can mean anything from clarifying a few points, rewriting a couple of paragraphs and maybe touching up a figure to having to perform the dreaded additional experiment required by the Evil Reviewer From Hell Who Has No Idea How Hard That Is Going To Be Why'd You Think We Didn't Do It In The First Place And Holy Shit We Have Just One Month To Get It Done???]
- Reject outright [ouch -- liberal application of beer is recommended]
Once all reviewers have sent their reports back ["date review submitted"], the editor has the unenviable job of comparing the reviews and coming to a final decision. Sometimes this requires sending the paper out to an additional reviewer (especially if the paper was originally only sent out to two, which happens quite often in times of reviewer-drought) if the reports are too widely divergent. For example if one of them is all "This is the most fantastic world-shakingly awesome paper ever, you should publish it immediately otherwise they'll take it to 'Nature' or 'Science' or 'Microgravity Science and Technology'!!!" and the other is more "Worst piece of putrid decomposing garbage I've ever had the misfortune of having slither across my desk, the authors should be required to commit seppuku on the spot and probably will want to anyway once they read my vitriolic raving rivulets of venom aka reviewer comments."
The editor's decision is eventually sent back to the main authors, who have been nervously scrutinizing their inbox since submitting the manuscript, first every week, then every day by the end of the first month, then every hour by the end of the second, and don't let's consider what state they're in when it takes longer (been there, it's not pretty). And of course, double-checking their account on the publisher's website in case there was a problem with the email.
When it finally comes, what the authors get is an email with the final decision lovingly wrapped up in pre-written platitudes. The third line is usually the one where the key words will be found: the extremes, "I am sorry to inform you" or "I am pleased to inform you"; or the wishy-washy it-could-go-both-ways "I invite you to resubmit with revisions".
The gory details of the reviewers' reports, whether one wants them for the ensuing session of CPR, open-heart surgery or autopsy, will usually be found in an attached file. Not their whole reports though; just the section specifically intended for the authors. The specific criteria-grading and confidential comments to the editor (the most entertaining section in bad reviews by well-behaved people, I dare say -- inconsiderate bastards, on the other hand, will just let rip in the author-visible comments, and never mind what poor grad student's ego gets crushed in the process... academic publishing; it's a bloodsport) remain just that: confidential.
I just finished a review that took a lot out of me, so I have some fresh thoughts about what to say and how to say it in the comments to the authors (hint: I believe in being civil and *gasp* helpful). Next post. Probably, unless I get another chocolate-related inspiration. Or a furry little guy makes a comeback (there's definitely something along those lines in store). Stay tuned.