Hard to Tell Who Can Best Advise Journals

In the survey, experienced reviewers were asked about the training they had received in peer review and about other aspects of their background. The results, published in PLoS Medicine (Public Library of Science), show that there are no easily identifiable types of formal training or experience that predict reviewer performance.

A total of 306 experienced reviewers completed a survey of past training and experiences postulated to improve peer-review skills. The reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all of which were prospectively rated by editors on a standardized quality scale.

The analysis revealed that neither academic rank, formal training in critical appraisal or statistics, nor status as principal investigator of a grant predicted the performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital rather than another teaching environment, and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictive in only one of the two comparisons. Overall, the predictive power of all variables was weak.

The scientists conclude that no easily identifiable types of formal training or experience predict reviewer performance. Skill in scientific peer review may be as ill defined and as hard to impart as "common sense." Without a better understanding of those skills, journals and editors seem unlikely to succeed in systematically improving their selection of reviewers. This makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews, and thus the quality of the science they publish.

COMPAMED.de; Source: Public Library of Science (PLoS)