Scientists, publish more houses of brick than mansions of straw: MORE EVIDENCE!!!

Tuesday, May 23, 2017

Publish houses of brick, not mansions of straw

Papers need to include fewer claims and more proof to make the scientific literature more reliable, warns William G. Kaelin Jr.



I worry about sloppiness in biomedical research: too many published results are true only under narrow conditions, or cannot be reproduced at all. The causes are diverse, but what I see as the biggest culprit is hardly discussed. Like the proverbial boiled frog that failed to leap from a slowly warming pot of water, biomedical researchers are stuck in a system in which the amount of data and the number of claims in individual papers have gradually risen over decades. Moreover, the goal of a paper seems to have shifted from validating specific conclusions to making the broadest possible assertions. The danger is that papers are increasingly like grand mansions of straw, rather than sturdy houses of brick.

The papers leading to my 2016 Lasker prize (with Gregg Semenza and Peter Ratcliffe, for discovering how cells sense oxygen) were published more than a decade ago. Most would be considered quaint, preliminary and barely publishable today. One — showing that a tumour-suppressor protein was required for oxygen signalling — would today be criticized for failing to include a clear mechanism and animal experiments (O. Iliopoulos et al. Proc. Natl Acad. Sci. USA 93, 10595–10599; 1996). Another, showing that the protein’s main target underwent an oxygen-dependent modification, was almost rejected because we hadn’t identified the enzyme responsible (M. Ivan et al. Science 292, 464–468; 2001). Fortunately, an experienced editor intervened, arguing that publication would open the search for the enzyme to other groups; such reprieves seem less common today.

What is driving today’s ‘claims inflation’? One factor is the emphasis that funding agencies place on impact and translation. Another is that technological advances have made it easier to generate data, which can be accommodated in online supplements. Both factors encourage reviewers and editors to demand extra experiments that are derivative, tangential to the main conclusion or aimed at increasing impact. And it has always taken more courage to accept a paper than to reject it with suggestions for more experiments. That can create perverse incentives by linking acceptance to a preordained result. I fear that reviewers are especially inclined to ask for more when funding is tight, as it is now.

...

FREE PDF: Nature