"It's a plea to not get ahead of the evidence," said Dr. Christopher Cannon, a professor of medicine at Harvard Medical School in Boston and one of a group of cardiology journal editors publishing a statement this week.
There's a difference between observational studies - which simply watch subjects over a period of time - and randomized controlled clinical trials, which typically randomly assign patients to an active ingredient or placebo ("dummy pill"), often without subjects knowing what they're taking.
That difference is obvious, said Cannon, when looking at the history of hormone replacement therapy. Such compounds were used by millions of women based on preliminary promising observational evidence before the results of a key randomized study were published in 2002 showing that the compounds had risks.
And history keeps repeating, said Cannon, editor in chief of Critical Pathways in Cardiology, with the same story for what people hoped were the protective effects of vitamin E and the B vitamin folate on the heart.
When Cannon discussed drafts of the statement with colleagues, a number said, 'well, sure, we already knew that.' But Cannon points out that "every single week there are papers that overstate the findings in this way."
"It's one of those things that you need to remind yourself of all the time," he said, because there are far more observational studies than the much more expensive randomized controlled trials.
Cannon is quick to point out that observational studies have value. No one has ever set up a randomized trial to show whether smoking causes lung cancer, for example, because it would be unethical. The evidence from observational studies on the subject, combined with animal studies, leaves no doubt about cigarettes and tumors.
The trick, according to the authors of the statement, is to use the right language. "Reduced the risk by" would be appropriate for a randomized study, while "A lower risk was observed" or "there is an association" would fit the results of an observational report.
In a related paper that will appear soon in the journal Clinical Cardiology, Cannon and a colleague give a light-hearted example: If "the number of storks in Scotland and the number of babies born in Scotland both increased by 10% from 2010 to 2011 (an observational study), one could not conclude that the storks 'resulted in' the increased number of babies, but instead it is more appropriate to conclude that the number of storks was 'correlated with' or 'associated with' an increase in the number of babies born."
Needless to say, this was a fictional study.
But this is a serious matter, said Gary Schwitzer, the publisher of HealthNewsReview.org, which critiques reporting on health and medicine. The language journals use, said Schwitzer, can affect the way journalists cover these studies.
"It leaves many in the general public feeling as if they're watching a scientific and editorial ping-pong game," Schwitzer told Reuters Health. "One day, we're on this end, where coffee is protective against diabetes, and then the next week, we're at the other end of the spectrum, and coffee consumption raises the risk of stroke."
Schwitzer welcomed the journal editors' efforts. Research and journalism - two intertwined industries, he said - "had better be concerned about credibility and public perceptions of what's going on in the dissemination of information."
"We've got a public that is dying for well explained, balanced, navigable information that includes caveats and context," he said. "As somebody who looks at this every day, we've dug a very big hole that moves like this will help us start to crawl out of."
SOURCE: http://bit.ly/QJMD5J European Heart Journal (and others), online December 1, 2012.