In my health and science journalism class at Mount Holyoke yesterday, we were talking about the questions journalists need to ask in deciding whether and how to report on a new medical finding. Two key questions emerged: 1. did the results come from a randomized clinical trial -- the gold standard of medical research -- or from an observational study, which follows people over time but has no randomized control group? And 2. did the study actually show cause and effect, or just an association?
As it happened, the day's news furnished me with the perfect case to illustrate the importance of these two questions. A page-one story in The Boston Globe, headlined "Aspirin may combat cancer, study suggests," trumpeted the results of a study published in the Journal of Clinical Oncology. The study in question was not a randomized clinical trial; that is, it did not randomly assign one group to take the active drug (in this case, aspirin) and a control group to take a dummy pill, or placebo, and then compare what happened to each. It was an observational study of more than 4,000 nurses (in the Nurses' Health Study) who were diagnosed with breast cancer between 1976 and 2002, and it compared what happened to the nurses who regularly took aspirin versus those who did not. What the study concluded was that aspirin use was associated with a decreased risk of death from breast cancer.
Associated is the key word here. The study did not find a cause-and-effect relationship, i.e., that taking aspirin actually reduced the risk of death from breast cancer. It found only an association between the two (aspirin use and a decreased risk of death). As Gary Taubes so eloquently points out in his New York Times Magazine article, there can be any number of other explanations for such an association. There is, for example, something called the healthy volunteer effect: the nurses who took aspirin in this particular study could simply be healthier and more concerned with staying healthy than the nurses who didn't, which might explain why they had a lower risk of dying from breast cancer. Until a randomized clinical trial is done, we won't know for sure whether it was the aspirin that kept more of the nurses alive or some other, as yet unidentified, factor.
As Taubes notes, long-term prospective studies like the Nurses' Health Study were among the first to show what looked to be an association between hormone replacement therapy and a reduced risk of heart disease and cancer (an association, by the way, heavily promoted by Wyeth and other companies that sold such therapies). And we all know where that led us: to the routine prescribing of estrogen/progestin pills for millions of menopausal and postmenopausal women, a practice that significantly increased the risk of breast cancer for many of these women. That cause-and-effect relationship -- and the fact that hormone replacement therapy did not even protect against heart disease -- was discovered only years later, when the federal government finally got around to funding a randomized clinical trial. In the meantime, thousands of women taking replacement therapies developed breast cancer, and many died as a result.
Now I'm not saying that taking aspirin could produce a similarly devastating effect. Not at all. I take aspirin regularly myself to ward off bad headaches, and I always keep a bottle handy. What I am saying is that the media need to do a better job of explaining to their readers the difference between studies that find an association and studies that establish actual cause and effect. The Boston Globe article referenced above did not do a very good job of parsing this important difference and, as a result, did a serious disservice to its readers.