Steven Woloshin, MD, MS; Lisa M. Schwartz, MD, MS; Samuel L. Casella, MPH; Abigail T. Kennedy, BA; Robin J. Larson, MD, MPH
Note: Drs. Woloshin and Schwartz contributed equally to the creation of this manuscript. The order of their names is arbitrary.
Disclaimer: The views expressed herein do not necessarily represent the views of the Department of Veterans Affairs or the U.S. government.
Acknowledgment: The authors thank Renda Wiener, MD, MPH, for helpful comments on an earlier draft; the members of the Veterans Affairs Outcomes Group for their ongoing helpful feedback during all phases of the project; Deborah Kimbell, Andrew Nordhoff, and Jane D'Antonio for their feedback on the media relations interview script; and Jennifer Snide, MPH, and Katie Van Veen, BA, for technical assistance.
Grant Support: By the National Cancer Institute (grant R01 CA104721). Drs. Woloshin and Schwartz were also supported by Robert Wood Johnson Generalist Faculty Scholars Awards.
Potential Financial Conflicts of Interest: Honoraria: S. Woloshin (National Institutes of Health, for “Medicine in the Media”), L.M. Schwartz (National Institutes of Health, for “Medicine in the Media”).
Reproducible Research Statement: Study protocol: Available from Dr. Schwartz (e-mail, firstname.lastname@example.org). Statistical code and data set: Not available.
Requests for Single Reprints: Lisa M. Schwartz, MD, MS, Veterans Affairs Outcomes Group (111B), Department of Veterans Affairs, Veterans Affairs Medical Center, White River Junction, VT 05009; e-mail, email@example.com.
Current Author Addresses: Drs. Woloshin, Schwartz, and Larson: Veterans Affairs Outcomes Group (111B), Department of Veterans Affairs, Veterans Affairs Medical Center, White River Junction, VT 05009.
Ms. Kennedy: Dartmouth Medical School, 1 Medical Center Drive, Lebanon, NH 03756.
Mr. Casella: The Dartmouth Institute for Health Policy and Clinical Practice, Center for Education, 30 Lafayette Street, Lebanon, NH 03766.
Author Contributions: Conception and design: S. Woloshin, L.M. Schwartz, S.L. Casella.
Analysis and interpretation of the data: S. Woloshin, L.M. Schwartz, S.L. Casella, R.J. Larson.
Drafting of the article: S. Woloshin, L.M. Schwartz, S.L. Casella.
Critical revision of the article for important intellectual content: S. Woloshin, L.M. Schwartz, A.T. Kennedy.
Final approval of the article: S. Woloshin, L.M. Schwartz, A.T. Kennedy, R.J. Larson.
Statistical expertise: S. Woloshin, L.M. Schwartz.
Obtaining of funding: S. Woloshin, L.M. Schwartz.
Administrative, technical, or logistic support: L.M. Schwartz.
Collection and assembly of data: S. Woloshin, S.L. Casella, A.T. Kennedy, R.J. Larson.
Woloshin S, Schwartz LM, Casella SL, Kennedy AT, Larson RJ. Press Releases by Academic Medical Centers: Not So Academic?. Ann Intern Med. 2009;150:613-618. doi: 10.7326/0003-4819-150-9-200905050-00007
Background: The news media are often criticized for exaggerated coverage of weak science. Press releases, a source of information for many journalists, might be a source of those exaggerations.
Objective: To characterize research press releases from academic medical centers.
Setting: Press releases from 10 medical centers at each extreme of U.S. News & World Report's rankings for medical research.
Measurements: Press release quality.
Results: Academic medical centers issued a mean of 49 press releases annually. Among 200 randomly selected releases analyzed in detail, 87 (44%) promoted animal or laboratory research, of which 64 (74%) explicitly claimed relevance to human health. Among 95 releases about primary human research, 22 (23%) omitted study size and 32 (34%) failed to quantify results. Among all 113 releases about human research, few (17%) promoted studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on the most limited human studies (those with uncontrolled interventions, small samples [<30 participants], surrogate primary outcomes, or unpublished data), yet 58% lacked the relevant cautions.
Limitation: The effects of press release quality on media coverage were not directly assessed.
Conclusion: Press releases from academic medical centers often promote research that has uncertain relevance to human health and do not provide key facts or acknowledge important limitations.
Primary Funding Source: National Cancer Institute.
Context: News reports often exaggerate the importance of medical research.
Contribution: The researchers reviewed press releases issued by academic medical centers. They found that many press releases overstated the importance of study findings while underemphasizing cautions that limited the findings' clinical relevance.
Caution: The researchers did not attempt to see how the press releases influenced actual news stories.
Implication: Academic center press releases often promote research with uncertain clinical relevance without emphasizing important cautions or limitations.
Medical journalism is often criticized for what reporters cover (for example, preliminary work) and how they cover it (for example, turning modest findings into miracles) (1–4). Critics often place blame squarely on the media, pointing out that few journalists are trained to critically read medical research or suggesting that sensationalism is deliberate: Whereas scientists want to promote the truth, the media just want to sell newspapers.
But exaggeration may begin with the journalists' sources. Researchers and their funders, and even medical journals, often court media attention through press releases. The strategy works: Press releases increase the chance of getting media coverage (5, 6) and shape subsequent reporting (7). An independent medical news rating organization found that more than one third of U.S. health news stories seemed to rely solely or largely on press releases (1).
Academic medical centers produce large volumes of research and attract press coverage through press releases. Because these centers set the standard for research and education in U.S. medicine, one might assume that their press releases are measured and unexaggerated. To test this assumption, we examined press releases from academic medical centers in a systematic manner.
We selected the 10 highest-ranked and 10 lowest-ranked of the academic medical centers covered in U.S. News & World Report's medical school research rankings (8) that issued at least 10 releases in 2005. In addition, we identified each medical school's affiliates by using an Association of American Medical Colleges database. The Appendix Table lists the centers and their affiliated press offices. We initially intended to compare press releases by research ranking, but because we found few differences, we report findings across the entire study sample, highlighting the few differences by rank where they exist.
During 2006, a former medical school press officer conducted semistructured, 15-minute telephone interviews with “the person in charge” of media relations at the 20 centers. The interview script (Appendix) covered release policy (how is research chosen?), production (writing, review, the researcher's role), and an overall assessment (perceived pressure to generate media coverage, and any praise or backlash received).
We searched EurekAlert! (a press release database) for all “medical and health” releases issued by the 20 centers and their affiliates in 2005. The Figure summarizes the search results.
Figure footnote: *Of the medical schools that issued at least 10 press releases in 2005.
After excluding duplicate or nonresearch releases (such as those announcing grants), we determined study focus (animal or human) and publication status; if the study was published, we characterized the journal's academic prominence by using the Thomson Scientific Journal Citation Reports “impact factor.”
We randomly selected 200 press releases (10 per center) and assessed presentation of study facts, cautions, and presence of exaggeration by using separate coding schemes for human and nonhuman studies (the Appendix includes both schemes). The schemes included 32 unique items (10 for human studies only, 4 for nonhuman studies only, and 18 common to both). Sixteen items involved simply extracting facts from the release (for example, study size); the other 16 items required subjective judgments (for example, were there cautions about confounding for observational studies?). To confirm key study details (such as population, design, and size), we obtained the research reports (journal article or meeting abstract) referenced in the releases.
Two research assistants who were blinded to the study's purpose independently coded releases. To measure reliability, the coders and investigators reviewed each code's definition and then reread the release to confirm (or change) their code. Errors due to definition or data entry problems were corrected before agreement was calculated. Intercoder agreement was “nearly perfect” (9) for both sets of items: for factual items, κ was 1.0 (range, 0.98 to 1.0), and for subjective items, κ was 0.97 (range, 0.79 to 1.0). Disagreements were resolved by 4 of the investigators. We used STATA, version 10 (StataCorp, College Station, Texas) for all analyses.
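The intercoder reliability reported above uses Cohen's κ, which corrects observed agreement for the agreement two coders would reach by chance alone. A minimal sketch of the calculation, using hypothetical coder ratings (not the study's data):

```python
# Cohen's kappa for two coders: observed agreement corrected for the
# agreement expected by chance, given each coder's marginal category rates.
# The ratings below are hypothetical, for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two coders gave the same code
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each coder's marginal rate, per category
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Two coders independently judging 10 releases for presence of cautions
coder1 = ["yes", "no", "yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
coder2 = ["yes", "no", "yes", "no", "no", "no", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(coder1, coder2), 2))  # → 0.8
```

Here the coders agree on 9 of 10 items (0.9 observed), but chance agreement is 0.5 given their marginal rates, so κ = (0.9 − 0.5)/(1 − 0.5) = 0.8, within the "substantial" band of the Landis and Koch scale cited in the text.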
The project was funded by the National Cancer Institute and the Robert Wood Johnson Generalist Faculty Scholars Program. Neither source had any role in study design, conduct, or analysis or in the decision to seek publication.
All centers said that investigators routinely request press releases and are regularly involved in editing and approving them (Table 1). Only 2 centers routinely involve independent reviewers. On average, centers employed 5 press release writers (the highest-ranked centers had more writers than lower-ranked centers [mean, 6.6 vs. 3.7]). Three centers said that they trained writers in research methods and results presentation, but most expected writers to already have these skills and hone them on the job. All 20 centers said that media coverage is an important measure of their success, and most report the number of “media hits” garnered to the administration.
Table 1 shows that the centers issued 989 medical research-related releases in 2005. The centers averaged 49 releases per year; the range was 13 (Brown Medical School) to 186 (Johns Hopkins University School of Medicine). Twelve percent of the releases promoted unpublished research from scientific meetings. Higher-ranked centers issued more releases than lower-ranked centers (743 vs. 246) and were less likely to promote unpublished research (9% vs. 20%).
Table 2 summarizes the measures of press release quality.
Of the 95 releases about primary human research (excluding unstructured reviews and decision models), 77% provided study size and most (66%) quantified the main finding in some way; 47% used at least 1 absolute number, the most transparent way to represent results (10, 11). Few releases (12%) provided access to the full scientific report.
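The transparency of absolute numbers comes from how they anchor a relative change: a "50% reduction" sounds large whether the baseline risk is 40% or 2%. A small sketch with hypothetical trial figures (not drawn from the study) shows the three ways the same result can be reported:

```python
# Why absolute numbers are more transparent than relative changes.
# The risks below are hypothetical, for illustration only.
control_risk = 0.02   # 2% of untreated participants had the outcome
treated_risk = 0.01   # 1% of treated participants had the outcome

relative_risk_reduction = (control_risk - treated_risk) / control_risk
absolute_risk_reduction = control_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative reduction: {relative_risk_reduction:.0%}")  # → 50%
print(f"Absolute reduction: {absolute_risk_reduction:.1%}")  # → 1.0%
print(f"Number needed to treat: {number_needed_to_treat:.0f}")  # → 100
```

The headline-friendly "50% reduction" and the absolute "1 in 100 benefit" describe the same result; only the absolute framing lets a reader judge whether the effect matters.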
Two thirds of the 200 randomly selected releases reported study funding sources; 4% noted conflicts of interest (either that none [3 releases] or some existed [4 releases]).
Of all 113 releases about human studies, 17% promoted published studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on inherently limited studies (for example, sample size <30, uncontrolled interventions, primary surrogate outcomes, or unpublished meeting reports). Fewer than half (42%) provided any relevant caveats. For example, a release titled “Lung-sparing treatment for cancer proving effective” (which concluded that treatment was “a safe and effective way to treat early stage lung cancer in medically inoperable patients”) lacked cautions about this uncontrolled study of 70 patients.
Among the 87 releases about animal or laboratory studies, most (64 of 87) explicitly claimed relevance to human health, yet 90% lacked caveats about extrapolating results to people. For example, a release about a study of ultrasonography reducing tumors in mice, titled “Researchers study the use of ultrasound for treatment of cancer,” claimed (without caveats) that “in the future, treatments with ultrasound either alone or with chemotherapeutic and antivascular agents could be used to treat cancers.”
Twenty-nine percent of releases (58 of 200) were rated as exaggerating the finding's importance. Exaggeration was found more often in releases about animal studies than human studies (41% vs. 18%).
Almost all releases (195 of 200) included investigator quotes, 26% of which were judged to overstate research importance. For example, a release for a study of mice with skin cancer, titled “Scientists inhibit cancer gene. Potential therapy for up to 30 percent of human tumors,” quoted the investigator as saying that “the implication is that a drug therapy could be developed to reduce tumors caused by Ras without significant side effects.” Coders thought that the “implication” exaggerated the study findings, because neither treatment efficacy nor tolerability in humans was assessed.
Although 24% (47 of 200) of releases used the word “significant,” only 1 clearly distinguished statistical from clinical significance. All other cases were ambiguous, creating an opportunity for overinterpretation: for example, “Not-for-profit hospitals consistently had significantly higher scores than for-profit hospitals.”
Press releases issued by 20 academic medical centers frequently promoted preliminary research or inherently limited human studies without providing basic details or cautions needed to judge the meaning, relevance, or validity of the science. Our findings are consistent with those of other analyses of pharmaceutical industry (12) and medical journal (13) press releases, which also revealed a tendency to overstate the importance and downplay (or ignore) the limitations of research.
Our study has several limitations. First, content analysis coding always involves subjectivity. The high level of agreement observed among coders, however, is reassuring. Second, our findings are based on 20 centers, which may raise concern about generalizability. Because only the top 52 (of 125) centers receive a U.S. News & World Report ranking, we believe that our findings represent a best-case scenario. Third, our coding scheme was not exhaustive. We focused on study details and cautions we thought were of highest priority because press releases are typically 1 page or fewer.
Most important, because we did not analyze subsequent news coverage of press-released research, we cannot directly link problems with press releases (such as lack of cautions or numbers, or exaggeration) with those in news reports. Our findings would be stronger if we had shown integration of exaggerated information from releases into news stories. Nevertheless, press releases matter: They attract journalists' attention, and although the practice is generally discouraged, many health news stories—perhaps as many as one third—seem to rely largely or solely on the press release (1). Journalists worry that such reliance will increase, given newsroom cutbacks and greater demand for online news (7). The problems that we document are very similar to those seen in analyses of medical news: no quantification of main results (14–16) and no mention of intervention side effects (14–16), conflicts of interest (16), or study limitations (1–4). We believe that academic centers contribute to poor media coverage and are forgoing an opportunity to help journalists do better.
The quickest strategy for improvement would be for centers to issue fewer releases about preliminary research, especially unpublished scientific meeting presentations, because findings often change substantially—or fail to hold up—as studies mature (17). Forty percent of meeting abstracts and 25% of abstracts that garner media attention (18) are never subsequently published as full reports in medical journals (19). Similarly, centers should limit releases about animal or laboratory research. Although such research is important, institutions should not imply clinical benefit when it does not exist (and may not for years, if ever): Two thirds of even highly cited animal studies fail to translate into successful human treatments (20).
When press releases are issued, they should include basic study facts and explicit cautions. For example, press releases should remind journalists that strong inferences cannot be drawn from uncontrolled studies, or that surrogate outcomes do not always translate into clinical outcomes. Although good press releases will probably help, quality reporting also requires good critical evaluation skills. Fortunately, journalists have opportunities to acquire these skills, through such programs as the Association of Health Care Journalists seminars; the Knight Science Journalism Medical Evidence Boot Camp at MIT; and “Medicine in the Media: The Challenge of Reporting on Medical Research,” a workshop sponsored by the National Institutes of Health, the Dartmouth Institute for Health Policy and Clinical Practice, and the Department of Veterans Affairs.
Investigators can also do better. They could forgo requesting releases for studies with obvious limitations and review releases before dissemination, taking care to temper their tone (particularly their own quotes, which we often found overly enthusiastic).
By issuing fewer but better press releases, academic centers could help reduce the chance that journalists and the public are misled about the importance or implications of medical research. Centers might get less press coverage, but they would better serve their mission: to improve the health of their communities and the larger society in which they reside.