Research and Reporting Methods

A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials

Karen A. Robinson, PhD; and Steven N. Goodman, MD, MHS, PhD

From Johns Hopkins School of Medicine and Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland.


Acknowledgment: The authors thank Lisa Wilson, ScM, for her assistance in screening the search results; Ian J. Saldanha, MBBS, MPH, for his help in screening the search results and data abstraction; and Carol Thompson, MS, for her work in developing the scripts to manipulate the data.

Potential Conflicts of Interest: None disclosed. Forms can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M10-1621.

Reproducible Research Statement: Study protocol: Not available. Statistical code: Available from Dr. Robinson (e-mail, krobin@jhmi.edu). Data set: The purchased data set is not available; the list of meta-analyses used for the analysis is available from Dr. Robinson (e-mail, krobin@jhmi.edu).

Requests for Single Reprints: Karen A. Robinson, PhD, Divisions of General Internal Medicine and Health Sciences Informatics, Department of Medicine, Johns Hopkins University, 1830 East Monument Street, Room 8069, Baltimore, MD 21287; e-mail, krobin@jhmi.edu.

Current Author Addresses: Dr. Robinson: Divisions of General Internal Medicine and Health Sciences Informatics, Department of Medicine, Johns Hopkins University, 1830 East Monument Street, Room 8069, Baltimore, MD 21287.

Dr. Goodman: Division of Biostatistics, Suite 1103, Johns Hopkins University, 550 North Broadway, Baltimore, MD 21209.

Author Contributions: Conception and design: K.A. Robinson, S.N. Goodman.

Analysis and interpretation of the data: K.A. Robinson, S.N. Goodman.

Drafting of the article: K.A. Robinson, S.N. Goodman.

Critical revision of the article for important intellectual content: K.A. Robinson, S.N. Goodman.

Final approval of the article: K.A. Robinson, S.N. Goodman.

Statistical expertise: S.N. Goodman.

Administrative, technical, or logistic support: K.A. Robinson.

Collection and assembly of data: K.A. Robinson.


Ann Intern Med. 2011;154(1):50-55. doi:10.7326/0003-4819-154-1-201101040-00007

Background: A randomized, controlled trial (RCT) should not be started or interpreted without accounting for evidence from preceding RCTs addressing the same question. Research has suggested that evidence from prior trials is often not accounted for in reports of subsequent RCTs.

Objective: To assess the extent to which reports of RCTs cite prior trials studying the same interventions.

Design: Meta-analyses published in 2004 that combined 4 or more trials were identified; within each meta-analysis, the extent to which each trial report cited the trials that preceded it by more than 1 year was assessed.

Measurements: The proportion of prior trials that were cited (prior research citation index), the proportion of the total participants from prior trials that were in the cited trials (sample size citation index), and the absolute number of trials cited were calculated.

Results: 227 meta-analyses were identified, comprising 1523 trials published from 1963 to 2004. The median prior research citation index was 0.21 (95% CI, 0.18 to 0.24), meaning that less than one quarter of relevant reports were cited. The median sample size citation index (0.24 [CI, 0.21 to 0.27]) was similar, suggesting that larger trials were not selectively cited. Of the 1101 RCTs that had 5 or more prior trials to cite, 254 (23%) cited no prior RCTs and 257 (23%) cited only 1. The median number of prior cited trials was 2, which did not change as the number of citable trials increased. The mean number of preceding trials cited by trials published after 2000 was 2.4, compared with 1.5 for those published before 2000 (P < 0.001).

Limitation: The investigators could not ascertain why prior trials were not cited, and noncited trials may have been taken into account in the trial design and proposal stages.

Conclusion: In reports of RCTs published over 4 decades, fewer than 25% of preceding trials were cited, comprising fewer than 25% of the participants enrolled in all relevant prior trials. A median of 2 trials was cited, regardless of the number of prior trials that had been conducted. Research is needed to explore the explanations for and consequences of this phenomenon. Potential implications include ethically unjustifiable trials, wasted resources, incorrect conclusions, and unnecessary risks for trial participants.

Primary Funding Source: None.

Few principles are more fundamental to the scientific and ethical validity of clinical research than that studies should address questions needing to be answered and that they be designed in a way that will produce a meaningful answer. A prerequisite for either of these goals is that relevant prior research be properly identified.

Although the importance of properly identifying and synthesizing clinical evidence for the evaluation of treatments has been recognized for almost 3 decades, the importance of doing this before or after a particular randomized, controlled trial (RCT) has been less appreciated. In their audits of discussion sections of trial reports in 5 major medical journals, Clarke and colleagues (1-4) found no apparent attempt to set the results of trials in the context of prior trials in 76% of trial reports in 1997, 90% in 2001, 67% in 2005, and 54% in 2009. In a citation analysis of 66 RCTs of the use of aprotinin to decrease blood loss during surgery, Fergusson and associates (5) found very sparse referencing of prior work, with only 15% of subsequent trials mentioning even the largest trial in the field. Research assessing the inclusion of relevant studies in narrative reviews has also shown that a substantial proportion of relevant prior research is ignored (6-10).

There has been no systematic study of whether published RCTs acknowledge prior related research (2, 4, 11). We therefore conducted such a study, using RCTs included in meta-analyses in all areas of medicine and across all journals.

We identified cohorts of RCTs addressing the same research question by identifying systematic reviews that combined trial results using the mathematical technique of meta-analysis. We then examined the references of every trial included in the meta-analysis to determine how many of the preceding trials were cited.

Identification of Meta-analyses

We searched Web of Science in July 2007 for all systematic reviews published in 2004 of health care questions that included a meta-analysis that combined 4 or more randomized, controlled trials. Our search strategy is shown in the Appendix.

We selected 1 cohort of RCTs comprising a single meta-analysis within the systematic review (that is, the RCTs included in 1 forest plot or 1 summary estimate). When multiple meta-analyses were reported in the same review, the analysis with the largest number of RCTs was selected. A database from Web of Science provided the articles referenced by each systematic review, and the articles referenced by those articles. These data were manipulated and analyzed by using Python (Python Software Foundation, Wolfeboro Falls, New Hampshire) and Stata (StataCorp, College Station, Texas).

Statistical Analysis
Prior Research Citation Index and Sample Size Citation Index

For each RCT, the prior research citation index (PRCI) was calculated as the number of cited RCTs divided by the number of RCTs eligible to cite. The number of RCTs eligible to cite, or number of citable trials, was defined as all RCTs in the cohort that were published more than 1 year before the citing RCT. The 1-year interval was used to account for the time during which a report of the citing RCT may be in the publication process; this is the same period that Fergusson and associates used (5). We repeated the analysis with intervals of 2, 5, and 10 years.

We also calculated the sample size citation index (SSCI) to assess the proportion of prior study participants captured in cited research. This was defined as the number of participants in cited RCTs divided by the number of participants in RCTs eligible to cite.
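To make the definitions concrete, the sketch below computes both indices for a single trial within its meta-analysis cohort. This is an illustrative reconstruction, not the authors' actual analysis scripts (which were written in Python and Stata); the Trial record, its field names, and the year-level handling of the 1-year window are assumptions.

```python
# Illustrative sketch of the PRCI and SSCI calculations; data structures
# and field names are assumed, not taken from the authors' scripts.
from dataclasses import dataclass, field

@dataclass
class Trial:
    trial_id: str
    year: int                 # publication year
    n_participants: int       # enrolled sample size
    cited_ids: set = field(default_factory=set)  # cohort trials it references

def citation_indices(trial, cohort, window=1):
    """Return (PRCI, SSCI) for `trial` within its meta-analysis `cohort`.

    Citable trials are cohort members published more than `window` years
    earlier (1 year by default, as in the paper); publication-year
    granularity is an approximation of that interval.
    """
    citable = [t for t in cohort if trial.year - t.year > window]
    if not citable:
        return None, None     # no prior trials were eligible to cite
    cited = [t for t in citable if t.trial_id in trial.cited_ids]
    prci = len(cited) / len(citable)
    total_n = sum(t.n_participants for t in citable)
    ssci = sum(t.n_participants for t in cited) / total_n if total_n else None
    return prci, ssci
```

Applying this to every trial and taking medians across all trials would yield per-trial values whose summaries correspond to the PRCI and SSCI figures reported in the Results.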

In addition to these indices, we examined the absolute number of cited trials, summarized as means and medians. Because we identified groups of RCTs from within the same meta-analysis, a clustering effect could arise, in that trials following one another in a cohort would tend to cite the same literature. We explored and minimized the effect of clustering within meta-analyses in several ways. First, we stratified our analyses by the number of preceding citable trials. That stratification generally used 1 trial per meta-analysis, because trials within a meta-analysis could share the same number of preceding trials only if they were published around the same time. In addition, the PRCI and SSCI were calculated by using the unweighted average of individual trial measures as well as within each cluster (meta-analysis cohort). However, the latter measure turned out not to be useful because the PRCI depended on cluster size, for reasons that are explained in the Results section. Confidence intervals were calculated by using bootstrap techniques (12, 13).
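The bootstrap confidence intervals can be illustrated with a simple percentile bootstrap of the median, sketched below. The cited methods (12, 13) concern bootstrapped quantile estimation, so the authors' exact procedure (implemented in Stata) may differ in detail; the parameters here are illustrative.

```python
# Percentile-bootstrap CI for a median, as one plausible reading of the
# bootstrap techniques cited in the text; n_boot, alpha, seed are illustrative.
import random
from statistics import median

def bootstrap_median_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Resample with replacement, then take the alpha/2 and 1 - alpha/2
    percentiles of the bootstrap medians as the CI bounds."""
    rng = random.Random(seed)
    boot = sorted(median(rng.choices(values, k=len(values)))
                  for _ in range(n_boot))
    lo = boot[int(n_boot * alpha / 2)]
    hi = boot[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```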

Qualitative Analysis of RCTs

We assessed the discussion and introduction sections of a selection of RCTs as Clarke and colleagues did (1-4). We identified the RCTs with the highest and lowest PRCIs. Fifteen of the 377 RCTs in the lowest PRCI quintile and 15 of the 381 RCTs in the highest quintile were randomly selected, and their introduction and discussion sections were reviewed. We assessed whether the article claimed to report the first trial on the subject; whether it referenced a systematic review; whether methods were described to identify prior trials; and whether there was any attempt to incorporate the prior results, quantitatively or qualitatively, with the data from that study.
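The sampling step can be expressed in a few lines. The sketch below is hypothetical (the input format and seed are assumptions) and simply draws 15 reports from each extreme PRCI quintile.

```python
# Hypothetical sketch of the quintile sampling described above; expects
# (trial_id, prci) pairs for all trials with a computable PRCI.
import random

def sample_extreme_quintiles(prci_by_trial, k=15, seed=0):
    ranked = sorted(prci_by_trial, key=lambda pair: pair[1])
    fifth = len(ranked) // 5                  # quintile size (ties ignored)
    lowest, highest = ranked[:fifth], ranked[-fifth:]
    rng = random.Random(seed)
    return rng.sample(lowest, k), rng.sample(highest, k)
```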

Role of the Funding Source

No funding was received for this study.

We retrieved 655 citations in our search for meta-analyses. A total of 257 systematic reviews were eligible after abstract and full-text screening. From each of the reviews, we chose 1 meta-analysis with at least 4 RCTs, a group comprising 3256 selected RCTs. When we excluded meta-analyses with RCTs that were not found in Web of Science, 227 meta-analyses remained, comprising 1523 RCTs.

For these 1523 RCTs, the number of citable prior RCTs ranged from 3 to 58. The mean number of citable RCTs was 9.7, of which an average of 1.9 were cited (median, 2; lower and upper deciles, 0 and 4). Of the 1101 RCTs that had 5 or more prior trials to cite (72% of the total), 257 (23%) cited 1 prior trial and 254 (23%) cited none. The proportion of reports of RCTs that cited 0 or 1 prior relevant trial (46%) remained essentially constant as the number of citable trials increased (Table 1).

Table 1. Number of Reports That Cited 0 or 1 Prior Relevant Trial

Figure 1 shows the distribution of the number of cited trials according to the number of citable trials. No gradient was observed; the median number of cited preceding trials varied from 1 to 2, with no apparent pattern. This analysis, which minimized any clustering effect, also showed that there was little effect of clustering: Later trials in a meta-analysis on average cited no more preceding research than did earlier trials. Therefore, for ease of interpretation, subsequent summaries are based on per-trial averages.

Figure 1. Number of trials cited as a function of the number of citable trials.

Boxes span the interquartile range; the white line in each box is the median. Whiskers indicate the upper and lower adjacent values. The circles above the whiskers are individual outliers. The black horizontal line is the median number of trials cited.

The surprising constancy of the number of cited trials with increasing number of citable trials meant that the 2 proportional citation measures—the PRCI and SSCI—must be interpreted with care. Across the 1523 RCTs, the median PRCI was 0.21 (95% CI, 0.18 to 0.24; lower and upper deciles, 0 and 0.67). As expected, the PRCI decreased as the number of citable trials increased because the absolute number cited was constant (Figure 2). Thus, the PRCI and SSCI of any collection of trials depend in part on the average number of citable trials within that group.

Figure 2. Proportion of trials cited, by number of citable trials.

Allowing for 2 years between the publication of an RCT and its citation in subsequent reports of RCTs did not change the PRCI. However, RCTs cited a lower proportion of older trials than more recent ones. The median PRCI for prior trials published 5 or more years earlier was 0.11 (747 trials; mean PRCI, 0.24), and at 10 years, the proportion of prior trials cited was even lower (311 trials; median PRCI, 0.04; mean PRCI, 0.17). Conversely, we examined whether more recent trials were citing more preceding research than older trials: that is, whether citation practices were improving. A small improvement (P < 0.001) was seen in trials published after 2000 (mean, 2.4 trials cited; median, 2) compared with those published earlier (mean, 1.5; median, 1) (Table 2). RCTs in pediatrics cited the most prior trials (mean, 2.76), whereas RCTs in immunology cited the fewest (mean, 1.44) (Appendix Table).

Table 2. Number of Trials Cited, by Publication Year of Citing Trial
Appendix Table. PRCI, by Discipline

We assessed the proportion of participants in cited trials in 1261 RCTs with sample size data. The median SSCI was 0.24 (CI, 0.21 to 0.27; lower and upper deciles, 0 and 0.83); this value is almost the same as the median PRCI (0.21), meaning that larger trials were not selectively cited and information from 76% of participants enrolled in prior RCTs was not acknowledged.

Finally, we did a qualitative assessment of a randomly selected group of 30 RCTs (Table 3). The 15 selected reports of trials in the lowest quintile of PRCI had a mean of 13 prior trials to cite (range, 3 to 41 trials), and none of these reports cited any of the prior trials in their cohort (PRCI, 0). The 15 trials in the highest quintile had a mean of 5 prior studies to cite (range, 3 to 9 trials), with a mean of 3.4 trials cited. None of the 30 reports described a search method for prior trials. Four of the 15 reports in the lowest PRCI quintile claimed to be the first trial assessing that research question. Two of these 4 specifically stated that there were no prior RCTs on the topic, when in fact one had 4 and the other had 8. The other 2 trials, in which 6 and 9 prior trials were uncited, suggested that their trial population was unique. In the highest quintile of PRCI, 1 of the 15 reports claimed to be the first trial (Table 3).

Table 3. Summary of Results From Qualitative Review of Selected Randomized, Controlled Trials

Only 1 of the 30 reports of trials cited an existing systematic review; this report also cited the most prior research. We found no reports that formally incorporated the results of prior trials.

To our knowledge, this study is the first attempt to systematically assess the citation of prior research in reports of RCTs across the full range of health disciplines and over 4 decades. We found that a median of only 21% of prior research was cited in reports of RCTs that had 3 or more relevant RCTs to reference. Our findings are similar to the proportion of eligible prior trials cited in Fergusson and associates' cumulative meta-analysis (25%) (5) and in a study by Schmidt and Gøtzsche (21%) (9). Among RCTs that had 5 or more prior studies to cite, 46% cited none or only 1, and this proportion did not diminish as the potential number of prior trials to cite increased. Remarkably, we found that the median absolute number of trials cited did not vary as the number of citable trials increased. Hence, the more RCT evidence that existed, the larger the share of it that investigators on subsequent trials ignored.

In many domains of research, citation accords intellectual credit and perhaps priority of discovery (14), but multiple experiments or claims do not constitute arithmetically more evidential support for a hypothesis. However, every RCT contributes to the cumulative evidence for an intervention's effect, so each omission contributes to an understatement of that evidence. Accurate representation of the prior cumulative evidence is necessary both to ethically justify a trial and to make proper inferences. Studying prior research also can lead to designs more likely to fill evidence gaps. Although the presence of a trial citation does not tell us how information from that trial was used, the absence of a citation almost guarantees that it was not.

The findings here show not just that the prior evidence is understated but also that it is barely acknowledged. Our finding of very limited citation is particularly concerning because it indicates that evidence is missing and because selection of the few trials that are cited is likely to have been biased (7, 8, 14-16).

There are several possible explanations for the extreme discrepancy between the evidence as perceived by systematic reviewers and the evidence acknowledged by trial investigators. One is that trial investigators and meta-analysts see the evidential landscape differently from one another. Most trials do not exactly replicate a previous study; typically, the new study is designed somewhat differently from previous ones, making it almost unique from the investigator's perspective. In contrast, meta-analysts might be more tolerant of modest qualitative dissimilarities among trials, believing that studies of similar interventions across different populations and settings are mutually informative. However, this explanation does not account for the constant average number of citations as the number of citable trials increased.

The possibility that journal space limitations are causing this lack of citation seems unlikely. We find it implausible that authors are being forced to limit themselves to 2 or fewer of their most critical citations by page or reference list limitations.

We found that a smaller proportion of older articles was cited than those published more recently, suggesting that evidence from older trials tends to be neglected, which has not been reported previously (16, 17). We also found that the few studies that were cited were not the largest ones. This is consistent with prior research on citation bias showing that strength of evidence, whether qualitative or quantitative, did not predict citation (16, 18-23).

When we examined the introduction and discussion sections in the reports of 30 trials in a similar manner to Clarke and colleagues (1-4), we found that reports were not citing systematic reviews instead of individual trials, nor were they integrating their findings with the trials they did cite, and several claimed to be the first trial even when many trials preceded them.

There are many incentives for the patterns seen here. Foremost is the need to claim that a given RCT is the first to address a particular question. If enough particulars about the design are included, every RCT can legitimately claim to be “the first.” This powerful incentive affects both funders and journals. Institutional review boards have neither the capacity nor the charge to second-guess a researcher's claim that a new RCT is needed. The obvious remedy—requiring a systematic review of relevant literature—is hampered by a lack of necessary skills and resources, the perceived delay it would impose, and the lack of awareness that a problem exists. Even when prior systematic reviews exist, they are often not used (24).

Our methodology has limitations. First, we used meta-analyses to define the citable set of RCTs. We accepted the systematic reviewers' judgment that the trials were similar enough to justify mathematical pooling, a fairly high bar for similarity compared with the much lower bar appropriate for citation. When conclusions of systematic reviews vary, it is usually because of differing trial selection criteria (25-27). Thus, the numbers of eligible trials used here could vary, but this variation is small in contrast to the large gaps seen and would not explain the near constancy of the number of studies cited regardless of the number eligible.

In addition, the pool of meta-analyses was limited to 1 year (2004), but the data set included RCTs from many decades. The publication years for the citing RCTs ranged from 1963 to 2004, with most published between 1994 and 2004. A small upward time trend in citation was seen in these data, but we know of no external forces that would have induced citation practices to change substantively since 2004. Repetition of this study with a more recent cohort of meta-analyses is needed to explore current citation practices, although the bulk of the cited RCTs would still extend over the same years that we examined.

Finally, studies not cited in a published RCT may have been cited in a funding proposal or institutional review board application, but we believe that evidence left uncited in the publication played no role in the published inference or in claims about the incremental value of the study. This study does not tell us whether inferences from the published RCTs would have changed materially if the preceding cumulative evidence had been examined, or whether the nonciting studies were indeed not justified in light of prior evidence; we can only say that it is impossible for readers to know.

Cumulative meta-analyses have shown repeatedly that randomized experimentation often proceeds beyond the point where key questions have been answered, or where study designs should have been altered (5, 28, 29). Our study confirms that these lessons probably have not been fully absorbed when it comes to the justification or interpretation of a particular RCT.

There are currently no barriers to funding, conducting, or publishing an RCT without proof that the prior literature has been adequately searched and evaluated. Chalmers recently described this situation as “an ongoing scandal in which research funders, academia, researchers, research ethics committees and scientific journals are all complicit” (4), and there have been many calls for this to change (30-34). Requirements to find prior evidence when designing or reporting RCTs have been instituted by some European funding agencies (31); The Lancet (32, 33); and the Centers for Medicare & Medicaid Services, which require that a covered trial not “unjustifiably duplicate existing studies” (34). The CONSORT (Consolidated Standards of Reporting Trials) guidelines require discussion of results in light of preceding trials or a systematic review (35) but do not require a structured search for such trials. Such a policy could be enforced by the International Committee of Medical Journal Editors, just as trial registration has been required (36). Referencing or posting a systematic review could be required as part of the trial registration process. At a minimum, the search strategy used to find prior studies should be described. In the aprotinin example (5), in which few of the more than 60 trials cited one another, a simple PubMed search that includes the terms “aprotinin” and “randomized controlled trial” is sufficient to retrieve every RCT studied by Fergusson and associates.
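Such a search is also easy to automate. As a minimal sketch, the query below runs the aprotinin search against PubMed through NCBI's public E-utilities interface; the retmax value is arbitrary, and the exact term syntax is an assumption rather than the query Fergusson and associates used.

```python
# Minimal sketch: run the aprotinin search against PubMed via NCBI's
# E-utilities esearch endpoint and list the matching PMIDs.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": "aprotinin AND randomized controlled trial",
    "retmode": "json",
    "retmax": 200,          # arbitrary cap for this illustration
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print("hits:", result["count"])
print("first PMIDs:", result["idlist"][:5])
```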

An incomplete picture of preexisting evidence violates the implicit ethical contract with research participants that the information they provide is necessary and will be useful to others. We found that across many health care disciplines and questions, less than 25% of prior RCT research was cited, representing only about 25% of the participants in earlier trials; moreover, the percentage of ignored RCTs increased as the number of those RCTs grew, and the proportion of trials citing no prior evidence stayed constant as the evidence accumulated. Funders, institutional review boards, and journals could take several judicious steps that would promote better use of prior research and thereby better satisfy the ethical and scientific requirements for justifiable clinical research.

 
Appendix: Search Strategy for Meta-analyses

To identify meta-analyses, we searched Web of Science in July 2007 by using the following strategy:

  1. TS=meta-analy* OR TS=metanaly*, DocTypes=review OR article, 2004

  2. TS=random* OR TS=RCT*

  3. #1 AND #2

Search results were downloaded into a reference management software package (ProCite, Thomson Scientific, Stamford, Connecticut) and screened for eligibility by 2 independent reviewers using title and abstract. Citations were excluded from further consideration if they were not a systematic review or meta-analysis, did not report a meta-analysis, were not an original report (overviews, commentaries, and editorials were excluded), did not include RCTs or were not a review of RCTs, did not include humans, did not address a health care question, or did not include a specific meta-analysis that combined 4 or more RCTs.

We defined “systematic reviews” as reviews with clear and explicit methods, including details about the search for relevant trials. We included meta-analyses only if the analysis was completed as part of a systematic review (that is, we excluded analyses that pooled studies from 1 center or 1 study group only). Disagreements were resolved by consensus. Full articles were retrieved for all citations deemed eligible and for articles for which it was unclear from the abstract whether a meta-analysis was completed or if it was uncertain if RCTs were included. The same criteria were applied during the screening of the full-text articles to determine eligibility.


References

1. Clarke M, Alderson P, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals. JAMA. 2002;287:2799-801.
2. Clarke M, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents? JAMA. 1998;280:280-2.
3. Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report. J R Soc Med. 2007;100:187-90.
4. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting [Letter]. Lancet. 2010;376:20-1.
5. Fergusson D, Glass KC, Hutton B, Shapiro S. Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding? Clin Trials. 2005;2:218-29.
6. Song F, Landes DP, Glenny AM, Sheldon TA. Prophylactic removal of impacted third molars: an assessment of published reviews. Br Dent J. 1997;182:339-46.
7. Joyce J, Rabe-Hesketh S, Wessely S. Reviewing the reviews: the example of chronic fatigue syndrome. JAMA. 1998;280:264-6.
8. Ravnskov U. Quotation bias in reviews of the diet-heart idea. J Clin Epidemiol. 1995;48:713-9.
9. Schmidt LM, Gøtzsche PC. Of mites and men: reference bias in narrative review articles: a systematic review. J Fam Pract. 2005;54:334-8.
10. Hutchison BG, Oxman AD, Lloyd S. Comprehensiveness and bias in reporting clinical trials. Study of reviews of pneumococcal vaccine effectiveness. Can Fam Physician. 1995;41:1356-60.
11. Garfield E. Demand citation vigilance. The Scientist. 2002;16:6.
12. Gould W. Quantile regression with bootstrapped standard errors. Stata Technical Bulletin. 1992;9:19-21.
13. Koenker R, Bassett G. Regression quantiles. Econometrica. 1978;46:33-50.
14. Merton RK. The Matthew effect in science, II: cumulative advantage and the symbolism of intellectual property. Isis. 1988;79:606-23.
15. Gøtzsche PC. Reference bias in reports of drug trials. Br Med J (Clin Res Ed). 1987;295:654-6.
16. Kulkarni AV, Busse JW, Shams I. Characteristics associated with citation rate of the medical literature. PLoS One. 2007;2:e403.
17. West R, McIlwaine A. What do citation counts count for in the field of addiction? An empirical evaluation of citation counts and their link with peer ratings of quality. Addiction. 2002;97:501-4.
18. Ravnskov U. Cholesterol lowering trials in coronary heart disease: frequency of citation and outcome. BMJ. 1992;305:15-9.
19. Bhandari M, Busse J, Devereaux PJ, Montori VM, Swiontkowski M, Tornetta P III, et al. Factors associated with citation rates in the orthopedic literature. Can J Surg. 2007;50:119-23.
20. Lokker C, McKibbon KA, McKinlay RJ, Wilczynski NL, Haynes RB. Prediction of citation counts for clinical articles at two years using data available within three weeks of publication: retrospective cohort study. BMJ. 2008;336:655-7.
21. Hyett M, Parker G. Can the highly cited psychiatric paper be predicted early? Aust N Z J Psychiatry. 2009;43:173-6.
22. Nieminen P, Carpenter J, Rucker G, Schumacher M. The relationship between quality of research and citation frequency. BMC Med Res Methodol. 2006;6:42.
23. Callaham M, Wears RL, Weber E. Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals. JAMA. 2002;287:2847-50.
24. Cooper NJ, Jones DR, Sutton AJ. The use of systematic reviews when designing studies. Clin Trials. 2005;2:260-4.
25. Chou R, Baisden J, Carragee EJ, Resnick DK, Shaffer WO, Loeser JD. Surgery for low back pain: a review of the evidence for an American Pain Society Clinical Practice Guideline. Spine (Phila Pa 1976). 2009;34:1094-109.
26. Hayden JA, Chou R, Hogg-Johnson S, Bombardier C. Systematic reviews of low back pain prognosis had variable methods and results: guidance for future prognosis reviews. J Clin Epidemiol. 2009;62.
27. Chou R, Huffman LH; American Pain Society. Medications for acute and chronic low back pain: a review of the evidence for an American Pain Society/American College of Physicians clinical practice guideline. Ann Intern Med. 2007;147:505-14.
28. Baum ML, Anish DS, Chalmers TC, Sacks HS, Smith H Jr, Fagerstrom RM. A survey of clinical trials of antibiotic prophylaxis in colon surgery: evidence against further use of no-treatment controls. N Engl J Med. 1981;305:795-9.
29. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA. 1992;268:240-8.
30. Savulescu J, Chalmers I, Blunt J. Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability. BMJ. 1996;313:1390-3.
31. Boissel JP, Chalmers I, Flather MD, Franzosi FG, Victor N. European Science Foundation Policy Briefing: Controlled Clinical Trials (Project Number SPB No 13). May 2001. Accessed at www.esf.org/research-areas/medical-sciences/activities/science-policy/controlled-clinical-trials.html on 15 January 2008.
32. Young C, Horton R. Putting clinical trials into context. Lancet. 2005;366:107-8.
33. Clark S, Horton R. Putting research into context—revisited. Lancet. 2010;376:10-1.
34. Centers for Medicare & Medicaid Services. NCD for routine costs in clinical trials (301.1). 9 July 2007. Accessed at www.cms.hhs.gov/mcd on 1 September 2007.
35. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D; CONSORT Group (Consolidated Standards of Reporting Trials). The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134:663-94.
36. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals: writing and editing for biomedical publication. Accessed at www.icmje.org/ on 10 January 2008.
 


Comments

Your Informationist is "In"
Posted on July 29, 2011
Blair Anton
Welch Medical Library, Johns Hopkins University School of Medicine, Baltimore, MD 21205
Conflict of Interest: None Declared

This article is beneficial to informationists, health science librarians who work within the context of research and clinical teams, in our pursuit to assist healthcare providers, scientists, and scholars in advancing their clinical practice and research efforts [1]. With the proliferation of medical information come two time-consuming tasks: searching for all relevant literature from numerous sources, and critically appraising studies as the basis for conclusions, new ideas, and support for hypotheses. Informationists facilitate this process by participating in the production of systematic reviews of the literature, which requires the rigorous examination and summation of studies that answer specific clinical questions.

The opportunities to search for information occur at different stages in the research process, from the designing of a trial protocol to the writing of proposals for research grants. Underlying requests to satisfy medical information needs are these familiar questions: "Has anyone done this before, and if so, what was done and how?" and "How can I be sure I haven't missed anything?" Information professionals excel in facilitating the formulation of questions, efficiently identifying appropriate resources, and transforming users' questions into competent, effective search queries to find relevant information from electronic databases. Informationists can also provide reasonable certainty that the information sought does not exist. How information is used, or not, relies heavily on how thoroughly the first two tasks in the process are accomplished.

While the authors assert that more research is needed to explore the reasons why prior trials go uncited [1], informationists also see this study as bolstering the case for their continued role in multidisciplinary, collaborative clinical and research teams, and for their participation in scholarly practice endeavors. We are eager to join in these efforts and to offer our time, skills, and expertise. It seems we have been here before, not long ago, when we collectively wondered whether medical librarians could have helped save a life [2, 3].

Blair Anton, MLIS, MS Welch Medical Library, Johns Hopkins University School of Medicine, Baltimore, MD 21205

Nancy K. Roderer, MLS Welch Medical Library, Johns Hopkins University School of Medicine, Baltimore, MD 21205

References:

1. Robinson KA, Goodman SN. A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials. Ann Intern Med. 2011 Jan;154(1):50-55.

2. Holst R. Expert Searching. J Med Libr Assoc. 2005 Jan;93(1):41.

3. Savulescu J, Spriggs M. The Hexamethonium Asthma Study and the Death of a Normal Volunteer in Research. J Med Ethics. 2002 Feb;28(1):3-4.

