Frank Davidoff, MD
Acknowledgment: The author thanks Peter Houts and Laura Leviton for contributing important input to this editorial.
Disclosures: The author has disclosed no conflicts of interest. The form can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M14-1789.
Requests for Single Reprints: Frank Davidoff, MD, 143 Garden Street, Wethersfield, CT 06109.
Davidoff F. Improvement Interventions Are Social Treatments, Not Pills. Ann Intern Med. 2014;161(7):526-527. doi:10.7326/M14-1789
Even the most effective medical treatment (test, drug, or procedure) provides its full benefit only if it is applied to the right patient, in the right way and at the right time, every time. Recognition of the unfortunate reality that effective treatments are often applied inconsistently and inappropriately (1) has spurred development of a “science” of health care improvement. Derived in part from initiatives in industry, this nascent discipline is essentially a systematized version of experiential learning (2), informed by a philosophy that encompasses an understanding of human organizations, the nature of variation, and aspects of human psychology (3). Improvement interventions, like pills, can change clinical outcomes; unlike pills, however, they do so by applying innovative social treatments that change the way health care is organized and delivered, thereby narrowing the clinical “knowledge–performance gap.”
Joanne Lynn, MD
Director, Center for Elder Care and Advanced Illness, Altarum Institute
October 10, 2014
Dr. Davidoff is Right, but Would the Annals Publish the Project Done Right?
Dr. Davidoff's editorial is absolutely correct in calling for framing and statistics suited to improvement activities rather than continuing to pretend that the organization of service delivery can be studied like a drug. If the question is how best we can learn to make transitions from hospital to home safely and with minimal need for short-term rehospitalization, quality improvement approaches are far more likely to succeed than research trials; they teach us much more about how the local system operates and what else might work; and they teach us these things quickly.
However, even with a dramatic effect on readmission rates, patient confidence, and every other good thing you would want to measure, such a project would not be published in Annals or in any other major US medical journal. It would not be funded by NIH and probably not by AHRQ. It would fall entirely outside the requirements of the PCORI Methodology Report. In most academic institutions, success in dramatically reducing readmissions would not count toward promotion and tenure; the focus is only on getting grants and publications.
This is a prescription for those concerned with public well-being and policymakers' needs to come to revile academic medicine, at least with regard to providing guidance for improving health care delivery. Again, if the point is to improve things, then the method is usually management guided by Shewhart statistics. The findings can be published if they are important for others embarking on this journey, especially if they are synthesized across a number of sites so that context and pace of gain can be understood and the gains sustained. And such work should be fundable, and it should count toward promotion and standing in academic medicine.
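To make the reference to Shewhart statistics concrete, here is a minimal sketch of an individuals (XmR) control chart, the standard Shewhart tool for judging whether a change in a measure such as a readmission rate reflects special cause rather than routine variation. The data, the 5-month baseline window, and the function name are all illustrative assumptions, not taken from the letters above; the 2.66 multiplier on the average moving range is the standard XmR constant.

```python
def xmr_limits(values):
    """Return (center, lower, upper) Shewhart limits for an individuals (XmR) chart.

    Limits are the mean +/- 2.66 times the average moving range,
    the conventional constants for an individuals chart.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Illustrative (made-up) monthly 30-day readmission rates (%);
# a transitions-of-care change is introduced after month 5.
rates = [18.2, 17.5, 19.1, 18.8, 17.9, 14.2, 13.8, 14.5]

# Compute limits from the 5-month baseline, then flag post-change
# points falling outside them (signals of special cause variation).
center, lcl, ucl = xmr_limits(rates[:5])
special_cause = [r for r in rates[5:] if r < lcl or r > ucl]
print(center, lcl, ucl, special_cause)
```

Run on these made-up data, the three post-change months fall below the lower limit, which is exactly the kind of quick, local learning signal the letter describes.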
Sarah Szanton, PhD, Bruce Leff, Laura Gitlin
Johns Hopkins University
October 15, 2014
We read Dr. Davidoff's thoughtful editorial (1) with interest. He highlights a critical tension in approaches to evaluating health care improvement efforts: the tension between the rigorous comparison afforded by static-protocol randomized controlled trials (RCTs) and quality improvement (QI) programs that adapt to inevitable contextual challenges during implementation.
The RCT and QI approaches have exactly opposite weaknesses: the RCT cannot adapt, and QI programs cannot conclude that observed outcomes are due to the program under investigation. Our team has an interesting vantage point from which to view this tension, because we are simultaneously conducting two tests of the same care model, CAPABLE. One is an NIH-funded RCT; the other is a quality improvement study funded by the Center for Medicare & Medicaid Innovation (CMMI).
CAPABLE is designed to address unmet needs and functional difficulties experienced by older adults at home that are not addressed by our health care system despite their significant negative consequences. CAPABLE includes $1300 of handyman home repair and modifications and up to 10 home visits by occupational therapists and nurses to enable older adults to carry out the self-care and discretionary activities of greatest importance to them. A previous pilot RCT of CAPABLE found that older adults in the CAPABLE group decreased their number of ADL difficulties by 67% from baseline, while those in the control group, who received home visits without a functional component, cut their ADL difficulties by only 19% (2).
The rapid-cycle evaluation plan of the CMMI-funded test of CAPABLE requires that we examine primary outcomes quarterly, which has led to study protocol changes, including more flexible follow-up windows and enhanced pain management protocols. Yet the key components of an RCT, random assignment and a control condition, provide a stronger claim to causality than our QI test does. Nonetheless, after completion, the intervention models of all RCTs get adapted to care contexts. Although both have strengths and weaknesses, each methodology, QI and RCT, could learn from the other. As statistical methods advance and as we become better able to take advantage of propensity scoring, stepped-wedge designs, and pragmatic trials in RCTs, we may better understand practice contexts and develop adaptive intervention protocols that leverage QI strengths while still supporting causal claims. QI may benefit from more rigorous documentation of protocol adaptations and from the use of single-subject designs. Simultaneously engaging in both approaches may also have important advantages.
1. Davidoff F. Improvement interventions are social treatments, not pills. Ann Intern Med. 2014;161(7):526-527.
2. Szanton SL, Thorpe RJ, Boyd C, et al. Community aging in place, advancing better living for elders: A bio-behavioral-environmental intervention to improve function and health-related quality of life in disabled older adults. J Am Geriatr Soc. 2011;59(12):2314-2320.
Frank Davidoff, MD
November 20, 2014
Many serious improvers share Dr. Lynn's frustration with the constraints that true experimental methods (mainly protocol-driven controlled trials) impose on the study of improvement work. That frustration can be eased somewhat by remembering the many opportunities other than formal research that are available for developing new knowledge about improvement: exchanging and exploring improvement data with fellow improvers, care providers, and patient groups; and reviewing the work with hospital governing boards, accrediting agencies, and funders (1). Improvers and clinicians alike need to take full advantage of the learning and leverage that can come from these less-than-academic pursuits.

The failure in many quarters to recognize the professional and intellectual value of improvement work, noted by Dr. Lynn, is probably an even bigger problem, as is its counterpart: improvers' resistance to using academic tools such as theory in their work. The research and improvement communities have enormous amounts to learn from each other; we are all poorer when either community is slow to accept what the other has to offer. Fortunately, judging from the many undergraduate and graduate programs in improvement appearing rapidly in the US and elsewhere, students, residents, and younger physicians in considerable numbers are beginning to take improvement seriously, as both an academic and a clinical discipline.

Balancing the strengths of one methodology against the limitations of another is a thoroughly rational research strategy that Donald Campbell and Julian Stanley recommended a half century ago in their classic monograph on research design (2). Dr. Szanton and her colleagues have adopted this balanced approach in evaluating their improvement work; their efforts sound promising and very much worth exploring further, although this approach must also demand considerable extra time, effort, and funding.
The power and visibility of controlled trials make them a dominant element in drug development. Those trials are, however, just the last step in a convoluted and uncertain process involving discovery, testing, and scale-up, most of which is hidden from public view. Ironically, this long-drawn-out process, with its mixture of theory and hunches, trial-and-error learning cycles, discovery and adaptation, confusion and reorientation (3), provides a useful model for new and more sophisticated ways to evaluate complex improvement interventions. In these nuanced and eclectic proposals, particular evaluation methods are tailored to the sequential phases of improvement efforts (4,5).

Controlled trials clearly play an important part in the evaluation of improvement work but, as in drug development, those trials are only part of the overall picture: necessary but not sufficient for reaching a deep understanding of the nature and impact of these complex interventions.

REFERENCES
1. Solberg L, Mosser G, McDonald S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv. 1997;23:135-47.
2. Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin Company; 1963.
3. Hurley D. Chemical imbalance. Our high-tech process of pharmaceutical research is broken – and the solution might be old-fashioned trial and error. New York Times Magazine. November 16, 2014:61-73.
4. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.
5. Parry GJ, Carson-Stevens A, Luff DF, McPherson ME, Goldmann DA. Recommendations for evaluation of health care improvement initiatives. Acad Pediatr. 2013;13(6 Suppl):S23-30.