Academia and the Profession

Publication Guidelines for Improvement Studies in Health Care: Evolution of the SQUIRE Project

Frank Davidoff, MD; Paul Batalden, MD; David Stevens, MD; Greg Ogrinc, MD, MS; Susan Mooney, MD, MS, SQUIRE Development Group

Members of the SQUIRE Development Group who provided input during the development process and endorsed the SQUIRE guidelines are listed in the Appendix.


From the Institute for Healthcare Improvement, Cambridge, Massachusetts, and the Dartmouth Institute for Health Policy and Clinical Practice Center for Leadership and Improvement, Lebanon, New Hampshire.


Some of the work reported in this article was done at the SQUIRE Advisory Committee Meeting, Cambridge, Massachusetts, 3-5 April 2007.

Note: A slightly different version of this article is being published in Quality and Safety in Health Care, 2008;17(Suppl 1):i3-10, as well as in other journals. This article is therefore not copyrighted and may be freely reproduced and distributed.

Acknowledgment: The authors thank Rosemary Gibson and Laura Leviton for their unflagging support of this project, the Institute for Healthcare Improvement for their gracious help in hosting the review meeting in Cambridge, and Joy McAvoy for her invaluable administrative work in coordinating the entire development process.

Grant Support: The SQUIRE project was supported in part by grant 58073 from the Robert Wood Johnson Foundation.

Potential Financial Conflicts of Interest: None disclosed.

Requests for Single Reprints: Frank Davidoff, MD, 143 Garden Street, Wethersfield, CT 06109; e-mail, fdavidoff@cox.net.

Current Author Addresses: Dr. Davidoff: 143 Garden Street, Wethersfield, CT 06109.

Drs. Batalden and Stevens: 30 Lafayette Street, Lebanon, NH 03766.

Dr. Ogrinc: 215 North Main Street (170), White River Junction, VT 05009.

Dr. Mooney: Alice Peck Day Memorial Hospital, 125 Mascoma Street, Lebanon, NH 03766.


Ann Intern Med. 2008;149(9):670-676. doi:10.7326/0003-4819-149-9-200811040-00009

In 2005, draft guidelines were published for reporting studies of quality improvement as the initial step in a consensus process for development of a more definitive version. The current article contains the revised version, which we refer to as Standards for QUality Improvement Reporting Excellence (SQUIRE). This narrative progress report summarizes the special features of improvement that are reflected in SQUIRE and describes major differences between SQUIRE and the initial draft guidelines. It also explains the development process, which included formulation of responses to informal feedback, written commentaries, and input from publication guideline developers; ongoing review of the literature on the epistemology of improvement and methods for evaluating complex social programs; and a meeting of stakeholders for critical review of the guidelines' content and wording, followed by commentary on sequential versions from an expert consultant group. Finally, the report discusses limitations of and unresolved questions about SQUIRE; ancillary supporting documents and alternative versions under development; and plans for dissemination, testing, and further development of SQUIRE.

A great deal of meaningful and effective work is now done in clinical settings to improve the quality and safety of care. Unfortunately, relatively little of that work is reported in the biomedical literature, and much of what is published could be described more effectively. Failure to publish is potentially a serious barrier to the development of improvement science, because public sharing of concepts, methods, and findings is essential to the progress of all scientific work, both theoretical and applied. To help strengthen the evidence base for improvement in health care, we proposed draft guidelines for reporting planned original studies of improvement interventions in 2005 (1). Our aims were to stimulate the publication of high-caliber improvement studies and to increase the completeness, accuracy, and transparency of published reports of that work.

Our initial draft guidelines were based largely on the authors' personal experience with improvement work and were intended only as an initial step toward creation of recognized publication standards. We have now refined and extended that draft, and present here the resulting revised version, which we refer to as the Standards for QUality Improvement Reporting Excellence, or SQUIRE (Table). In this narrative progress report, we describe the special features of quality improvement that are reflected in SQUIRE and examine the major differences between SQUIRE and the initial draft guidelines. We also outline the consensus process used to develop SQUIRE, including our responses to critical feedback obtained during that process. Finally, we consider the limitations of and questions about SQUIRE; describe ancillary supporting documents and various versions currently under development; and explain plans for dissemination, testing, and further development of the SQUIRE guidelines.

Table. Standards for Quality Improvement Reporting Excellence

Unlike the conceptually neat and procedurally unambiguous interventions that are the objects of study in most clinical research, such as drugs, tests, and procedures that directly affect the biology of disease, improvement is essentially a social process. Improvement is an applied science rather than an academic discipline (2); its immediate purpose is to change human performance rather than to generate new, generalizable knowledge (3), and it is driven primarily by experiential learning (4, 5). Like other social processes, improvement is inherently context-dependent; it is reflexive, meaning that improvement interventions are repeatedly modified in response to outcome feedback, with the result that both the interventions and their outcomes are relatively unstable; and it generally involves complex, multicomponent interventions. Although traditional experimental and quasi-experimental methods are important for learning whether improvement interventions change behavior, they do not provide appropriate and effective methods for addressing the crucial pragmatic (or realist) questions about improvement that derive from its complex social nature: What is it about the mechanism of a particular intervention that works, for whom does it work, and under what circumstances (2, 3, 6)?

Using combinations of methods that answer both the experimental and pragmatic questions is not an easy task, because those 2 contrasting methodologies can sometimes work at cross-purposes. For example, true experimental studies are designed to minimize the confounding effects of context, such as the impact of the heterogeneity of local settings, staff and other study participants, resources, and culture, on measured outcomes. But trying to control context out of improvement interventions is both inappropriate and counterproductive because improvement interventions are inherently and strongly context-dependent (2, 3). Similarly, true experimental studies require strict adherence to study protocols because doing so reduces the impact of many potential confounders. But rigid adherence to initial improvement plans is incompatible with an essential element of improvement, which is continued modification of those plans in response to outcome feedback (reflexiveness). We have attempted to maintain a balance between experimental and pragmatic (or realist) methodologies in the SQUIRE guidelines; both appear to us to be important and necessary, and they are mutually complementary.

The SQUIRE guidelines differ in several important ways from the initial draft guidelines. First, as noted, SQUIRE highlights more explicitly the essential and unique properties of improvement interventions, particularly their social nature, focus on changing performance, context-dependence, complexity, nonlinearity, adaptation, and iterative modification based on outcome feedback (reflexiveness). Second, SQUIRE distinguishes more clearly between improvement practice (planning and implementing improvement interventions) and the evaluation of improvement projects (designing and carrying out studies to assess whether those interventions work, and why they do or do not work). Third, SQUIRE now explicitly specifies elements of study design that make it possible to assess both whether improvement interventions work (by minimizing bias and confounding) and why interventions are or are not effective (by identifying the effects of context and identifying mechanisms of change). And finally, SQUIRE explicitly addresses the often-confusing ethical dimensions of improvement projects and improvement studies (7, 8). Other differences between SQUIRE and the draft guidelines are available at http://www.squire-statement.org.

The SQUIRE development process was designed to produce consensus among a broad constituency of experts and users on both the content and format of guideline items. It proceeded along the following 6 lines. We first obtained informal feedback on the utility, strengths, and limitations of the draft guidelines from potential authors in a series of seminars at national and international meetings, as well as from experienced publication guideline developers at the organizational meeting of the EQUATOR network (9). Second, authors, peer reviewers, and journal editors road-tested the draft guidelines as a working tool for editing and revising submitted manuscripts (10, 11). Third, we solicited and published written commentaries on the initial version of the guidelines (12-16). Fourth, we conducted an ongoing literature review on epistemology, methodology, and the evaluation of complex interventions, particularly in the social sciences. Fifth, in April 2007, we subjected the draft guidelines to intensive analysis, comment, and recommendations for change at a 2-day meeting of 30 stakeholders. Finally, following that meeting, we obtained further critical appraisal of the guidelines through 3 cycles of a Delphi process that involved an international group of more than 50 consultants.

Informal input about the draft guidelines from authors and peer reviewers raised 4 relevant issues: uncertainty about which studies the guidelines apply to; the possibility that their use might force quality improvement reports into a rigid, narrow format; the concern that their slavish application might result in lengthy and unreadable reports that are indiscriminately laden with detail; and difficulty knowing if, when, and how other publication guidelines should be used in conjunction with guidelines for reporting quality improvement studies.

Publications on improvement in health care are emerging in 4 general categories: empirical studies on the effectiveness of quality improvement interventions; stories, theories, and frameworks; literature reviews and syntheses; and the development and testing of improvement-related tools and methods (Rubenstein L, et al. Unpublished data). Our guideline development process has made it clear that the SQUIRE guidelines can and should apply to reports in the first category: original, planned studies of interventions that are designed to improve clinical outcomes by delivering clinically proven care measures more appropriately, effectively, and efficiently.

Publication guidelines are often referred to as checklists because, like other such documents, they serve as aide-mémoires, which have proven increasingly valuable in managing information in complex systems (17). Rigid or mechanical application of checklists can prevent users from making sense of complex information (18, 19). At the same time, however (and paradoxically), checklists, like all constraints and reminders, can serve as important drivers for creativity. The SQUIRE guidelines must therefore always be understood and used as signposts, not shackles (20).

Improvement is a complex undertaking, and its evaluation can produce substantial amounts of qualitative and quantitative information. Adding irrelevant information simply to cover guideline items would be counterproductive; on the other hand, added length that makes reports of improvement studies more complete, coherent, usable, and systematic helps the guidelines meet a principal aim of SQUIRE. Publishing portions of improvement studies in electronic form only can make the content of long articles publicly available while conserving space in print publication.

Most other biomedical publication guidelines are designed to improve the reporting of studies that use specific experimental designs. The SQUIRE guidelines, in contrast, are concerned with the reporting of studies in a defined content area: improvement and safety. These 2 guideline types are therefore complementary, rather than redundant or conflicting. When appropriate, other specific design-related guidelines can and should be used in conjunction with SQUIRE.

The written commentaries provided both supportive and critical input on the draft guidelines (12-16). One suggested that the guidelines' pragmatic focus was an important complement to guidelines for reporting traditional experimental clinical science (12). The guidelines were also seen as a potentially valuable instrument for strengthening the design and conduct of improvement research, resulting in greater synergy with improvement practice (15), and increasing the feasibility of combining improvement studies in systematic reviews. However, other commentaries on the draft guidelines raised concerns: that they were inattentive to racial and ethnic disparities in care (14); that their proposed Introduction, Methods, Results, and Discussion (IMRaD) structure might be incompatible with the reality that improvement interventions are designed to change over time (13); and that their use could result in a dumbing down of improvement science (16). Our responses to these concerns are as follows.

Health Disparities

We do not believe it would be useful, even if it were possible, to address every relevant content issue in a concise set of quality improvement reporting guidelines. We do agree, however, that disparities in care are not considered often enough in improvement work, and that improvement initiatives should address this important issue whenever possible. We have therefore highlighted this issue in the SQUIRE guidelines (Table, item 13.b.i).

The IMRaD Structure

The study protocols traditionally described in the Methods section of clinical trials are rigidly fixed, as required by the dictates of experimental design (21). In contrast, improvement is a reflexive learning process; that is, improvement interventions are most effective when they are modified in response to outcome feedback. On these grounds, it has been suggested that reporting improvement interventions in the IMRaD format logically requires multiple, sequential pairs of Methods and Results sections, one pair for each iteration of the evolving intervention (13). We maintain, however, that the changing, reflexive nature of improvement does not exempt improvement studies from answering the 4 fundamental questions required in all scholarly inquiry: Why did you start? What did you do? What did you find? What does it mean? These same questions define the 4 elements of the IMRaD framework (22, 23). Although some authors and editors might understandably choose to use a modified IMRaD format that involves a series of small, sequential Methods plus Results sections, we believe that approach is often both unnecessary and confusing. We therefore continue to support describing the initial improvement plan, and the theory (mechanism) on which it is based, in a single Methods section. Because the changes in interventions over time and the learning that comes from making those changes are themselves important outcomes in improvement projects, in our view they belong collectively in a single Results section (1).

Dumbing Down Improvement Reports

The declared purpose of all publication guidelines is to improve the completeness and transparency of reporting. Because it is precisely these characteristics of reporting that make it possible to detect weak, sloppy, or poorly designed studies, it is difficult to understand how use of the draft guidelines might lead to a dumbing down of improvement science. The underlying concern here therefore appears to have less to do with transparency than with the inference that the draft guidelines failed to require sufficiently rigorous standards of evidence (16, 21). We recognize that those traditional experimental standards are powerful instruments for protecting the integrity of outcome measurements, largely by minimizing selection bias (21, 24). Although those standards are necessary in improvement studies, they are not sufficient because they fail to take into account the particular epistemology of improvement that derives from its applied purpose and social nature. As noted, the SQUIRE guidelines specify methodologies that are appropriate for both experimental and pragmatic (or realist) evaluation of improvement programs.

With support from the Robert Wood Johnson Foundation, we undertook an intensive critical appraisal of the draft guidelines at a 2-day meeting in April 2007. Thirty participants attended, including clinicians, improvement professionals, epidemiologists, clinical researchers, and journal editors, several from outside the United States. Before the meeting, we sent participants a reading list and a concept paper on the epistemology of improvement. In plenary and small group sessions, participants critically discussed and debated the content and wording of every item in the draft guidelines and recommended changes. They also provided input on plans for dissemination, adoption, and future uses of the guidelines. Working from transcribed audiorecordings of all meeting sessions and flip charts listing the key discussion points, a coordinating group (the authors of this article) then revised, refined, and expanded the draft guidelines.

After the consensus meeting, we circulated sequential revisions of the guidelines for further comment and suggestions in 3 cycles of a Delphi process. The group involved in that process included the meeting participants and roughly 20 additional expert consultants. We then surveyed all participants as to their willingness to endorse the final consensus version (SQUIRE).

The SQUIRE guidelines have been characterized as providing both too little and too much information: too little, because they fail to represent adequately the many unique and nuanced issues in the practice and evaluation of improvement (2-4, 12-16, 21, 24, 25); too much, because the detail and density of the item descriptions might seem intimidating to authors. We recognize that the SQUIRE item descriptions are significantly more detailed than those of some other publication guidelines. In our view, however, the complexity of the improvement process, plus the relative unfamiliarity of improvement interventions and of the methods for evaluating them, justify that level of detail, particularly in light of the diverse backgrounds of people working to improve health care. Moreover, the level of detail in the SQUIRE guidelines is quite similar to that of recently published guidelines for reporting observational studies, which also involve considerable complexities of study design (26). To increase the usability of SQUIRE, we plan to make available a shortened electronic version on the SQUIRE Web site, accompanied by a glossary of terms used in the item descriptions that may be unfamiliar to users.

Authors' interest in using publication guidelines increases when journals make them part of the peer review and editorial process. We therefore encourage the widest possible use of the SQUIRE guidelines by editors. Unfortunately, little is known about the most effective ways to apply publication guidelines in practice. Therefore, editors have been forced to learn from experience how to use other publication guidelines, and the specifics of their use vary widely from journal to journal. We also lack systematic knowledge of how authors can use publication guidelines most productively. Our experience suggests, however, that SQUIRE is most helpful if authors simply keep the general content of the guideline items in mind as they write their initial drafts, then refer to the details of individual items as they critically appraise what they have written during the revision process. The question of how publication guidelines can be used most effectively appears to us to be an empirical one, and therefore we strongly encourage editors and authors to collect, analyze, and report their experiences in using SQUIRE and other publication guidelines.

A SQUIRE explanation and elaboration document is being published elsewhere (27). Like other such documents (28-31), this document provides much of the necessary depth and detail that cannot be included in a set of concise guideline items. It presents the rationale for including each guideline item in SQUIRE, along with published examples of reporting for each item, and commentary on the strengths and weaknesses of those examples.

The SQUIRE Web site (http://www.squire-statement.org) will provide an authentic electronic home for the guidelines and a medium for their progressive refinement. We also intend the site to serve as an interactive electronic community for authors, students, teachers, reviewers, and editors who are interested in the emerging body of scholarly and practical knowledge on improvement.

Although the primary purpose of SQUIRE is to enhance the reporting of improvement studies, we believe the guidelines can also be useful for educational purposes, particularly for understanding and exploring further the epistemology of improvement, and the methodologies for evaluating improvement work. We believe, similarly, that SQUIRE can help in planning and executing improvement interventions, carrying out studies of those interventions, and developing skill in writing about improvement. We encourage these uses, as well as efforts to assess SQUIRE's impact on the completeness and transparency of published improvement studies (32, 33) and to obtain empirical evidence that individual guideline items contribute materially to the value of published information in improvement science.

References

1. Davidoff F, Batalden P. Toward stronger evidence on quality improvement. Draft publication guidelines: the beginning of a consensus project. Qual Saf Health Care. 2005;14:319-25.
2. Walshe K, Freeman T. Effectiveness of quality improvement: learning from evaluations. Qual Saf Health Care. 2002;11:85-7.
3. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review - a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21-34.
4. Batalden P, Davidoff F. Teaching quality improvement: the devil is in the details [Editorial]. JAMA. 2007;298:1059-61.
5. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall; 1984.
6. Pawson R, Tilley N. Realistic Evaluation. Thousand Oaks, CA: Sage Publications; 1997.
7. Jennings B, Baily MA, Bottrell M, Lynn J. Health Care Quality Improvement: Ethical and Regulatory Issues. Garrison, NY: The Hastings Center; 2007.
8. Lynn J, Baily MA, Bottrell M, Jennings B, Levine RJ, Davidoff F, et al. The ethics of using quality improvement methods in health care. Ann Intern Med. 2007;146:666-73.
9. EQUATOR Network. Enhancing the QUality And Transparency Of health Research. Accessed at http://www.equator-network.org on 9 May 2008.
10. Janisse T. A next step: reviewer feedback on quality improvement publication guidelines. Permanente Journal. 2007;11:1.
11. Guidelines for authors: guidelines for submitting more extensive quality research. Quality and Safety in Health Care. Accessed at http://qshc.bmj.com/ifora/article_type.dtl#extensive on 26 September 2008.
12. Berwick DM. Broadening the view of evidence-based medicine. Qual Saf Health Care. 2005;14:315-6.
13. Thomson RG. Consensus publication guidelines: the next step in the science of quality improvement? Qual Saf Health Care. 2005;14:317-8.
14. Chin MH, Chien AT. Reducing racial and ethnic disparities in health care: an integral part of quality improvement scholarship. Qual Saf Health Care. 2006;15:79-80.
15. Baker GR. Strengthening the contribution of quality improvement research to evidence based health care. Qual Saf Health Care. 2006;15:150-1.
16. Pronovost P, Wachter R. Proposed standards for quality improvement research and publication: one step forward and two steps back. Qual Saf Health Care. 2006;15:152-3.
17. Gawande A. The checklist: if something so simple can transform intensive care, what else can it do? New Yorker. 2007:86-101.
18. Rennie D. Reporting randomized controlled trials. An experiment and a call for responses from readers [Editorial]. JAMA. 1995;273:1054-5.
19. Williams JW Jr, Holleman DR Jr, Samsa GP, Simel DL. Randomized controlled trial of 3 vs 10 days of trimethoprim/sulfamethoxazole for acute maxillary sinusitis. JAMA. 1995;273:1015-21.
20. Rutledge A. On Creativity. Accessed at http://www.alistapart/articles/oncreativity on 9 May 2008.
21. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. New York: Houghton-Mifflin; 2002.
22. Day RA. The origins of the scientific paper: the IMRaD format. Journal of the American Medical Writers Association. 1989;4:16-8.
23. Huth EJ. The research paper: general principles for structure and content. In: Writing and Publishing in Medicine. 3rd ed. Philadelphia: Williams & Wilkins; 1999:63-73.
24. Jadad AR, Enkin MW. Randomized Controlled Trials: Questions, Answers, and Musings. 2nd ed. London: Blackwell Publishing/BMJ Books; 2007.
25. Glouberman S, Zimmerman B. Complicated and complex systems: what would successful reform of medicine look like? In: Forest PG, McIntosh T, Marchildon G, eds. Health Care Services and the Process of Change. Toronto: Univ of Toronto Pr; 2004.
26. STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147:573-7.
27. Ogrinc G, Mooney SE, Estrada C, Foster T, Hall LW, Huizinga MM, et al. The SQUIRE (Standards for Quality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008;17(Suppl 1):i13-32.
28. CONSORT Group (Consolidated Standards of Reporting Trials). The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134:663-94.
29. Standards for Reporting of Diagnostic Accuracy. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem. 2003;49:7-18.
30. STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Ann Intern Med. 2007;147:W163-94.
31. Keech A, Gebski V, Pike R. Interpreting and Reporting Clinical Trials: A Guide to the CONSORT Statement and the Principles of Randomized Trials. Sydney: Australian Medical Publishing; 2007.
32. Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263-7.
33. CONSORT Group (Consolidated Standards of Reporting of Trials). Value of flow diagrams in reports of randomized controlled trials. JAMA. 2001;285:1996-9.
 
Appendix
Contributors

The following people contributed critical input to the guidelines during their development: Kay Dickersin, Donald Goldmann, Peter Gøtzsche, Gordon Guyatt, Hal Luft, Kathryn McPherson, Victor Montori, Dale Needham, Duncan Neuhauser, Kaveh Shojania, Vincenza Snow, Ed Wagner, Val Weber.

Endorsement

The following participants in the consensus process also provided critical input on the guidelines and endorsed the final version. Their endorsements are personal and do not imply endorsement by any group, organization, or agency: David Aron, Virginia Barbour, Jesse Berlin, Steven Berman, Donald Berwick, Maureen Bisognano, Andrew Booth, Isabelle Boutron, Peter Buerhaus, Marshall Chin, Benjamin Crabtree, Linda Cronenwett, Mary Dixon-Woods, Brad Doebbling, Denise Dougherty, Martin Eccles, Susan Ellenberg, William Garrity, Lawrence Green, Trisha Greenhalgh, Linda Headrick, Susan Horn, Julie Johnson, Kate Koplan, David Korn, Uma Kotagal, Seth Landefeld, Elizabeth Loder, Joanne Lynn, Susan Mallett, Peter Margolis, Diana Mason, Don Minckler, Brian Mittman, Cynthia Mulrow, Eugene Nelson, Paul Plsek, Peter Pronovost, Lloyd Provost, Philippe Ravaud, Roger Resar, Jane Roessner, John-Arne Røttingen, Lisa Rubenstein, Harold Sox, Ted Speroff, Richard Thomson, Erik von Elm, Elizabeth Wager, Doug Wakefield, Bill Weeks, Hywel Williams, Sankey Williams.

Figures

Tables

Table Jump PlaceholderTable.  Standards for Quality Improvement Reporting Excellence

References

Davidoff F, Batalden P.  Toward stronger evidence on quality improvement. Draft publication guidelines: the beginning of a consensus project. Qual Saf Health Care. 2005; 14:319-25. PubMed
CrossRef
 
Walshe K, Freeman T.  Effectiveness of quality improvement: learning from evaluations. Qual Saf Health Care. 2002; 11:85-7. PubMed
 
Pawson R, Greenhalgh T, Harvey G, Walshe K.  Realist reviewa new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005; 10:Suppl 121-34. PubMed
 
Batalden P, Davidoff F.  Teaching quality improvement: the devil is in the details [Editorial]. JAMA. 2007; 298:1059-61. PubMed
 
Kolb DA.  Experiential Learning. Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall; 1984.
 
Pawson R, Tilley N.  Realistic Evaluation. Thousand Oaks, CA: Sage Publications; 1997.
 
. Jennings B, Baily MA, Bottrell M, Lynn J Health Care Quality Improvement: Ethical and Regulatory Issues. Garrison, NY: The Hastings Center; 2007.
 
Lynn J, Baily MA, Bottrell M, Jennings B, Levine RJ, Davidoff F, .  The ethics of using quality improvement methods in health care. Ann Intern Med. 2007; 146:666-73. PubMed
 
EQUATOR Network.  Enhancing the QUality And Transparency Of health Research. Accessed athttp://www.equator-network.org. on 9 May 2008.
 
Janisse T.  A next step: reviewer feedback on quality improvement publication guidelines. Permanente Journal. 2007; 11:1.
 
Guidelines for authors: guidelines for submitting more extensive quality research. Quality and Safety in Health Care. Accessed athttp://qshc.bmj.com/ifora/article_type.dtl#extensiveon 26 September 2008.
 
Berwick DM.  Broadening the view of evidence-based medicine. Qual Saf Health Care. 2005; 14:315-6. PubMed
 
Thomson RG.  Consensus publication guidelines: the next step in the science of quality improvement? Qual Saf Health Care. 2005; 14:317-8. PubMed
 
Chin MH, Chien AT.  Reducing racial and ethnic disparities in health care: an integral part of quality improvement scholarship. Qual Saf Health Care. 2006; 15:79-80. PubMed
 
Baker GR.  Strengthening the contribution of quality improvement research to evidence based health care. Qual Saf Health Care. 2006; 15:150-1. PubMed
 
Pronovost P, Wachter R.  Proposed standards for quality improvement research and publication: one step forward and two steps back. Qual Saf Health Care. 2006; 15:152-3. PubMed
 
Gawande A.  The checklist: if something so simple can transform intensive care, what else can it do? New Yorker. 2007; 86-101. PubMed
 
Rennie D.  Reporting randomized controlled trials. An experiment and a call for responses from readers [Editorial]. JAMA. 1995; 273:1054-5. PubMed
 
Williams JW Jr, Holleman DR Jr, Samsa GP, Simel DL.  Randomized controlled trial of 3 vs 10 days of trimethoprim/sulfamethoxazole for acute maxillary sinusitis. JAMA. 1995; 273:1015-21. PubMed
 
Rutledge A.  On Creativity. Accessed at http://www.alistapart.com/articles/oncreativity on 9 May 2008.
 
Shadish WR, Cook TD, Campbell DT.  Experimental and Quasi-Experimental Designs for Generalized Causal Inference. New York: Houghton-Mifflin; 2002.
 
Day RA.  The origins of the scientific paper: the IMRaD format. Journal of the American Medical Writers Association. 1989; 4:16-8.
 
Huth EJ.  The research paper: general principles for structure and content.  Writing and Publishing in Medicine. 3rd ed. Philadelphia: Williams & Wilkins; 1999; 63-73.
 
Jadad AR, Enkin MW.  Randomized Controlled Trials. Questions, Answers, and Musings. 2nd ed. London: Blackwell Publishing/BMJ Books; 2007.
 
Glouberman S, Zimmerman B.  Complicated and complex systems: what would successful reform of medicine look like? In: Forest PG, McIntosh T, Marchildon GP, eds. Health Care Services and the Process of Change. Toronto: Univ of Toronto Pr; 2004.
 
STROBE Initiative.  The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007; 147:573-7. PubMed
 
Ogrinc G, Mooney SE, Estrada C, Foster T, Hall LW, Huizinga MM, et al.  The SQUIRE (Standards for Quality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008; 17(Suppl 1):i13-32.
 
CONSORT GROUP (Consolidated Standards of Reporting Trials).  The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001; 134:663-94. PubMed
 
Standards for Reporting of Diagnostic Accuracy.  The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem. 2003; 49:7-18. PubMed
 
STROBE Initiative.  Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Ann Intern Med. 2007; 147:W163-94. PubMed
 
Keech A, Gebski V, Pike R.  Interpreting and Reporting Clinical Trials. A Guide to the CONSORT Statement and the Principles of Randomized Trials. Sydney: Australian Medical Publishing; 2007.
 
Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, .  Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006; 185:263-7. PubMed
 
CONSORT Group (Consolidated Standards of Reporting of Trials).  Value of flow diagrams in reports of randomized controlled trials. JAMA. 2001; 285:1996-9. PubMed
 