James A. Tulsky, MD; Robert M. Arnold, MD; Stewart C. Alexander, PhD; Maren K. Olsen, PhD; Amy S. Jeffreys, MStat; Keri L. Rodriguez, PhD; Celette Sugg Skinner, PhD; David Farrell, MPH; Amy P. Abernethy, MD; Kathryn I. Pollak, PhD
Acknowledgment: The authors thank the participating oncologists and their patients for allowing us to observe their most private encounters.
Grant Support: By grant R01 CA100387 from the National Cancer Institute.
Potential Conflicts of Interest: Disclosures can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M11-0475.
Reproducible Research Statement: Study protocol: Available at http://clinicaltrials.gov/ct2/show/study/NCT00276627?term=NCT00276627 and www.cancer.gov/clinicaltrials/search/view?cdrid=452788&version=healthprofessional. Statistical code: Available from Dr. Olsen (e-mail, firstname.lastname@example.org). Data set: Not available.
Requests for Single Reprints: James A. Tulsky, MD, Center for Palliative Care, Duke University, Hock Plaza, Suite 1105, 2424 Erwin Road, Box 2720, Durham, NC 27705-3860; e-mail, email@example.com.
Current Author Addresses: Drs. Tulsky and Alexander: Center for Palliative Care, Duke University, Hock Plaza, Suite 1105, 2424 Erwin Road, Box 2720, Durham, NC 27705-3860.
Dr. Arnold: Section of Palliative Care and Medical Ethics, Division of General Internal Medicine, University of Pittsburgh Medical Center, 200 Lothrop Street, MUH, Suite W-919, Pittsburgh, PA 15213-2582.
Dr. Olsen and Ms. Jeffreys: Center for Health Services Research in Primary Care, Veterans Affairs Medical Center (152), 508 Fulton Street, Durham, NC 27705.
Dr. Rodriguez: Center for Health Equity Research and Promotion, Veterans Affairs Pittsburgh Healthcare System (646), 7180 Highland Drive (151C-H), Pittsburgh, PA 15206.
Dr. Sugg Skinner: University of Texas Southwestern Medical Center at Dallas, 5323 Harry Hines Boulevard, Dallas, TX 75390.
Mr. Farrell: People Designs, 1304 Broad Street, Durham, NC 27705.
Dr. Abernethy: Duke Cancer Care Research Program, 25165 Morris Building, Box 3436, Duke University Medical Center, Durham, NC 27710.
Dr. Pollak: Division of Cancer Control and Prevention, Duke Cancer Institute, Hock Plaza, Suite 602, 2424 Erwin Road, Durham, NC 27705.
Author Contributions: Conception and design: J.A. Tulsky, R.M. Arnold, S.C. Alexander, M.K. Olsen, C. Sugg Skinner, D. Farrell, A.P. Abernethy, K.I. Pollak.
Analysis and interpretation of the data: J.A. Tulsky, R.M. Arnold, S.C. Alexander, M.K. Olsen, A.S. Jeffreys, K.L. Rodriguez, A.P. Abernethy.
Drafting of the article: J.A. Tulsky, R.M. Arnold, S.C. Alexander, M.K. Olsen, A.S. Jeffreys, K.L. Rodriguez, K.I. Pollak.
Critical revision of the article for important intellectual content: J.A. Tulsky, M.K. Olsen, A.S. Jeffreys, C. Sugg Skinner, D. Farrell, A.P. Abernethy, K.I. Pollak.
Final approval of the article: J.A. Tulsky, S.C. Alexander, M.K. Olsen, A.S. Jeffreys, K.L. Rodriguez, D. Farrell, A.P. Abernethy, K.I. Pollak.
Provision of study materials or patients: J.A. Tulsky, K.L. Rodriguez, A.P. Abernethy.
Statistical expertise: M.K. Olsen, A.S. Jeffreys.
Obtaining of funding: J.A. Tulsky, M.K. Olsen, A.P. Abernethy, K.I. Pollak.
Administrative, technical, or logistic support: J.A. Tulsky, R.M. Arnold, A.S. Jeffreys, A.P. Abernethy, K.I. Pollak.
Collection and assembly of data: J.A. Tulsky, R.M. Arnold, K.L. Rodriguez, C. Sugg Skinner, D. Farrell.
Tulsky JA, Arnold RM, Alexander SC, Olsen MK, Jeffreys AS, Rodriguez KL, Skinner CS, Farrell D, Abernethy AP, Pollak KI. Enhancing Communication Between Oncologists and Patients With a Computer-Based Training Program: A Randomized Trial. Ann Intern Med. 2011;155(9):593-601. doi:10.7326/0003-4819-155-9-201111010-00007
Background: Quality cancer care requires addressing patients' emotions, which oncologists infrequently do. Multiday courses can teach oncologists skills to handle emotion; however, such workshops are long and costly.
Objective: To test whether a brief, computerized intervention improves oncologist responses to patient expressions of negative emotion.
Design: Randomized, controlled, parallel-group trial stratified by site, sex, and oncologic specialty. Oncologists were randomly assigned to receive a communication lecture or the lecture plus a tailored CD-ROM. (ClinicalTrials.gov registration number: NCT00276627)
Setting: Oncology clinics at a comprehensive cancer center and Veterans Affairs Medical Center in Durham, North Carolina, and a comprehensive cancer center in Pittsburgh, Pennsylvania.
Participants: 48 medical, gynecologic, and radiation oncologists and 264 patients with advanced cancer.
Intervention: Oncologists were randomly assigned in a 1:1 ratio to receive an interactive CD-ROM about responding to patients' negative emotions. The CD-ROM included tailored feedback on the oncologists' own recorded conversations.
Measurements: Postintervention audio recordings were used to identify the number of empathic statements and responses to patients' expressions of negative emotion. Surveys evaluated patients' trust in their oncologists and perceptions of their oncologists' communication skills.
Results: Oncologists in the intervention group used more empathic statements (relative risk, 1.9 [95% CI, 1.1 to 3.3]; P = 0.024) and were more likely to respond to negative emotions empathically (odds ratio, 2.1 [CI, 1.1 to 4.2]; P = 0.028) than control oncologists. Patients of intervention oncologists reported greater trust in their oncologists than did patients of control oncologists (estimated mean difference, 0.10 [CI, 0.007 to 0.19]; P = 0.036). There was no significant difference in perceptions of communication skills.
Limitations: Long-term effects were not examined. The findings may not be generalizable outside of academic medical centers.
Conclusion: A brief computerized intervention improves how oncologists respond to patients' expressions of negative emotions.
Primary Funding Source: National Cancer Institute.
Context: Patients with cancer need more help with emotional concerns. Training courses for physicians can help them respond to the emotional concerns of patients with cancer, but they are time-intensive and costly.
Contribution: The researchers incorporated physician–patient discussions into an interactive CD-ROM specific for each physician. Physicians spent an average of 1 hour with the material. One week later, patients had more trust in their doctors; 1 month later, physicians were still using more empathic statements when talking with patients and were more likely to respond appropriately to patients' negative comments.
Caution: This study was limited to academic settings and did not investigate long-term effects.
Implication: A brief intervention can help physicians manage the emotional concerns of patients with cancer.
Patients with advanced cancer have considerable distress (1-3). In addition to experiencing physical symptoms, they are frequently depressed (4), struggle with altered social roles, and live with anxiety (5). Oncologists can manage patient distress by recognizing and empathizing with patient concerns, which can lead to increases in satisfaction, adherence to treatment, and quality of life (6-7).
Oncologists frequently miss opportunities to respond to patient emotion and may instead exhibit behaviors that block feelings and create emotional distance (8-9). Courses for trainees and attending clinicians have recently been developed to address these shortcomings (10-14). These multiday courses incorporate skills training, small-group practice with simulated patients, observation, and feedback (15). Such courses are effective (16) but are often time- and cost-prohibitive. Improving oncologists' skills requires alternative educational venues that are easily accessible, not disruptive to clinical practice, inexpensive, and brief.
We designed a computerized, interactive, tailored intervention that meets these requirements. It enables oncologists to review their own audio-recorded encounters and provides suggestions on how to respond better to patients' negative emotions. We report the results of a randomized, controlled trial that tested the effectiveness of this intervention in improving oncologist behavior. A secondary objective was to evaluate the effect of the intervention on patients' perceptions of their oncologists.
The SCOPE (Studying Communication in Oncologist–Patient Encounters) Trial was a single-blind, randomized, controlled, parallel-group study performed at Duke University and Durham Veterans Affairs Medical Center, Durham, North Carolina, and University of Pittsburgh, Pittsburgh, Pennsylvania. First, we audio recorded clinic visits between participating oncologists and their patients with advanced cancer. These recordings were collected to serve as examples of communication behaviors that could be used to provide feedback in the subsequent intervention.
Once all baseline visits were recorded, the oncologists were stratified by site (Durham or Pittsburgh), sex (men or women), and specialty (medical oncology, solid and liquid tumors; medical oncology, solid tumors only; malignant hematology, liquid tumors only; gynecologic oncology; or radiation oncology) and randomly assigned in a 1:1 ratio, by using the minimization method (17), to receive either a standard lecture or the lecture plus the intervention CD-ROM. The statistician conducted the randomization and revealed the oncologists' randomization status only to the project coordinator and principal investigators.
The lecture was delivered to all of the oncologists soon after all preintervention audio recordings were collected. The intervention CD-ROM was developed and tailored over the next year; oncologists assigned to the intervention group then received the CD-ROM. After these oncologists had 1 month to review the CD-ROM, their visits with a different set of patients were audio recorded. At approximately the same time, oncologists assigned to the control group also had a new set of visits with different patients recorded.
One week after these visits were recorded, we surveyed patients about their trust in their oncologists and the quality of communication. Consistent with a wait-list control design, after all data were collected the oncologists in the control group received copies of the intervention CD-ROM that were not tailored.
Forty-eight medical, gynecologic, and radiation oncologists (26 from Durham and 22 from Pittsburgh) recorded at least 4 clinic visits before randomization and qualified to be randomly assigned to the intervention or control group.
Our goal was to identify patients with sufficiently advanced disease to increase the probability that they would express negative emotions. We asked oncologists or their midlevel provider staff to identify patients whom they “would not be surprised if they died or were admitted to the intensive care unit (ICU) within one year” (18-20). Patients did not receive this information.
Eligible patients were also required to speak English, receive primary oncologic care at the study site, and have access to a telephone. We sent patients recruitment letters that were signed by their oncologists, and we met them before their recorded visit to obtain consent and conduct a baseline survey. The institutional review board of each participating institution approved this protocol.
All of the oncologists viewed a 1-hour lecture on communication skills delivered by one of the investigators. In addition, oncologists in the intervention group received a CD-ROM training program on communication skills that was tailored with exemplars from their own audio-recorded clinic visits. This intervention is described in detail elsewhere (21) and was designed to enhance oncologists' ability to respond effectively to patients' emotional concerns.
The intervention was based on social cognitive theory (22) and a barriers model proposed by Cabana and colleagues (23). It had 5 modules: principles of effective communication, recognizing empathic opportunities, responding to empathic opportunities, conveying prognosis, and answering difficult questions. A final module summarized main points from the intervention.
Each module was designed to be viewed in 10 to 15 minutes and followed a similar approach: The skill was introduced; 1 or 2 video clips demonstrated the skill; important teaching points were summarized; and users had the opportunity to review selected excerpts from their own recorded conversations, together with tailored feedback, to hear how they had communicated regarding that skill. Intervention oncologists also received complete audio recordings of each of their recorded patient visits.
Oncologists in the control group received no training beyond the 1-hour lecture. All participating oncologists received $100 after attending the lecture and were offered an additional $25 in gift certificates upon completion of audio recording their visits. Oncologists in the intervention group also received a pair of high-quality headphones.
Before recording their first encounter, oncologists were asked to self-report their sex, age, race, oncologic specialty, years practicing oncology, hours per week spent in patient care, and whether they perceived themselves as more oriented toward the socioemotional or technical aspects of medicine (24).
Before their clinic visit, patients were asked to self-report their sex, age, race, marital status, education, and economic security (that is, how much money is available after paying bills). Patients were blinded to their oncologists' study group.
Our primary outcome measures from the recordings were empathic statements and responses to empathic opportunities, both of which we referred to as emotion-handling skills. Empathic statements were defined as any 1 of 5 behaviors organized under the acronym NURSE: name, understand, respect, support, and explore (25-26). Empathic opportunities were defined according to Suchman and colleagues' (27) model of empathic communication in medical interviews as patients' expressions of negative emotions.
We coded oncologist responses to empathic opportunities as continuers, which comprised NURSE statements or “I wish” (28) statements, or terminators defined as changing the topic, joking, denying the emotion, or ending the conversation. Two independent, blinded coders were trained over 6 weeks. Both raters coded 15% of the audio recordings. Interrater reliability levels were high for the presence of empathic opportunities (κ statistic = 0.77) and responses (κ statistic = 0.74).
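For reference, the κ statistic reported above is Cohen's kappa, which corrects raw agreement between the two coders for agreement expected by chance. A minimal sketch follows; the rating lists are invented for illustration and are not the study's coding data.

```python
# Minimal sketch of Cohen's kappa, the interrater reliability statistic
# reported for the double-coded 15% sample of recordings.
# The two rating lists below are invented, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# 20 hypothetical "empathic opportunity present?" judgments per rater
a = ["yes"] * 8 + ["no"] * 2 + ["yes", "no", "yes", "no"] + ["no"] * 6
b = ["yes"] * 8 + ["no"] * 2 + ["no", "yes", "no", "yes"] + ["no"] * 6
print(round(cohens_kappa(a, b), 2))  # 0.6
```

Here the raters agree on 16 of 20 items (80%), but half that agreement is expected by chance, so kappa is 0.6.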
One week after the visit, patients completed the measures of trust, perceived empathy, therapeutic alliance, and perceived knowledge of the patient by a telephone survey.
Patients were asked 11 items to assess their trust in their oncologists (Cronbach α level = 0.80) (29). A sample item reads, “‘If my oncologist tells me something is so, then it must be true,’ (1 = Disagree to 5 = Agree).” Responses were averaged to create a trust score.
Patients were asked 10 Likert scale items to assess perceived oncologist empathy (Cronbach α level = 0.95) (30). A sample item reads, “‘How was your oncologist at fully understanding your concerns?’ (1 = Not at all good to 5 = Extremely good).” The responses were averaged to create a perceived empathy score. In addition, we wrote 2 items that asked, “Compared to other doctors you've seen, how much did this oncologist show that he/she ... ‘cared about you’?” and “Compared to other doctors you've seen, how much did this oncologist show that he/she ... ‘understood you as a whole person’?” Potential responses were “(1 = Not at all to 5 = Extremely).”
Patients were asked 5 questions about their therapeutic alliance with their physician (Cronbach α level = 0.78) (31). A sample item reads, “‘I can easily talk about personal things with this doctor’ (1 = Disagree to 5 = Agree).” The mean value of the 5 responses was converted to a scale with a potential range of 0 to 100.
Patients were asked 4 questions to assess how well their oncologists knew them. A sample question reads, “‘How well would you rate ... [your] oncologist's knowledge of what worries you most about your health?' (1 = Very poor to 6 = Excellent)” (32). The responses were averaged to create a composite perceived knowledge score.
Estimation of the sample size was based on the hypothesis that oncologists in the intervention group would have a greater number of empathic responses than those in the control group. Because conversations with multiple patients were recorded for each oncologist, we incorporated a medium within-oncologist correlation coefficient of 0.3 into the calculation. Sample size and power estimates were generated by using the GEESIZE macro, version 9.1 (SAS Institute, Cary, North Carolina) (33). To detect a rate ratio of 1.5 with 90% power and a type I error rate of 5%, 200 patients (100 in each study group) were needed; for example, 24 oncologists per group, each contributing 4 to 5 patients.
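The role of the within-oncologist correlation in this calculation can be illustrated with the standard design-effect formula for clustered observations. This is only a rough sketch: the trial's exact numbers came from the GEESIZE macro, and the 100-patient independent-sample figure below is purely illustrative.

```python
# Hedged sketch of the design-effect shortcut for clustered sample sizes.
# The within-oncologist correlation (0.3) and 4-5 patients per oncologist
# come from the text; the 100-patient base figure is illustrative only.
def design_effect(icc: float, cluster_size: float) -> float:
    """Variance inflation from correlated observations within a cluster."""
    return 1 + (cluster_size - 1) * icc

def clustered_n(n_independent: int, icc: float, cluster_size: float) -> float:
    """Sample size after inflating an independent-data estimate."""
    return n_independent * design_effect(icc, cluster_size)

# With rho = 0.3 and ~4.5 patients per oncologist, clustering roughly
# doubles the variance of each estimate.
print(round(design_effect(0.3, 4.5), 2))   # 2.05
print(round(clustered_n(100, 0.3, 4.5)))   # 205
```

The intuition: conversations from the same oncologist carry overlapping information, so each one contributes less than a fully independent observation would.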
A mixed-effect Poisson regression model was used to estimate the rate ratio of empathic statements per patient–physician conversation for the intervention group versus the control group (34). Predictors in the regression model included the intervention group, site, oncologists' sex, and oncologists' mean number of NURSE statements per conversation before the intervention. The unit of analysis was the conversation, so a random effect was included to account for the correlation of multiple conversations for each oncologist.
The other primary outcome variable was whether oncologists responded to an empathic opportunity with a continuer rather than a terminator. The analysis was limited to conversations that included at least 1 empathic opportunity. A logistic mixed-effect regression model estimated the probability of an oncologist in the intervention group using a continuer compared with an oncologist in the control group doing so (34).
Predictors in the model included the intervention group, site, and oncologists' sex. A single random effect was included to account for the correlation of multiple conversations for each oncologist. The effect of clustering to account for multiple empathic opportunities within 1 conversation was considered but not supported by the data (55% of the conversations included in the analysis had only 1 empathic opportunity).
Secondary outcomes included patients' perceptions of trust, empathy, therapeutic alliance, and knowledge. All measures were treated as continuous variables, except for the 2 single-question empathy items. Linear mixed-effect models were used to estimate the mean difference in perceptions between patients seen by intervention oncologists versus those seen by control oncologists. Models included intervention group, site, oncologists' sex, and a physician-level random effect.
The 2 single-question empathy items in which patients answered whether their oncologists “cared about [them]” and “understood [them] as a whole person” were dichotomized into a score of 5 (“[e]xtremely”) and less than 5 and were modeled by using logistic mixed-effect models. Again, models included intervention group, site, oncologists' sex, and a physician-level random effect. As a sensitivity analysis, the variables from the postvisit survey were multiply imputed with Markov chain Monte Carlo methods by using PROC MI software (SAS Institute). The imputed data were then analyzed with the same mixed-effect models described above, and the results were combined by using PROC MIANALYZE software (SAS Institute).
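The combining step performed by PROC MIANALYZE follows Rubin's rules: one estimate and one variance per imputed data set are pooled into a single estimate whose total variance reflects both within- and between-imputation uncertainty. A minimal sketch follows; the five estimate and variance pairs are invented for illustration, not study results.

```python
# Minimal sketch of Rubin's rules, the pooling step that PROC MIANALYZE
# applies after multiple imputation. The (estimate, variance) pairs below
# are invented for illustration.
from statistics import mean, variance

def pool(estimates, variances):
    """Rubin's rules: pooled point estimate and total variance."""
    m = len(estimates)
    q_bar = mean(estimates)        # pooled estimate
    w = mean(variances)            # within-imputation variance
    b = variance(estimates)        # between-imputation variance
    total = w + (1 + 1 / m) * b    # total variance with finite-m correction
    return q_bar, total

est = [0.10, 0.12, 0.09, 0.11, 0.13]   # one estimate per imputed data set
var = [0.002, 0.002, 0.003, 0.002, 0.002]
q, t = pool(est, var)
print(round(q, 3), round(t, 4))  # 0.11 0.0025
```

The (1 + 1/m) factor inflates the between-imputation variance slightly because only a finite number of imputations (here m = 5) approximate the missing-data distribution.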
In addition, for all outcomes, marginal standardized estimates of the predicted proportion, count, or mean were calculated for both the intervention and control groups (35). These estimates average predictions over the observed covariate distribution rather than fixing the covariates at specific values and are particularly useful for estimating relative risk from a multivariable model. They were derived by estimating the predicted values for all participants from the final model as if everyone were in the treatment group and the predicted values for all participants as if everyone were in the control group.
The predicted values were averaged across all participants and are referred to as the marginal standardized estimates. One thousand bootstrap samples were used to generate CIs for the marginal standardized estimates and relative risks. All analyses were conducted by using SAS software, version 9.2.
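The standardization procedure described above can be sketched compactly: fit the outcome model, predict for every participant twice (once as if all were in the intervention group, once as if all were in the control group), and average. The sketch below uses simulated data and a plain logistic model in place of the paper's mixed-effect models, and it omits the 1000 bootstrap resamples used for CIs.

```python
# Hedged sketch of marginal standardization with simulated data: predict
# for all participants under each counterfactual group assignment, then
# average the predictions. A plain logistic model stands in for the
# paper's mixed-effect models; bootstrap CIs are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"arm": rng.integers(0, 2, n),
                   "site": rng.integers(0, 2, n)})
p_true = 1 / (1 + np.exp(-(-1.0 + 0.8 * df["arm"] + 0.3 * df["site"])))
df["continuer"] = rng.binomial(1, p_true)

fit = smf.logit("continuer ~ arm + site", data=df).fit(disp=0)

p_if_treated = fit.predict(df.assign(arm=1)).mean()  # everyone intervention
p_if_control = fit.predict(df.assign(arm=0)).mean()  # everyone control
relative_risk = p_if_treated / p_if_control
print(f"marginal relative risk: {relative_risk:.2f}")
```

Because the two averaged predictions differ only in the group assignment, their ratio is a covariate-standardized relative risk rather than a conditional odds ratio.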
The study was funded by the National Cancer Institute. The funding agency had no role in the design, conduct, or analysis of the study or in the decision to submit the manuscript for publication.
Forty-eight oncologists participated in the trial; 24 were randomly assigned to each study group. A total of 264 encounters with 264 unique patients were recorded (Figure). Four of these encounters (3 from the control group and 1 from the intervention group) could not be coded because of technical problems with the audio recordings. Two hundred sixteen patients had at least a partial postvisit interview, and 202 patients completed all sections of the postvisit survey.
Oncologists in the 2 study groups were similar with regard to measured characteristics, except that a higher percentage of control oncologists were white (92% vs. 67%). The 135 patients in the intervention group and 129 patients in the control group were similar in age, sex, race, marital status, economic security, and length of relationship with their oncologists (Tables 1 and 2).
Twenty-one of the 24 oncologists in the intervention group used the SCOPE CD-ROM (21). The median recorded number of minutes spent logged on was 63.8 (interquartile range, 58.2 to 99.3; mean, 83.6 minutes [SD, 36.8]). Twenty of the 21 oncologists (95%) who used the CD-ROM reported that it influenced them to change their practice.
We measured 2 primary emotion-handling skills outcomes: the number of empathic statements and number of continuer responses to empathic opportunities. In audio recordings collected before the intervention, oncologists in the intervention group used a slightly greater number of empathic statements per conversation (mean, 0.4 [SD, 1.0]) than those in the control group (mean, 0.3 [SD, 0.7]). In postintervention audio recordings of oncologists in both study groups, the mean number of empathic statements per conversation increased (mean, 0.8 [SD, 1.3] in the intervention group vs. 0.4 [SD, 0.8] in the control group). This value increased more among oncologists in the intervention group (adjusted rate ratio, 1.9 [CI, 1.1 to 3.3]; P = 0.024) (Table 3).
With regard to responses to empathic opportunities, before the intervention, oncologists in both the intervention and control groups used similar numbers of continuers (28% vs. 27%, respectively). After the intervention, continuer use differed significantly between the groups, with intervention oncologists using continuers 34% of the time and control oncologists doing so 22% of the time. This difference was not caused by between-group differences in the number of patient expressions of negative emotions.
Sixty-one of the 129 (47%) conversations in the control group had at least 1 empathic opportunity compared with 74 of the 135 (55%) conversations in the intervention group (P = 0.22). The adjusted analyses of response to an empathic opportunity were limited to the 135 conversations with at least 1 empathic opportunity; these conversations included 275 empathic opportunities (range, 1 to 11 opportunities per conversation) (Table 4). Logistic mixed-effect regression analysis indicated that oncologists in the intervention group were significantly more likely than those in the control group to respond to empathic opportunities with continuers (odds ratio, 2.1 [CI, 1.1 to 4.2]; P = 0.028) (Table 3).
In addition to changing oncologist behavior, differences were also seen in patient trust (Table 3). In postintervention visits, patients whose oncologists were in the intervention group reported higher trust in their physicians (estimated mean difference, 0.1 [CI, 0.007 to 0.19]; P = 0.036) than patients whose oncologists did not receive the CD-ROM.
Patients in the intervention group also experienced greater perceived empathy from their oncologists (mean difference, 0.2 [CI, 0.0 to 0.4]; P = 0.058), as well as a greater sense that their oncologists understood them as “a whole person” (odds ratio, 1.6 [CI, 0.9 to 2.9]; P = 0.093). There were no differences in the other measures. Fourteen patient surveys were incomplete; results from sensitivity analyses with multiply imputed data were similar to the presented analyses.
We successfully implemented a 1-hour tailored, computer-based communication skills training program that oncologists could complete independently. In a randomized, controlled trial, the intervention improved both physician emotion-handling skills and patient trust. During discussions with patients, intervention oncologists demonstrated a 2-fold increase in empathic statements used in response to empathic opportunities.
This degree of behavior change among physicians is similar to that seen in intensive multiday courses that use small-group teaching (10-13). The SCOPE CD-ROM is the first computerized communication teaching intervention to show improvement in physician outcomes and the first physician communication intervention of any type to demonstrate improvement in patient trust.
Although experience can be an excellent teacher, physicians usually do not improve their communication skills without external input (13). In fact, in our study, the control oncologists performed slightly worse in the postintervention phase. To improve the quality of communication in medical encounters, more physicians should receive communication skills training that includes individualized, reflective feedback. Before this study, the gold standard was to provide such feedback in the context of a long course with small-group teaching. The brevity of our intervention offers an important advance over the small-group model.
This brief intervention probably worked for at least 3 reasons. First, it was grounded in a strong theoretical foundation and incorporated principles of adult learning. Second, it used structured feedback based on the physicians' own audio-recorded conversations, which allowed them to hear their own shortcomings and successes. Finally, we aimed to influence a limited number of skills: the intervention supplied oncologists with a handful of tools that they could remember and apply in patient encounters.
Another notable finding is the effect of the intervention on patient trust, which is associated with important clinical outcomes. For example, trust has been associated with both increased patient self-reported health and better glycemic control in patients with diabetes (36). Our study is the first to show that a communication intervention with physicians may improve patient-reported outcomes. By contrast, Shilling and colleagues (37) showed changes in physician behavior but no difference in patient satisfaction.
Our study serves as an initial proof of concept that a straightforward, interactive, computerized tool can change physician behavior. Once this technology is refined, it has multiple potential applications for physician training and quality improvement programs. It could reach a broad population of oncologists for relatively little cost, and similar programs could be designed for other specialties.
Physician–patient encounters could be easily recorded by using smartphones and related technology. Most of the feedback process could be automated, with only 1 hour of coding per encounter requiring human input. Brought to scale with a moderate to large number of participating providers, such an intervention could be disseminated at a cost an order of magnitude lower than that of existing residential teaching programs.
Our study has limitations. First, the effects found were measured shortly after the training and may not persist in the long term. Face-to-face training programs have demonstrated good retention up to 1 year after implementation (38). Future work must demonstrate whether effects from a computerized program are comparable.
Second, both study sites were academic medical centers and are perhaps not representative of community oncology practice. However, these physicians spend an average of 30 hours weekly seeing patients and have been in practice an average of 15 years. Furthermore, academic settings provide a considerable amount of cancer care.
Finally, some of the encounters were short visits for chemotherapy follow-up; therefore, oncologists had few opportunities for empathic responses. This fact shows the strength of the intervention, in that it was effective even in a setting without a high prevalence of empathic opportunities. In addition, oncologists who received the intervention may have tried harder to respond empathically when they were audio recorded. If so, this study still demonstrates what can be learned through a short intervention.
Although the intervention requires some sophistication to create the tailored feedback, we subsequently developed computerized “message libraries” that allow us to automate most of the feedback process. The intervention increased patient trust but did not cause a change in other patient outcomes; the reasons for this circumstance require further study.
Patients with cancer frequently bring emotional concerns into their medical encounters, and too often, oncologists are ill-equipped to respond appropriately. This randomized, controlled trial demonstrates that a brief, disseminable technology can improve oncologists' empathic behavior. This, in turn, improves patient trust, which may lead to better adherence to therapy and quality of life. Future research should evaluate the effect of such an approach on both oncologists and patients over time.