Additional strengths of this study include our detailed knowledge of the schedule changes and their implementation at the hospital and our use of data sources beyond traditional administrative data. However, our study has several limitations to consider in its interpretation. First, the nonrandomized design limits conclusions about causality; nonetheless, our inclusion of a control group and the consistent direction of the findings lend credence to the conclusions. Second, this was a single-institution study, although this institution's schedule changes were very similar to those made nationally (3). Third, we relied largely on administrative data, which may contain inaccuracies or omissions; do not provide sufficient information to determine whether care was appropriate, patient-centered, humane, technically proficient, efficient, or timely; and cannot provide a complete picture of adverse events (36). Fourth, our study was not powered to detect mortality effects because of the relatively small number of events. Fifth, we were able to examine only a limited number of outcomes; other outcomes, such as diagnostic delays, might have been more affected by increased discontinuity. Finally, the design of our study assumes that, without the work-hour regulation, teaching-service patients would have had the same changes in outcomes as nonteaching-service patients. However, because patients were consistently assigned to teaching teams earlier in the day and to nonteaching teams later in the day, systematic differences in the patient populations may have caused outcomes on one service to change differently over time than on the other, even without the regulation. It is reassuring that when we removed the most notable source of bias (patients with ICU stays, who were primarily assigned to the teaching service), the results were essentially unchanged.