The problem. The health care field has adopted techniques from industry, such as continuous quality improvement (CQI), to improve patient care processes and outcomes. However, CQI in the health care field has come to mean different things to different audiences. Information about quality improvement must be defined, measured, and reported consistently if improvement practitioners and researchers are to make use of it.
An early interest in research. Heather C. Kaplan, MD, MSCE, a neonatologist by training, followed a straight trajectory from college to Northwestern University School of Medicine, a pediatric residency at Children's National Medical Center in Washington, and a fellowship in neonatal-perinatal medicine at the Children's Hospital of Philadelphia.
But research turned out to be Kaplan's ultimate calling. While at the Children's Hospital of Philadelphia, Kaplan completed a master's program in clinical epidemiology at the University of Pennsylvania School of Medicine, Center for Epidemiology & Biostatistics. Her epidemiology graduate work focused on using measures of variation as markers of quality.
"I've always been very interested in variation, especially variation in outcomes," Kaplan said. "During my clinical training, I noticed how people practiced so differently, both when evidence is present and in the absence of evidence. I have always been fascinated by that."
After completing her master's and her fellowship in 2007, she headed for Cincinnati Children's Hospital Medical Center, "enamored by the work going on there in quality improvement." Kaplan wanted to move beyond documenting and measuring variation and attributing it to quality differences; she wanted to use that information to actually improve the quality of care.
Studying context with support from RWJF. As a grantee of RWJF's Pursuing Perfection: Raising the Bar for Health Care Performance national program between 2002 and 2006, Cincinnati Children's had effectively improved quality in several areas. Yet, even in supportive environments, quality improvement projects vary greatly in the results they are able to achieve, Kaplan observed. Some quality improvement teams struggle to achieve the gains that others realize.
When Kaplan arrived in Cincinnati, researchers there were very interested in understanding what causes quality improvement interventions to be more or less effective in different contexts. Just at that time RWJF was launching a new program, Improving the Science of Continuous Quality Improvement Program and Evaluation (for more information, see the Program Results Report). A team came together—in addition to Kaplan it included a leading expert in quality improvement execution, an investigator with experience in network-based quality improvement, and a colleague with a business background and methodological expertise—to explore this question with RWJF funding.
Developing a framework for contextual factors. Kaplan and her team believed that quality improvement methods themselves work—there is a long history of them working in other industries. "The difference has to be something else," she said, "and it has to be context."
To create a framework for understanding contextual factors in quality improvement, the team reviewed some 13,000 article citations from both health care and business, and abstracted data from 100 of them. They then invited 10 quality improvement experts from both health care and industry to participate in creating a model. The process involved a literature review, two rounds of opinion gathering to identify important contextual factors and clarify definitions, and an in-person meeting to tease out the relationships among the factors.
They tested the resulting Model for Understanding Success in Quality (MUSIQ) in 74 quality improvement projects operating in three different settings, using a web-based questionnaire completed by project staff. They intentionally chose the following settings to provide variation.
- Some 43 Strategic Improvement Priority Initiatives at Cincinnati Children's Hospital. Kaplan describes this setting as "the same organizational context, but different micro system contexts [i.e., unit/department/office] in which these projects are occurring."
- Some 19 projects from the Ohio Perinatal Quality Collaborative. "This is multiple hospitals working on one of two different improvement projects—the same project across different hospitals," Kaplan said.
- Some 12 projects from the Institute for Healthcare Improvement's Improvement Advisor Program. "This is the most variable," Kaplan noted, "with different organizations working on different projects."
Project results: Context matters. MUSIQ identifies 25 key contextual factors at different levels of the health care system that are likely to influence the success of quality improvement efforts. Preliminary results of model testing indicate that contextual factors with the greatest total effects on quality improvement success are:
- Resource availability
- Characteristics of the quality improvement team (leadership, skills, decision-making processes)
- The motivation and quality improvement capability of the micro-system (i.e., the unit/department/office)
"This two-phase project worked well," Kaplan said. "The literature review supported our initial hunch that context was important—and that it was important to attack it in a systematic way. Then the testing gave us the confidence that we had established more validity for our model and were on the right track. Now we can really delve in and advance it further."
Working with the expert panel proved to be beneficial. "We found it was important not just to identify the critical factors," Kaplan said, "but also to think about them across all levels of the health care system and to think about the relationships between different factors and quality improvement success. In this way we can better understand the mechanism of action and how these factors influence success. Ultimately, that will be important when trying to modify them."
Refining the model and building the field. Kaplan and her colleagues continue to refine the MUSIQ model through further testing. At the same time they are using it in quality improvement efforts at Cincinnati Children's and in collaboration with others interested in applying the model in other settings. While she has branched out in a handful of different areas, Kaplan is committed to continuing the work on context. "All of my research is related to applying, studying, and improving quality improvement methods in order to get better outcomes or identify new ways of improving care."
Kaplan is grateful for RWJF's support. "The work we did with RWJF was theory-development and got us to the point of the model. Now there's a lot to do to move it forward and we're hopeful that what we've created will be helpful to others, that people will use it and improve it, and that it will help build and advance the field. That is our goal."
RWJF perspective. Improving the Science of Continuous Quality Improvement Program and Evaluation funded teams of researchers to address a core question within health care environments: "How will we know that change is an improvement?" Research teams tackled an array of projects aimed at improving evaluation frameworks, quality improvement measures, and data collection and methodology. The Robert Wood Johnson Foundation (RWJF) authorized the program for up to $1.5 million for 48 months, from August 2007 through August 2011.
"There is a lot of talk about the way we do quality improvement but not a lot of organized initiatives about how we actually do research about quality improvement," says RWJF director Lori Melichar, PhD, MA. "I am proud of the results of these nine projects. They addressed the challenges we were experiencing by creating and testing survey instruments, developing new research and evaluation methods, and exploring the importance of context in quality improvement. We have created something of a community."