A Framework for Identifying, Classifying, and Evaluating Continuous Quality Improvement Studies

A Profile of Lisa V. Rubenstein, MD, MSPH, a grantee of RWJF's Improving the Science of Continuous Quality Improvement Program and Evaluation

June 25, 2012

The challenge. The health care field has adopted techniques from industry, such as continuous quality improvement (CQI), to improve patient care processes and outcomes. However, CQI in health care has come to mean different things to different audiences. Inconsistent reporting of CQI studies reflects this ambiguity and hinders the efforts of improvement practitioners and researchers to use and develop the evidence base. Clearer terms and definitions for CQI would improve the reporting, cataloguing, and systematic review of CQI interventions.

A long-term commitment to quality improvement research. Lisa V. Rubenstein, MD, MSPH, has had a keen interest in researching quality improvement since she was a medical student at Albert Einstein College of Medicine in the Bronx, N.Y. That interest placed her in a "border zone" between health services research and quality improvement—two fundamentally different activities, the first focused on discovery and the second on application.

"It is the place to get hit by both sides over the head constantly!" she said. "To live in that zone is not easy."

After completing her clinical training at the University of California, Los Angeles, Rubenstein earned a master's degree in public health as an RWJF Clinical Scholar at UCLA. She continues as a professor at the UCLA School of Public Health, in addition to her other positions as Senior Natural Scientist at the RAND Corporation, Professor of Medicine in Residence at the VA Greater Los Angeles and UCLA, and Director of the VA Health Services Research and Development/RAND/UCLA Center of Excellence for the Study of Healthcare Provider Behavior.

Rubenstein and her research colleagues found that health care quality improvement research was not receiving appropriate peer review and that investigators often did not know how to write useful papers. Working with the federal Agency for Healthcare Research and Quality (AHRQ), they helped convene a conference on methodology in quality improvement and invited journal editors to participate.

As a prelude to the conference, they asked expert leaders to identify articles that were, in their view, exemplars in the field of quality improvement. These became Rubenstein's "gold standard article set" through which to explore the different types of articles that are important in advancing quality improvement and implementation science.

Delving into the problem with support from RWJF. At about the same time, RWJF was launching its new program Improving the Science of Continuous Quality Improvement Program and Evaluation (for more information, see the Program Results Report). "It was a really wonderful thing that it came out at that time," Rubenstein said. "There are very few places where you can get funding for this type of 'spade work.' It didn't seem like anyone was going to take care of this problem and if we were going to work in this area we needed to do something to bring science into quality improvement writing in a way that worked."

Rubenstein and her colleagues at RAND engaged a panel of government and university researchers, journal editors, and directors of quality improvement-focused organizations to rate definitions and quality review criteria. The team then applied the quality criteria to the "gold standard" articles. They created a Minimum Quality Criteria Set to assist in developing and evaluating scientific literature that reports evaluation data from quality improvement interventions or their effects on patient health outcomes or clinical processes of care.

The team also initiated a large-scale, interactive, online panel process using ExpertLens—a system developed at RAND for engaging large, diverse groups of stakeholders. Some 119 journal editors, evaluators, patient safety and quality improvement experts, and others participated in a three-week process during which they rated core CQI features, discussed the ratings, and revised their ratings where necessary.

Making progress. Out of this work, Rubenstein and colleagues created a six-item screen for identifying articles that contain key components of continuous quality improvement. For example, the screen asks:

  • Did the improvement initiative include data (quantitative or qualitative) collected systematically according to a design or plan with specified methods?
  • Did leaders of the improvement initiative meet to review feedback information during implementation?
  • To what extent were local conditions at study organizations or sites taken into account in the design and/or implementation of the set of specific changes for improving care?

When they applied the CQI screen to 106 quality improvement articles, each of the six features appeared in at least some of the articles, though individual articles varied in how many of the features they included. For example, 64 percent of the articles included data that was collected systematically, and 61 percent described changes that were at least somewhat adapted to local conditions. All six features were present in 14.2 percent of the articles.
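These percentages are simple prevalence counts over coded articles. As a minimal, hypothetical sketch (in Python, with invented feature names and toy data rather than the team's actual coding scheme), tallying a screen like this might look like:

```python
# Sketch of tallying CQI screen features across coded articles.
# The feature keys and sample data are hypothetical; the real study
# coded 106 published quality improvement articles against six items.

FEATURES = [
    "systematic_data_collection",   # data collected per a specified design
    "feedback_review_by_leaders",   # leaders reviewed feedback during implementation
    "adapted_to_local_conditions",  # changes tailored to local conditions at sites
    # ...the three remaining screen items would appear here in a full instrument
]

def feature_prevalence(articles):
    """Return the share of articles exhibiting each feature, plus the
    share exhibiting every feature at once."""
    n = len(articles)
    per_feature = {
        f: sum(a.get(f, False) for a in articles) / n for f in FEATURES
    }
    all_present = sum(
        all(a.get(f, False) for f in FEATURES) for a in articles
    ) / n
    return per_feature, all_present

# Toy usage with three invented article codings:
articles = [
    {"systematic_data_collection": True,  "feedback_review_by_leaders": True,
     "adapted_to_local_conditions": True},
    {"systematic_data_collection": True,  "feedback_review_by_leaders": False,
     "adapted_to_local_conditions": True},
    {"systematic_data_collection": False, "feedback_review_by_leaders": True,
     "adapted_to_local_conditions": False},
]
per_feature, all_present = feature_prevalence(articles)
print(per_feature, all_present)
```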

Enhancing electronic searching. Searching for quality improvement articles electronically still needs a lot of refinement, Rubenstein said. "Our best search strategy in terms of specificity and sensitivity still got 15,000 papers. Of course, if someone searches for quality improvement within a defined topic, like diabetes, there will be a much smaller number."
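Sensitivity and specificity here carry their usual retrieval meanings: sensitivity is the share of truly relevant papers a strategy finds, and specificity is the share of irrelevant papers it correctly excludes. A minimal sketch, with invented numbers rather than the project's data, shows why a strategy can score well on both and still return thousands of papers:

```python
# Sketch of evaluating a literature-search strategy against a
# gold-standard set of known-relevant papers. All numbers below are
# invented for illustration; they are not the project's actual figures.

def search_performance(retrieved, relevant, corpus_size):
    tp = len(retrieved & relevant)     # relevant papers the search found
    fp = len(retrieved - relevant)     # irrelevant papers it returned
    fn = len(relevant - retrieved)     # relevant papers it missed
    tn = corpus_size - tp - fp - fn    # irrelevant papers correctly excluded
    return {
        "sensitivity": tp / (tp + fn),  # share of relevant papers retrieved
        "specificity": tn / (tn + fp),  # share of irrelevant papers excluded
        "precision":   tp / (tp + fp),  # share of retrieved papers that are relevant
    }

# Toy numbers loosely echoing the quote: against a huge database, a
# strategy returning 15,000 papers can be highly sensitive and highly
# specific, yet only a tiny fraction of what it returns is relevant.
relevant  = set(range(100))                           # 100 gold-standard papers
retrieved = set(range(90)) | set(range(1000, 15910))  # 90 hits + 14,910 misses
print(search_performance(retrieved, relevant, corpus_size=1_000_000))
# -> sensitivity 0.90, specificity ~0.985, precision 0.006
```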

The difficulty of getting good, usable electronic search results for quality improvement articles keeps the field from moving forward, Rubenstein believes. "How can you learn from everyone else in this field if you don't have a way to electronically search?" Rubenstein asked. "As a reviewer, I find that authors of, for example, meta-analyses of quality improvement literature will have missed key papers they should have included. They are building on, in a sense, a random group of papers that they just happen to know or find somehow."

Rubenstein's team is attempting to deal with this problem through "machine learning," a process in which researchers first search for papers by hand and then use a computer program to "learn" from the results of the hand search. That way, the computer begins to "figure out" which papers to return when someone searches on a particular term. They hope eventually to work with the National Library of Medicine to include a quality improvement indexing term in MEDLINE, its database of biomedical literature.
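A minimal sketch of the general idea described above, assuming scikit-learn and invented training data rather than the team's actual pipeline: a text classifier is fit to hand-labeled records, then used to score unseen ones.

```python
# Sketch of learning to recognize quality improvement papers from a
# hand-curated search. Assumes scikit-learn; the training data are
# invented stand-ins for hand-labeled titles/abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = judged a quality improvement paper by the hand search, 0 = not.
abstracts = [
    "A plan-do-study-act cycle to reduce catheter infections ...",
    "Continuous quality improvement in primary care depression treatment ...",
    "A randomized trial of a new anticoagulant for atrial fibrillation ...",
    "Genome-wide association study of type 2 diabetes ...",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each abstract into a weighted word-frequency vector;
# logistic regression then learns which terms signal a "QI paper".
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, labels)

# Score an unseen record: estimated probability it belongs to the QI literature.
new = ["Audit and feedback to improve hypertension control in clinics"]
print(model.predict_proba(new)[0][1])
```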

"There are no quality improvement words that can help you get to this literature," Rubenstein said. "If there were, that would improve the searching. Electronic searching is less biased. Additional articles can be found in reference lists, but the electronic search would ensure that you pull in the relevant body of work in the areas in which you're working."

"Huge ripple effects." Rubenstein and her team have continued their work with funding from the Veterans Administration—and there's still a lot to do. "It's going to take some continued work in the field to get criteria that can actually be applied reliably," Rubenstein said. "These concepts are difficult to parse out and it's critical to get that right."

Articles about quality improvement also need to provide more information about how to apply interventions in another setting or use them as the basis for new research, she said.

"Quality improvement efforts are being published but are not being used," Rubenstein said, "because the investigators do not include the right information in their articles to make them useful."

The team's similar conceptual work in the area of patient safety has led to a series of publications that Rubenstein considers a natural outgrowth of the RWJF-funded project. "While these were not funded by RWJF they would not exist except for its funding," she said. "The Foundation really pushed forward an agenda of how we look at this field and how we make the language and the standards more clear.

"I think the impact of the CQI program on this field was very large," she continued. "Our project, in terms of bringing together a lot of people who can start to think about this in a systematic way, has had huge ripple effects, ultimately, that I would attribute to RWJF."

RWJF perspective. Improving the Science of Continuous Quality Improvement Program and Evaluation funded teams of researchers to address a core question within health care environments: "How will we know that change is an improvement?" Research teams tackled an array of projects aimed at improving evaluation frameworks, quality improvement measures, and data collection and methodology. The Robert Wood Johnson Foundation (RWJF) authorized the program for up to $1.5 million for 48 months, from August 2007 through August 2011.

"There is a lot of talk about the way we do quality improvement but not a lot of organized initiatives about how we actually do research about quality improvement," RWJF director Lori Melichar, PhD, MA, said. "I am proud of the results of these nine projects. They addressed the challenges we were experiencing by creating and testing survey instruments, developing new research and evaluation methods, and exploring the importance of context in quality improvement. We have created something of a community."