Federal Tool to Assess Community Health Centers May Be Flawed

RWJF scholar finds that a tool used to evaluate patient-centered care at community health centers might not measure the quality of care actually delivered.

March 21, 2012

The patient-centered medical home—a health care delivery model touted as an antidote to the fragmented, uncoordinated and reactive care many patients get today—is poised for growth thanks to a big boost from the federal government.

But there’s a fly in the proverbial ointment—at least when it comes to use of this model at community health centers. So warns Robin Clarke, MD, a Robert Wood Johnson Foundation (RWJF) Clinical Scholar (2010-2012) and an internist and researcher at the University of California, Los Angeles.

The federal government, Clarke says, may be using a flawed tool to assess the quality of patient-centered care at community health centers. And that could hamper the ability of these clinics to expand and improve care to the patients they serve, the bulk of whom are poor and uninsured.

“We really have a wonderful opportunity to improve and increase access to care in this population thanks to health care reform,” Clarke says. “But if we use the wrong tool to evaluate these centers, we might not appropriately incentivize and encourage effective practice reform.”

The federal health reform law enacted in 2010 included provisions to support the growth of community health centers and to improve the quality of care they deliver. One way it does that is by encouraging community health centers to adopt attributes of the patient-centered medical home, a model of care in which a multidisciplinary team of providers offers comprehensive, coordinated care on a continuous basis.

‘Lost in Translation’

The National Committee for Quality Assurance (NCQA), a nonprofit organization, has developed the most commonly used tool for assessing the degree to which primary-care practices are acting as patient-centered medical homes and, by implication, are delivering higher quality care. However, most of the evidence used to design this tool came from private practices.

Now, the federal government is using the same tool to assess community health centers. That could cause problems because community health centers serve a vastly different population than private practices. They often offer different types of services, many of which are not measured by the tool, to ensure impoverished patients have access to care, Clarke says. “There may be something that is lost in translation there.”

Indeed, Clarke’s study found no relationship between how well a health center scored on the NCQA tool and the quality of care it provided. The study was published on March 5 in Health Affairs, a leading health policy journal.

Continuing to use a flawed tool could have unintended—and dire—consequences, Clarke says.

First, it could prevent effective community health centers from getting the attention, and funding, they might otherwise receive if a more sensitive tool were used, Clarke says.

Second, community health center leaders could place greater emphasis on services recognized by the tool and de-emphasize services it does not recognize, such as transportation to appointments, language interpretation services, and help determining eligibility for support programs. These kinds of “enabling” services could begin to wane, which, in turn, could limit access to care, Clarke says.

“It would be the health care equivalent of teaching to the test, but in this case, the test itself may be flawed,” Clarke says.

For the study, Clarke and his team examined the medical records of 50 randomly selected diabetic patients from each of 30 community health centers in the Los Angeles area, which together serve some 600,000 patients annually. To assess the quality of care, they determined whether these patients had received five recommended diabetes screening tests in the previous year and how well three risk factors that signal more serious conditions were controlled.

Clarke and his colleagues found that each of the 30 health centers would have qualified as a patient-centered medical home. But they found no significant relationship between a community health center’s NCQA score and the quality of diabetes care it delivered.

The findings suggest that the tool does not accurately measure the quality of care delivered at these community health centers. The study, Clarke notes, was limited because it only looked at the quality of care for one chronic disease and did not examine whether NCQA scores were associated with other medical home outcomes such as improved patient satisfaction or reduced emergency department use. Nevertheless, these findings indicate that more research is needed to evaluate the tool’s application to community health centers before it is used to measure and shape their performance, he says.

He and his co-authors cite several possible explanations for the tool’s failure.

First, the tool’s format may not capture all of the ways community health centers can act as medical homes. To use the tool, for example, providers complete an online checklist that does not necessarily include all services a center provides, including the kind of “enabling” services that facilitate access to care.

The scoring also poses problems, Clarke and his co-authors say. For example, the tool places a heavy emphasis on technology relative to other aspects of the patient-centered medical home.

Finally, he notes that the tool has not been proven to effectively measure the quality of care delivered by a community health center. “The major issue here is that the NCQA assessment tool was developed based on evidence of what worked for private primary-care practices that delivered care to insured patients,” Clarke says in a news release. “Because we have limited experience in applying the NCQA tool to community health centers, there is a question of what effective, patient-centered care for low-income patients actually entails.”
