Comparing Errors in Medicaid Reporting Across Surveys

Surveys of Medicaid insurance coverage vary in design and accuracy.

Medicaid enrollment must be measured accurately to track the progress of reforms under the Patient Protection and Affordable Care Act. Surveys, however, sometimes count Medicaid enrollees as uninsured, even though these respondents have public insurance rather than lacking insurance altogether.

These investigators reviewed existing research on the accuracy of respondent reports of public insurance coverage, which includes Medicaid and the State Children’s Health Insurance Program.

They found that studies fall into one of two designs. Experimental studies, conducted in conjunction with statewide general population studies, start from administrative enrollment data, send a survey to a sample of known enrollees, and compare how respondents report their coverage against that administrative record (accuracy 74–89%). Matching studies take responses from an existing survey and link them to corresponding administrative data (accuracy 57–82%).

Key Findings:

  • Experimental studies show higher levels of accuracy than matching studies.
  • People with Medicaid who do not report it may misattribute it to some other form of coverage.
  • Coverage reported in some studies is more accurate than in others because of differences in the reporting time frame (coverage over the past year versus at a single point in time).
  • Both experimental and matching studies have modest inherent selection bias.

“Knowing the strengths and weaknesses of the various sources of survey estimates, why they differ, and how they can be improved is important so that they can be appropriately used in health policy research,” the authors conclude.