Evaluation of Maximizing Enrollment

The Program Being Evaluated

In 2008, the Robert Wood Johnson Foundation (RWJF) announced a major new grant initiative, Maximizing Enrollment for Kids: Making Medicaid and SCHIP Work (Maximizing Enrollment), designed to support states seeking to maximize the coverage of uninsured children who were eligible for either a state’s Children’s Health Insurance Program (CHIP) or its Medicaid program (approximately two-thirds of uninsured children nationwide).

Maximizing Enrollment sought both to increase coverage in these funded states and to create models and build momentum for other states to follow suit. More than 28 states applied for the Maximizing Enrollment grant, which included $1 million in direct support to awardee states and a combination of technical assistance and collaborative-learning support led by the National Academy for State Health Policy (NASHP).

After a competitive grant application process in 2009, RWJF awarded Maximizing Enrollment grants to eight states—Alabama, Illinois, Louisiana, Massachusetts, New York, Utah, Virginia, and Wisconsin—with funding extending for four years, from 2009 through 2012. All eight states had demonstrated a strong commitment to increasing children’s enrollment in public coverage programs, even in the face of difficult economic times and changes in political leadership. With economic conditions worsening in many states, these plans often featured efforts to improve administrative efficiency, in part because that could help preserve available resources for increasing children’s coverage. Subsequently, with the enactment of the Affordable Care Act (ACA) in 2010, the Maximizing Enrollment plans were broadened to encompass adults as well as children, and the grant program was rebranded as Maximizing Enrollment: Transforming State Health Coverage.

About the Evaluation

This evaluation was conducted from January 2009 to March 2013. In 2009, RWJF contracted with Mathematica to conduct a quantitative evaluation of the Maximizing Enrollment initiative. As originally conceived, the evaluation had two goals. First, drawing on administrative data from the grantee states, it would develop and monitor several measures of children’s enrollment and retention in public coverage. These measures would be shared regularly with NASHP (and RWJF) as a means of monitoring progress toward maximizing coverage in the grantee states and directing NASHP’s technical assistance. Second, it would rigorously estimate the impact of the grant initiative on children’s Medicaid and CHIP enrollment, including the effects both of major policy or procedural changes arising from the grant and of the initiative overall.

Prior to this report, the evaluation focused mainly on the first goal of measuring and monitoring states’ progress in enrolling and retaining children in coverage; over the course of the grant, the activities tied to this goal became quite extensive. For example, the evaluation team participated directly in many of the NASHP annual grantee site visits, briefing states on their progress in maximizing coverage (based on the monitoring data) and discussing a range of questions about the measures used, the quality of states’ data, and the opportunities for further analysis. As a follow-up to these meetings, the evaluation team conducted a number of ad hoc analyses for the grantee states, examining more specific aspects of their performance or assessing how they could improve their data or use them most successfully.

The evaluation team partnered with NASHP on a range of dissemination activities, focused on both specific findings and broader lessons learned from the formative evaluation. These activities included presentations at each of the annual grantee conferences, participation in selected NASHP learning collaborative calls and webinars, and coauthorship of issue briefs on performance measures and on the effective use of administrative disenrollment codes. The activities concluded formally at the end of 2012, though the evaluation continued to support NASHP on a modest, ad hoc basis through the final year of its extended grant.

This report presents summary findings from the second component of the evaluation: the analysis of Maximizing Enrollment’s impact on children’s enrollment in CHIP and Medicaid.

Knowledge and Impact

This assessment is designed to be comprehensive: it captures the combined effects of all the specific policy and procedural changes, as well as the effects of any further changes that cannot be easily measured or observed—such as improvements in staff culture, eligibility systems, or coordination across programs or agencies.

The evaluators estimate impacts using a balanced panel, difference-in-differences (DD) design that uses enrollment trends in a selected group of non-Maximizing Enrollment “comparison states” as the counterfactual against which to measure the impact of the grant. To establish that these comparison states serve as a valid counterfactual, they apply statistical methods to verify that enrollment trends in the comparison and grantee states were well matched during the several years prior to the start of the grant. In addition, using regression methods, they control, across states, for important confounders that might affect enrollment and vary substantially over time, including measures of economic conditions and the number of children actually eligible for Medicaid or CHIP coverage.
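The DD logic described above can be sketched in a few lines of code. The example below is a minimal illustration on synthetic data, not the evaluators' actual model or data: the state labels, variable names (`enroll`, `unemp`), effect sizes, and years are all assumptions made up for illustration. It shows the core idea of a panel regression with state and year fixed effects, a time-varying confounder, and a treated-by-post interaction whose coefficient is the DD impact estimate.

```python
# Illustrative difference-in-differences sketch on synthetic panel data.
# All variable names, states, and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Balanced panel: 8 "grantee" states and 8 "comparison" states, 2006-2012.
states = [f"s{i}" for i in range(16)]
treated = {s: int(i < 8) for i, s in enumerate(states)}

rows = []
for s in states:
    base = rng.normal(100, 5)                # state-level enrollment baseline
    for y in range(2006, 2013):
        post = int(y >= 2009)                # grant period begins in 2009
        unemp = rng.normal(7, 1)             # time-varying confounder (economy)
        enroll = (base
                  + 2.0 * (y - 2006)         # common secular trend
                  + 1.5 * unemp              # enrollment rises with unemployment
                  + 2.0 * post * treated[s]  # true "impact" built into the data
                  + rng.normal(0, 1))        # idiosyncratic noise
        rows.append(dict(state=s, year=y, post=post,
                         treated=treated[s], unemp=unemp, enroll=enroll))
df = pd.DataFrame(rows)

# State fixed effects absorb the treated main effect; year fixed effects
# absorb the post main effect. The treated:post coefficient is the DD estimate.
model = smf.ols("enroll ~ treated:post + unemp + C(state) + C(year)",
                data=df).fit()
print(f"DD impact estimate: {model.params['treated:post']:.2f}")
```

In the real evaluation, a parity check on pre-2009 trends (the "well matched" requirement above) would precede this regression, since the DD estimate is only credible when grantee and comparison states were trending similarly before the grant.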

Despite the many changes that took place over the course of the grant, the evaluators found no significant evidence that Maximizing Enrollment increased enrollment of children. Across numerous specifications, estimated impacts are positive but small (typically 1 to 3 percent) and not statistically significant. Although these findings certainly allow for the possibility that Maximizing Enrollment produced modest effects on enrollment that cannot be distinguished from chance, they strongly suggest that economic conditions and other factors contributed much more to the enrollment growth seen in the eight grantee states during the initiative.