The Robert Wood Johnson Foundation Approach to Evaluation

Program evaluations are a cornerstone of our efforts to learn from our work. In any given year, the Robert Wood Johnson Foundation (RWJF) conducts between 30 and 40 active program evaluations and invests about 5 percent of our annual grantmaking budget in them. Evaluations vary in size and scope, ranging from small program assessments that rely heavily on expert judgment to multimillion-dollar mixed-method evaluations of program outcomes. "Mixed-method" means that the evaluation employs a variety of quantitative and qualitative approaches.

RWJF's approach to evaluation is guided by the following key tenets:

Solicited: Foundation staff members generally determine the direction of a program evaluation and then open the evaluation to a limited competition. While RWJF usually develops initial evaluation plans in consultation with the director of the funded program and key stakeholders, the selected evaluators heavily influence the final evaluation designs.

Learning-Centered: In its early days, the Foundation designed most program evaluations to test the effectiveness of a particular attempt to address an issue or problem. Today, we employ a much more varied approach, in which evaluation designs reflect where the greatest opportunities for learning lie. Learning often focuses on:

  • how to improve the program
  • how best to create social change
  • ways to document helpful approaches and best practices for the field, and
  • ways to improve our grantmaking.

To support learning, an evaluation's focus and questions need to be suited to the maturity of our grantmaking in the area in which we are working. In cases where the information base is less developed, our evaluations might focus on understanding promising approaches for making change. For example, we funded an evaluation of New York City's regulations to prevent childhood obesity through day-care settings. Although the policy was implemented with public funding and the effort was relatively new, it was an ambitious approach that other cities and states were considering for adoption. In more developed areas, evaluations might center on refining or replicating promising approaches, or on assessing their outcomes. For example, we funded a large, mixed-method evaluation of Covering Kids & Families, a program to enroll children and their family members in government-funded insurance programs through outreach to families and through simplification and coordination of health insurance coverage programs. (To learn more about the findings and lessons from this evaluation, visit our special feature, Assessing the Impact of Covering Kids & Families.)

Impartial: We try to assure that independent third parties follow accepted scientific procedures and avoid conflicts of interest, so that they produce constructive, impartial evaluations of the results of our programs. Evaluations are not intended to prove that our strategies are the right ones, but rather to assess the strengths and weaknesses of the strategies we've employed to achieve the results we seek.

Distinct from Monitoring and Accountability Efforts: Our program evaluations are separate from our grant monitoring and accountability functions. We conduct evaluations in the interest of learning, discussing findings, and assuring that those findings are as constructive as possible. Evaluations focus on assessing strategies, processes and outcomes, and on capturing the factors that account for our programs' successes and failures, and our own. Monitoring the performance and accountability of individual grants, or of a series of grants for a single project, is generally the responsibility of the Foundation's program officers, grants administrators or national program staff.

Intended for a Broad Audience: Audiences for evaluation findings include not only RWJF staff and Trustees but also policy-makers, researchers, advocates, RWJF program grantees and other program stakeholders. Many of the findings from our funded evaluations are available in peer-reviewed journals. We publish evaluation reports on our Web site. And we continue to seek more user-friendly ways to capture and share the many important lessons our evaluations hold for continuing efforts to improve health and health care.

Transparent to Grantees: To maintain a constructive relationship with grantees, it is imperative that an evaluation clearly specify its purpose and the activities the evaluation team will undertake. This is best done from the Call for Proposals onward, to assure that grantees know what is expected of them and what they can expect from the evaluation in return. Prospective applicants need such information to decide whether to apply for support and to plan and budget accordingly.