Evaluation: What Grantees Can Expect

People mean different things by the word "evaluation," so we want you to understand our meaning and purpose. The purpose of RWJF evaluations is to learn from a program’s policies and strategies and to inform the field. To do that, we need not just your cooperation but your collaboration as well:

  • To improve the intellectual value of the evaluation
  • To assure better planning and logistics

This page clarifies our approach so that the evaluation produces the most constructive information for all concerned. The process is not perfect, but it will get better with your help and active participation.

People have realistic concerns about evaluation, and we ask you to work with us to address them. Evaluation is often thoughtlessly misused, or used for purposes very different from RWJF’s goal of learning. People are generally concerned about three things:

  1. Imposition of someone else's ideas on the program. To some extent this has to happen, because the program has many stakeholders and many interests that need to be satisfied. But we can minimize problems through your active participation in rolling out the evaluation, and we can honor your ideas about what the program is supposed to accomplish.
  2. Whether their own effort or performance is being evaluated. RWJF evaluation is about learning, not accountability. That means we have to believe in the quality of the grantees that receive funding, and we believe you are making every effort to do the work. It is not necessary for us to be involved in your performance; in fact, that gets in the way. Other staff monitor performance, but evaluation teams do not. The best way to address this concern is to offer some examples, which appear below.
  3. Response burden of answering useless questions. We recognize that data collection takes time and energy that you might otherwise devote to the program itself. Therefore, we will work with you to minimize this burden and to maximize the usefulness of the information.

An important principle of the evaluation work is respect for the program. This means respect for your ideas, respect for your effort, and respect for your time and participation.

Improving the intellectual value of evaluation. We want and need your active participation in planning the evaluation. Sometimes this means helping us ask the right evaluation questions and choose the focus. Sometimes the focus will have been determined before you receive the award, so that we can defend a budget to the Foundation's trustees. Even then, there is often some room for revision, and there is still room to make sure that you will get appropriate, valuable information.

Please work with us to make sure that constructive, high-quality information comes out of the evaluation.

Better planning and logistics. Realistically, evaluation depends on participating grantees to provide information. That means we must seriously consider how much data collection and other reporting we can impose on the program. This is called response burden. We either need to minimize it or make sure the burden is worth the trouble. We can do this in several ways:

  • Work with grantees on the logistics of data collection so that it is less intrusive.
  • Pay grantees for their time and effort in data collection.
  • Give grantees information from the evaluation that they can use.

All of these methods have been successfully used in the past. Some examples:

In the Active for Life initiative, we requested that each grantee allocate 25 percent of a staff person's time to an evaluation liaison role: not a researcher, but someone intimately involved with the program who could serve as gatekeeper for interviews and for the flow of data to the evaluation unit. In exchange, grantees received valuable information about the program's effects at their locations, which they could share with their parent organizations. In several cases, this led to expansion of the program within those organizations; the YMCA of Chicago, for example, expanded the program throughout the metropolitan area. Another grantee requested technical assistance from the evaluation team to help maintain quality control after the grant ended.

In the Prescription for Health initiative, grantees worked with the evaluation team on data collection. These efforts resulted in joint publications that positioned the grantees for new awards from the National Institutes of Health, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, and other sources.

In the Aligning Forces for Quality initiative, the evaluation team is providing each of the 14 participating regions with quality profiles that will permit better planning and targeting of quality improvement efforts to problem spots. In addition, the evaluation team is working on a few site-specific substudies where grantees believe there are special opportunities to learn more about how to do quality improvement.

Where do we start? Ideally, the Call for Proposals included at least some detail about the evaluation. The Frequently Asked Questions are also a good place to learn what will be expected of grantees in the evaluation, and applicant workshops feature presentations by the evaluation team. Finally, once the grants are awarded, you can look forward to an early "get acquainted" visit or phone call from the evaluation team.