People mean different things by the word "evaluation," so we want to be clear about our meaning and purpose. The purpose of RWJF evaluations is to learn from a program's policies and strategies and to inform the field. To do that, we need not just your cooperation but your collaboration as well.
This page clarifies our approach so that the evaluation yields the most constructive information for everyone concerned. The process is not perfect, but it will get better with your help and active participation.
People have realistic concerns about evaluation, and we ask you to work with us to address them. Evaluation is often misused thoughtlessly, or used for purposes very different from RWJF's goal of learning. People are generally concerned about three things:
An important principle of the evaluation work is respect for the program. This means respect for your ideas, respect for your effort, and respect for your time and participation.
Improving the intellectual value of evaluation. We want and need your active participation in planning the evaluation. Sometimes this means helping to ask the right evaluation questions and choose the focus. Sometimes the focus will have been determined before you receive the award, so that we can defend a budget to the Foundation's trustees. Even then, there is often room for revision, and always room to make sure that you will get appropriate, valuable information.
Please work with us to make sure that constructive, high quality information comes out of the evaluation.
Better planning and logistics. Realistically, evaluation depends on participating grantees to provide information. That means we must consider seriously how much data collection and other reporting we can impose on the program; this is called response burden. Either we minimize it, or we make sure the burden is worth the trouble. We can do this in several ways:
All of these methods have been successfully used in the past. Some examples:
In the Active for Life initiative, we asked each grantee to designate an evaluation liaison at 25 percent time. This was not a researcher but someone intimately involved with the program, who could serve as gatekeeper for interviews and for the flow of data to the evaluation team. In exchange, grantees received valuable information about the program's effects at their sites, which they could share with their parent organizations. In several cases this led to expansion of the program within those organizations; the YMCA of Chicago, for example, expanded the program throughout the metropolitan area. Another grantee requested technical assistance from the evaluation team to help maintain quality control after the grant ended.
In the Prescription for Health initiative, grantees worked with the evaluation team on data collection. These efforts resulted in joint publications that positioned the grantees for new awards from the National Institutes of Health, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, and other sources.
In the Aligning Forces for Quality initiative, the evaluation team is providing quality profiles to each of the 14 participating regions, which will permit better planning and better targeting of quality improvement efforts to problem spots. The team is also working on a few site-specific substudies where grantees see special opportunities to learn more about how to do quality improvement.
Where do we start? Ideally, the Call for Proposals included at least some detail about the evaluation. The Frequently Asked Questions are also a good place to seek clarity on what the evaluation will expect of grantees, and applicant workshops feature presentations by the evaluation team. Finally, once the grants are awarded, you can look forward to an early "get acquainted" visit or phone call from the evaluation team.