Evaluation of the Building Healthy Teen Relationships Program

Start Strong: Building Healthy Teen Relationships (Start Strong) was a four-year, national program of the Robert Wood Johnson Foundation and Blue Shield of California Foundation in collaboration with Futures Without Violence. Start Strong was one of the largest initiatives ever funded to promote healthy relationships among 11- to 14-year-olds and identify promising strategies to prevent teen dating violence (TDV). 

The Start Strong evaluation consisted of two parts: an outcome evaluation and a policy evaluation. Its purpose was to assess the overall impact of Start Strong by examining:

  • the effectiveness of the program among students and teachers; and
  • the adoption, implementation, and sustainability of TDV prevention policy efforts in Start Strong sites.

About the Tools and the Evaluation

Outcome Evaluation: Selection of Schools for Student and Teacher Data

Start Strong was implemented in 11 sites across the country and included multiple components: school TDV prevention curricula, social marketing, parent activities, and policy efforts. For the curricula, grantees chose between two evidence-based programs: Safe Dates and The Fourth R. In accordance with Start Strong’s community-driven focus, grantees defined other components, which varied across sites. Given this variability, the following criteria were defined to maximize consistency in the sites that were part of the outcome evaluation:

  • implementation of Safe Dates to 7th graders during the 2010-2011 school year,
  • a minimum of 100 students per grade in order to have adequate statistical power, and
  • feasibility of participation in the evaluation.

Three sites met these criteria, collectively representing mid-sized and large urban areas, racial/ethnic diversity, and regional diversity. The quasi-experimental evaluation design matched four comparison schools to the intervention schools on the following criteria: school size; percent of students on free or reduced-price lunch; race/ethnicity; and metropolitan area characteristics. Across the schools, the percentage of students on free/reduced-price lunch ranged from 43 percent to 95 percent. The Start Strong schools were in Indianapolis, IN (2), Los Angeles, CA (1), and Bridgeport, CT (1). The comparison schools were in Indianapolis, IN (2), San Diego, CA (1), and Saginaw, MI (1).

  • Students: Student data were collected in four waves: fall 2010, spring 2011, fall 2011, and spring 2012 (grade 7 for waves 1 and 2; grade 8 for waves 3 and 4). A total of 2,626 students were eligible to participate. Parent consent and student assent were obtained for 1,487 students (57%; range of 44% to 71% across schools). On average, 96% of consented students completed the survey (wave 1: 96%; wave 2: 93%; wave 3: 98%; wave 4: 96%). The student survey collected data on TDV-related attitudes and behaviors (see Appendix B for a detailed list of constructs). The sample was not nationally representative. At wave 1, the average age of participants was 12 years. The student participants were 50% female and 50% male; race/ethnicity was 23% white, 28% African American, 32% Hispanic, and 17% other. Gender and race/ethnicity were included as control variables in the statistical analyses.
  • Teachers: Data were collected from 7th and 8th grade teachers at two waves: fall 2010 and spring 2012. Participants included core teachers (math, social studies, language arts, science) and “specials/electives” teachers (e.g., health, physical education, advisory). A total of 246 teachers across the eight schools were invited to participate; 185 participated at wave 1 (75% participation rate), and 125 participated at wave 2 (29% attrition rate from wave 1 to wave 2). Teachers reported on TDV-related attitudes and school policies (see Appendix C for a detailed list of constructs).
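The headline participation figures above follow from the reported counts by simple arithmetic. As an illustration only (Python is used here for convenience; the rounding convention is an assumption), the rates can be reproduced from the counts given in the text:

```python
# Reproduce the participation rates reported in the methodology,
# using only the counts stated in the text.

# Students: consent rate among eligible students
eligible_students = 2626
consented_students = 1487
consent_rate = consented_students / eligible_students * 100
print(f"Student consent rate: {consent_rate:.0f}%")  # 57%

# Students: average completion rate across the four survey waves
wave_completion = {1: 96, 2: 93, 3: 98, 4: 96}
average_completion = sum(wave_completion.values()) / len(wave_completion)
print(f"Average completion rate: {average_completion:.0f}%")  # 96%

# Teachers: wave 1 participation rate among invited teachers
invited_teachers = 246
wave1_teachers = 185
teacher_participation = wave1_teachers / invited_teachers * 100
print(f"Teacher wave 1 participation: {teacher_participation:.0f}%")  # 75%
```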

Policy Evaluation: Data Collection and Sources

RTI International (RTI) conducted the policy evaluation using multiple data collection methods and sources to address the research questions. These included:

  • Document Review. To provide additional information about formal policy related to TDV, RTI reviewed documents from each of the 11 sites. Documents spanned multiple levels, including state legislation and state education agency (SEA) policy; local education agency (LEA) policy, administrative regulations, and codes of conduct; and school-level student handbooks. Documents were provided by sites and/or identified through web searches. RTI reviewed all policies explicitly addressing TDV at the state, district, or school level, as well as policies addressing bullying and harassment, which might be applicable to TDV.
  • Structured Telephone Interviews. To describe each site’s efforts to influence and change policy and practice within the domains noted above, RTI interviewed each site’s “policy champion”: each site’s coordinator identified the person who was most knowledgeable regarding TDV policy change efforts. RTI conducted interviews at three time points (early 2011, fall 2011, and fall 2012). The first interview established the status of the policy adoption/implementation process, the key players, and their expectations about how policy adoption/implementation would proceed over the next two school years. Subsequent interviews reviewed any changes to policy, efforts to inform policy, and any external events influencing policy adoption and implementation. Policy champions also reported on practice changes in key areas (e.g., universal and targeted teen dating violence education, staff training, parent education) that may either result from or precede formal policy change. These interviews assisted the evaluation team in interpreting policy documents.
  • Stakeholder Survey. To assess the impact of sites’ policy efforts among educators, RTI conducted a web-based survey of school staff and LEA staff in 10 sites. Between 5 and 23 respondents were recommended by each Start Strong coordinator as being the most knowledgeable regarding TDV policy and prevention. The survey instrument used items comparable to those on the teacher survey fielded in the outcome evaluation sites, as well as items recommended for evaluating policy advocacy efforts.

Download the methodology and review Appendices A, B, and C for additional information. The surveys and instruments are also available for download.