In 2018, Maryland’s Montgomery County Department of Health and Human Services (DHHS) issued new guidance for contractors who deliver youth mentoring services, creating a uniform process for measuring quality and outcomes. The guidance identifies research-supported ways to measure outcomes, enabling the department to more effectively track the impact of its programs. Standardizing these measures—and the tools used to collect and report on them—can also help DHHS monitor its youth mentorship contracts consistently and report on countywide mentoring results.

To develop this guidance, the department convened a work group at the end of 2016 that included subject matter experts and representatives from the Pew-MacArthur Results First Initiative. Together, they assessed the evidence supporting the county’s youth mentorship programs to verify that they were operating according to shared principles of effectiveness and to determine what metrics were already being tracked. The work group found that few of the programs used evidence-informed models (those that rely on national, research-based quality standards for mentoring programs), and that performance and outcome measures used to track services varied widely.

To better align county services with evidence-informed practices and to create a consistent way to track program performance, the group:

  1. Defined critical terms. Team members developed a shared definition and logic model for youth mentorship to provide consistency in what is considered “mentoring” and to distinguish programs that provide mentorship from similar efforts, such as tutoring services.
  2. Consulted the research and compared it to existing services. They used resources from the National Mentoring Partnership and the National Mentoring Resource Center (NMRC) to identify key practice elements and developmental outcomes of effective mentoring practices. The group mapped these against contracted programs to determine which met the definition for youth mentoring services, and the extent to which they covered key elements and outcomes.
  3. Selected appropriate measures and scales. The work group reviewed national outcome metrics and scales from the NMRC to gauge their appropriateness and applicability to the DHHS mentoring context. Through multiple rounds of consensus voting, members agreed on a set of measures to include in future contracts that align with existing youth mentoring services. The selected measures span all seven NMRC outcome domains, such as social-emotional skills, because of their demonstrated potential to increase understanding of youth challenges and assets. The group also approved related scales provided by the center, such as questionnaires used to collect data on a given outcome.

The work group then produced the DHHS Guidance Document for Youth Mentoring Program Contracts, which, once adopted, will direct contractors to measure targeted outcomes using a standard set of tools. The guidance spells out five important elements for future youth mentoring program proposals to incorporate. They are:

  • A definition of “youth mentoring” that the services should reflect.
  • Standards of effective practice that these programs should incorporate, where applicable, such as specified mentor recruitment and training strategies.
  • A list of pre-approved outcome measures that program providers can choose from to track and report results, such as self-esteem and academic performance.
  • Recommended scales and tools to track approved outcome measures.
  • Guidance on the use of measures to monitor program performance.

Future contracts will need to include measurements for at least one of the pre-approved outcomes. To allow flexibility while fostering common measurement systems, DHHS will allow collection and reporting of additional measures not included in the approved list. That means providers can track other metrics on their programs that may be needed to support service improvement.

The department anticipates that the new uniform process for measuring outcomes will help policymakers compare program results more easily and better quantify community-level impact.

“Because we have so many providers, this allows a standardized approach. It makes it easier to talk about results systemically,” said Matthew Nice, former manager of planning, accountability, and customer service at the DHHS.

Written by Sara Dube and Priya Singh. Sara Dube is a director and Priya Singh is a senior associate with the Pew-MacArthur Results First Initiative.

Key Information

Publication Date
October 18, 2018

Resource Type
Written Briefs
