The CARES UP Model
Step Seven - Look UP

How will leadership know if the program they are building can be sustained over time, or whether it has made progress? The purpose of this step of the model is to measure the impacts of implementation efforts. It is essential to measure progress on a consistent and ongoing basis. Many times, we are busy getting tasks accomplished and forget to stop at regular intervals, look up, and measure our success.
Here, we Look UP and look back at all the steps of the model, take inventory, and review our goals:
- Step UP: Are we doing what we set out to do in our call to action?
- Build UP: Are we following the suggestions of the model in Build UP, growing our infrastructure and promoting a culture of mental health wellness?
- Check UP: How are we incorporating staff feedback? How has our team progressed in addressing the goals outlined in our Action Plan?
- Train UP: Reviewing our training logs and progress, have we chosen our mental health and wellness training, how many staff have completed it, and are we on our way with resiliency trainings?
- Pair UP: How effective is our peer support model, and how are we measuring this? How many connections have we made with community behavioral health providers, and what is the feedback from those who have utilized these resources?
- Talk It UP: Are staff aware of the CARES UP initiative? How many resource flyers have been shared or posted? Have we worked with local media to highlight implementation successes or written a success story of our own? Have we begun to measure whether our communication plan is effective?
Evaluation of New York State’s implementation of the CARES UP model is ongoing, and the program is refined as we learn the best ways to support the wellness of the state’s uniformed personnel. Below are some points to consider for those developing their own CARES UP programming.
Resiliency:
One of the goals of the CARES UP model is to enhance resiliency among uniformed personnel. Resiliency is a multifaceted concept, and there are several different approaches to measuring it. One approach emphasizes positive adaptation following difficult experiences1 and provides a helpful way to organize resiliency into three categories: attributes/resources, process, and outcome.
These are summarized in the table below:
Table X: Three Aspects of Resiliency

Each aspect may be relevant to the CARES UP model. For example, measures of processes may be useful because these may be targets for training. Measures of outcomes may be useful for evaluating results of a program. Measures of attributes/resources may be useful to the extent that they can be changed and/or occur at the organizational level (e.g., social support from co-workers). Measures have also been developed specifically for the workplace, although these are not necessarily focused on uniformed personnel. One measure, the Response to Stressful Experiences Scale, which has a shortened four-item version, was developed for military personnel and has been used with first responders.2,3
For evaluation of the statewide CARES UP implementation, the project team chose to use the Brief Resilience Scale (BRS). The BRS measures the “outcome” aspect of resilience, and it is widely used, brief, and in the public domain. The BRS consists of six items rated on a five-point scale: strongly disagree, disagree, neutral, agree, and strongly agree. Items marked (R) below are reverse-scored. The items are as follows:
- I tend to bounce back quickly after hard times
- I have a hard time making it through stressful events (R)
- It does not take me long to recover from a stressful event
- It is hard for me to snap back when something bad happens (R)
- I usually come through difficult times with little trouble
- I tend to take a long time to get over set-backs in my life (R)
The BRS or another resilience scale may be included in surveys of personnel conducted before and after implementing the CARES UP model (and more frequently if desired). This can help assess whether the program is affecting resiliency levels of the staff as a whole or among subgroups of interest.
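To make the scoring concrete: the total BRS score is computed by reverse-coding the three (R) items and averaging all six responses, yielding a score between 1 and 5. Below is a minimal scoring sketch in Python, assuming responses are coded numerically (strongly disagree = 1 through strongly agree = 5); the function name and coding are illustrative, not part of any official scoring tool.

```python
# Minimal BRS scoring sketch. Assumes responses are coded 1-5
# (strongly disagree = 1 ... strongly agree = 5). Items 2, 4, and 6
# are the reverse-worded "(R)" items and are flipped before averaging.

REVERSE_ITEMS = {2, 4, 6}  # 1-based positions of the (R) items

def score_brs(responses):
    """Return the mean BRS score (1-5) for one respondent.

    `responses` is a list of six integers, in the item order above.
    """
    if len(responses) != 6:
        raise ValueError("The BRS has exactly six items.")
    adjusted = [
        (6 - r) if i in REVERSE_ITEMS else r
        for i, r in enumerate(responses, start=1)
    ]
    return sum(adjusted) / len(adjusted)

# Example: "Agree" (4) on the positive items and "Disagree" (2) on the
# (R) items yields a score of 4.0 out of 5.
print(score_brs([4, 2, 4, 2, 4, 2]))  # -> 4.0
```

Scoring each respondent this way also produces the summary statistics (mean, median) reported in the example data below.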
Impact Evaluation:
The table below summarizes some outcomes that may be useful to measure as part of an evaluation. Organizations may choose to tailor these outcomes to their individual needs and capacities. Some ways to collect data are:
- Conduct surveys of staff and/or leadership
- Conduct structured interviews with key personnel
- Hold informal discussions
- Review administrative data
Free online survey development and analysis tools (for example, SurveyMonkey or Poll Everywhere) may be helpful for measuring resiliency, attitudes, and other constructs while maintaining staff anonymity.
Table X: Example Outcomes and Measurement Approaches for Evaluation

Example Data:
New York State’s CARES UP program included a baseline survey of staff members. The survey covered several concepts, including resiliency, attitudes, and awareness of current wellness resources. Staff will complete the survey again at the end of the program to assess change.
As part of the baseline survey, respondents answered questions about different components of wellness. They were asked which components they thought were essential to their field, which their agency already offered, and which they would like to see expanded. The chart below summarizes the responses to these questions; the height of each bar indicates the proportion of respondents who checked each item.
The components that were identified as essential by the most people were physical wellness (including physical fitness, sleep, and nutrition) and stress management. Next were peer support, suicide prevention, and family wellness. Physical wellness, stress management, and family wellness were also identified as needing expansion.
For most items, the number of people who said they would like to expand programming was greater than the number who indicated that they already had such programming, suggesting that, in general, people felt these needs were not being met. This helps to identify areas with the greatest perceived need. A large number of people were also unsure what components their agency offered. Collecting this information can help leadership design programs that fill staff needs.
Figure X: Example Baseline Data on Staff Views of Components to Wellness

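For readers producing a similar chart from their own survey export, proportions like those shown above can be tallied in a few lines. The sketch below is illustrative only: the DataFrame layout and column names are assumptions about how a survey tool might export checkbox (select-all-that-apply) data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per respondent, one 0/1 indicator
# column per checkbox item. Real exports will vary by survey tool.
df = pd.DataFrame({
    "essential_physical_wellness": [1, 1, 0, 1],
    "essential_stress_management": [1, 0, 1, 1],
    "essential_peer_support":      [0, 1, 1, 0],
})

# The mean of a 0/1 indicator column is the proportion of
# respondents who checked that item.
proportions = df.mean().sort_values(ascending=False)
print(proportions)

# A simple bar chart in the style of the figure above.
proportions.plot.bar()
plt.ylabel("Proportion of respondents")
plt.tight_layout()
plt.show()
```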
The BRS was also included in the baseline survey to assess current levels of resilience and serve as a comparison for evaluating the impact of the program. Total resiliency scores were calculated as described above. The distribution of scores is shown in the plot below.
The horizontal axis shows the range of scores, and the height of the bars indicates the number of respondents with scores in the corresponding range. We can see from this plot that the scores were generally high. Many people had scores of 4 out of 5, meaning that on average they selected “Agree” for most positively worded items (e.g., “I usually come through difficult times with little trouble”) and “Disagree” for most negatively worded items (e.g., “It is hard for me to snap back when something bad happens”). The average score was 3.71, and half the respondents had scores above 3.83, also attesting to the generally high level of perceived resilience.
Figure X: Example Baseline Data on Levels of Staff Resiliency

Measuring resilience may help agencies address causes of lower resilience among staff. For example, an agency may find that resilience is lower among staff who work certain shifts, and may then consider ways its programming can better support those workers (see the sketch below for one way to compare scores across groups). Consider ahead of time which factors you deem significant so that you can include them in the survey. Designing the survey to be anonymous may increase participation and encourage more honest answers. Measure important factors again in the follow-up survey to see whether resilience improved for those groups.
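As one way to run that kind of subgroup comparison, the sketch below groups BRS scores by shift. It assumes survey results have been loaded into a pandas DataFrame with a hypothetical `shift` column and a `brs_score` column computed as in the earlier scoring example.

```python
import pandas as pd

# Hypothetical survey data: one row per respondent, with an assumed
# `shift` column and the BRS score computed as shown earlier.
df = pd.DataFrame({
    "shift":     ["day", "day", "night", "night", "overnight"],
    "brs_score": [4.2,   3.8,   3.1,     3.4,     2.9],
})

# Mean BRS score and respondent count per shift; lower-scoring groups
# may point to where programming could better support staff.
summary = df.groupby("shift")["brs_score"].agg(["mean", "count"])
print(summary.sort_values("mean"))
```

Repeating the same comparison on the follow-up survey shows whether scores improved for the groups of interest.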
Ongoing Evaluation:
Continuous quality improvement is essential to any evaluation and improvement initiative. It is important to Look UP throughout the course of a program to assess how it is doing and identify opportunities for improvement. It is recommended to check in at least annually, if not more often, to identify changes in outcomes of interest (an example table of outcomes is shown above). You can do so by repeating the assessment tools you used at the beginning, in the Check UP step, such as the member support survey.
Our suggestion is to conduct a follow-up at the two-year mark of implementation. The results can show progress and areas where growth is needed, and any necessary changes identified can be addressed through adjustments to the model. You can then follow the same process to re-assess and confirm that the adjustments were effective. The cycle continues as you carry the CARES UP model forward.
Some outcomes, like changes in behavior, may take longer to see. Ongoing evaluation can provide valuable information about satisfaction, acceptability, and perceived benefit of the program. It will be important to assess staff perceptions of training modules, for example, so that they can be modified as needed. Modules with built-in post-training surveys can be helpful. You may also want to consider informal check-ins with key individuals who are likely to be in touch with common staff opinions.
Remember that many of the protective factors targeted by the CARES UP model, such as aspects of workplace culture, attitudes, and personal attributes, may be deeply entrenched and slow to change. This is a marathon, not a sprint, and the fruit of your efforts may not be immediately visible. One recommendation is to select outcome indicators that are SMART: specific, measurable, achievable, relevant, and time-bound. Evaluating the achievement of smaller, more specific steps on the way toward your long-term goals will make your progress more evident and provide more information with which to fine-tune your model to suit your agency’s needs.
The Centers for Disease Control and Prevention (CDC) has a website devoted to program evaluation, including an introductory guide that agencies may find helpful in assessing the impact of their CARES UP models.
Contact Us:
Our team wishes you the best as you introduce this Model to your agency. You are welcome to reach out to CARESUP@omh.ny.gov with questions. Please also join the CARES UP mailing list for the latest news and resources.
References:
1. Fisher, D. M., & Law, R. D. (2021). How to choose a measure of resilience: An organizing framework for resilience measurement. Applied Psychology, 70(2), 643–673.
2. De La Rosa, G. M., Webb-Murphy, J. A., & Johnston, S. L. (2016). Development and validation of a brief measure of psychological resilience: An adaptation of the Response to Stressful Experiences Scale. Military Medicine, 181(3), 202–208.
3. Ponder, W. N., Prosek, E. A., & Sherrill, T. (2021). Validation of the adapted Response to Stressful Experiences Scale (RSES-4) among first responders. The Professional Counselor, 11(3), 300–312.