Evaluation

Cold Case: Having to Construct an Evaluation after the Fact

Cold case detectives aren’t just on TV. Some of them are called evaluators – experts brought in to help a project complete an outcome evaluation after a program has already been designed and implemented. In the worst situation, a cold case evaluator is asked to complete an evaluation with no data, or bad data. Frequently, time is short, a funding source is demanding a final evaluation report, and program staff are uninterested and maybe even antagonistic about having an evaluator look at their outcomes.

As a consultant who has been in this situation more than once, I have this to say: You would be amazed at what passes for data collection in many programs – hand-signed attendance sheets, ginned-up pre- and post-tests, and anecdotes galore. Interesting material, often, but not the stuff of decent evaluations.

What do you do when you’re asked to evaluate a program that is nearing the end of its funding period and has no solid evaluation system in place? Here are some ideas gleaned from my own experience as a cold case evaluator.

#1: Enlist program staff in your cause.

A quick way to guarantee that you will never get any data with which to evaluate the program is to alienate the program staff. If they feel you are judging them or taking a superior attitude because you’re in the evaluator position, they will make your job harder. Instead of tsk-tsking your way around, make program staff your partners in telling the program’s story in the most accurate way possible.

#2: Use what you have.

Is there any program data? Separate the wheat from the chaff and use it. Are program participants still engaged? Develop a retrospective survey instrument to gather their insights about program impact. Is there a staff person who has been involved with the program from the beginning? Ask her/him a thousand questions. You may find out there’s more data lying around than anyone knew – they didn’t tell you because they didn’t think it was important. Moreover, an evaluation hampered by a lack of decent data can be greatly enhanced by attention to good process evaluation; telling the program’s story through the views of informed observers can also give insight into why an outcome evaluation was so difficult to establish.
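If participants can still be reached, even a simple retrospective pre/post survey can be tallied quickly. Here is a minimal sketch in Python, assuming participants rate themselves on a 1–5 scale both ‘before the program’ and ‘now’; the responses below are invented for illustration.

```python
# Minimal sketch: tallying a retrospective pre/post survey.
# Assumes each participant rated a skill 1-5 "before the program"
# and "now" -- both questions asked after the fact.
# The responses below are invented for illustration.

from statistics import mean

responses = [
    {"before": 2, "after": 4},
    {"before": 1, "after": 3},
    {"before": 3, "after": 4},
    {"before": 2, "after": 2},
]

before_avg = mean(r["before"] for r in responses)
after_avg = mean(r["after"] for r in responses)
improved = sum(1 for r in responses if r["after"] > r["before"])

print(f"Average self-rating: {before_avg:.1f} before, {after_avg:.1f} after")
print(f"{improved} of {len(responses)} participants reported improvement")
```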

#3: Create a beautiful product.

Present whatever data you have in a clear, readable format. Use graphs and charts whenever you can. Compare the program’s results to those of other, similar programs. Bulk up the content with the insights of program staff and vignettes about representative participants. Include a carefully crafted and objectively stated list of ‘areas to consider for further development.’ In this list, be sure to include the need to design the outcome evaluation when the program is designed and to establish good data collection protocols from the beginning. Frame this as a going-forward recommendation, not as a criticism; by now, program staff know they missed the boat on designing an outcome evaluation, and there’s no need to rub it in. Last, make sure the evaluation report looks good. I work with a professional graphic designer on all my products; it’s money well spent.
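Even without a graphic designer, a basic chart beats a table of numbers. Here is a minimal sketch using Python’s matplotlib; the program names and completion rates are invented placeholders, not data from any real evaluation.

```python
# Minimal sketch: charting a program's results against similar programs.
# All names and figures below are invented placeholders.
# Requires matplotlib (pip install matplotlib).

import matplotlib.pyplot as plt

programs = ["This program", "Similar program A", "Similar program B"]
completion_rates = [68, 61, 55]  # percent of participants completing

fig, ax = plt.subplots()
ax.bar(programs, completion_rates)
ax.set_ylabel("Completion rate (%)")
ax.set_title("Completion compared with similar programs")
fig.savefig("completion_comparison.png", dpi=150)
```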

There are important things to be learned from every program’s implementation. Sometimes, we can’t measure all of them but often we can know more than we think if we are patient, professional, and persistent, just like a good cold case detective.

 


Solving the Riddle of Project Sustainability

Every grant application asks how you will sustain the proposed project after grant funding has ended. Nearly all funders see their role as launching new ideas, supporting pilot programs, and encouraging system change. For that reason, most funding sources limit their support to three to five years; after that, they expect other sources of support to be found. That’s why a grant application asks you to describe your strategy for sustainability.

Do you have one?

Right now, I’m working on a sustainability plan for an innovative program that is two years into a three-year federal grant. This has caused me to think hard about what needs to go into sustainability planning. Here are my thoughts:

Start early: There’s a reason why funding sources ask you to talk about sustainability in the grant application – that’s when you should be thinking about it! Last-minute sustainability planning equals panic, and panic is not productive.

Engage good partners: This also should be done early. Bringing in key partners at the beginning ensures their input in program design and operation and gives them time to think about their own organization’s role in sustainability. If each key partner can see how the project benefits his/her organization, their contribution to sustainability will be enhanced.

Operate a good program: Self-evident, maybe, but you’d be surprised how many projects have slow start-ups, heavy staff turnover, poor recruitment, and other impediments to showing impressive results. Unless a program has good outcomes that point to even better success ahead, sustainability is practically impossible.

Determine what’s worth sustaining: Not every program component will make the cut. It’s important to take a critical eye to the program, think objectively about what’s working well and what isn’t, and consider program modifications or even redesign to strengthen winning components.

Develop a compelling case statement: This has two ingredients: 1) an analysis of participant outcomes that demonstrates that people do better in this program than without it; and 2) an analysis of costs associated with the program as compared to business as usual. You want to have a strong answer to potential funders’ question: Is this program better than what we are currently doing?
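The cost half of the case statement is simple arithmetic, but it is worth writing down explicitly. Here is a minimal sketch of a cost-per-successful-outcome comparison; all the figures are invented placeholders, not data from any real program.

```python
# Minimal sketch: comparing cost per successful outcome between
# the program and "business as usual". All figures are invented.

def cost_per_success(total_cost: float, participants: int, success_rate: float) -> float:
    """Total cost divided by the number of participants who succeed."""
    return total_cost / (participants * success_rate)

program = cost_per_success(total_cost=250_000, participants=100, success_rate=0.70)
usual = cost_per_success(total_cost=180_000, participants=100, success_rate=0.40)

print(f"Program: ${program:,.0f} per successful outcome")
print(f"Business as usual: ${usual:,.0f} per successful outcome")
```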

Find the connectivity: Among your partners, who benefits most from the program? In the broader community – government, human service systems, foundations – who stands to benefit from the results your program is providing? Finding these connections and weaving them together into a network of interest and support for the project is critical.

Educate: There are many ways to educate and a project focused on sustainability needs to employ them all. Having good program materials, using print and social media, making presentations to conferences and groups of foundations, and seeking opportunities to educate the broader community about the project are all critical sustainability steps. Every member of a collaborative effort should be able to educate others about the project.

Connect the resource dots: Sustainability may be the result of new funding, realignment of existing funding, increased in-kind resources, greater use of volunteers, institution of a fee structure, or all of the above and more. What is clear from experience is that a single funding source is unlikely to be the savior for a program; there needs to be a network of support if long-term sustainability is to be achieved.

Project sustainability is a tough question, and without careful thought and planning, a great project can evaporate at the end of its initial funding. The time to start planning is now.



Was It Worth It? How to Create Metrics for Events

Events are terrific.  If you’ve worked for a nonprofit organization, you have probably been involved in planning, staffing, or cleaning up after an event.  It could be a neighborhood clean-up or a back-to-school fair.  An event can be a promotion for a new program or a way to identify potential clients.

When I managed Community Involvement at the Social Development Commission, we put on a slew of events and gave away thousands of hot dogs, along with neighborhood swag like tote bags and refrigerator magnets with community service phone numbers on them. One communitywide planning event had an auditorium full of neighborhood residents doing a conga line through the aisles to the beat of an Indian drum. This made us all feel terrific.

But what did it really mean?  Most of the time, event organizers/sponsors use three metrics to decide if an event was worth the investment: 1) number of participants; 2) number of problems with the event; and 3) how happy we feel.  An event that fills the room, doesn’t have a catastrophe like running out of food, and leaves us humming while we clean up is an absolute success.

Is it possible to do a better, more substantive evaluation of an event?  Absolutely!

Here are some ideas to consider:

1.  Survey participants.  Yes, I know.  No one wants to interrupt the Kumbaya moment with a clipboard and a checklist.  But a quick postcard survey with 3 to 5 questions can provide actual data about what participants thought was valuable, what other information or resources they might like, and what potential impact the event will have on their lives (a tallying sketch follows this list).  The West Allis Health Department conducts an annual event called Two for the Show, a developmental screening with various ‘stations’ to assess toddlers’ speech, large and small muscle development, and other developmental milestones.  This is one of the Health Department’s primary ways of identifying children in need of Birth to Three services, so it is able to track identification of children with developmental challenges as they show up in the Birth to Three program.  Over and above that, however, the Health Department surveys each Two for the Show parent.  Very smart strategy – it makes funders happy and helps shape the next event.

2.  Service utilization.  This is a variation on ‘tell them Fred sent you.’  Since many events, like the ones we used to hold at SDC, are geared toward encouraging enrollment in programs like Head Start or energy assistance, it is very helpful to connect participants’ attendance at the event with their eventual enrollment in a program (a second sketch after this list shows one way to do this).  This can be as simple as handing someone a card, asking them to mention the event when they call the program, and offering some benefit for doing so – expedited enrollment, say, or a small premium like a McDonald’s gift certificate.  Anything that helps you as the event organizer connect your event to later program participation is a big plus as you seek support for next year’s effort.

3.  Tracking what happens next.  Events are often the vehicle for addressing a community need or problem.  Generally, the event creates several work groups in the hope that when people leave they are willing to work on specific tasks in order to achieve an agreed-upon goal.  Very often, big community organizing events are unable to translate into dynamic, robust work groups, and the energy and promise of the event just dissipates.  Vision Sherman Park, a tremendously inspirational community planning event that brought together observant Jewish, African American, and White residents for a day of planning and dialogue, had a less vibrant transition to work groups.  That experience taught me that assessment of the follow-up is critical.  What happened afterward?  Who stayed involved?  Who didn’t?  What can be improved next time?
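As promised above, here is a minimal sketch of how the postcard survey from idea #1 might be tallied. The questions and responses are invented for illustration.

```python
# Minimal sketch: tallying a short postcard survey from an event.
# The questions and responses below are invented for illustration.

from collections import Counter

questions = ["most_valuable", "wants_more_info", "likely_to_follow_up"]

responses = [
    {"most_valuable": "screening", "wants_more_info": "yes", "likely_to_follow_up": "yes"},
    {"most_valuable": "resource table", "wants_more_info": "no", "likely_to_follow_up": "maybe"},
    {"most_valuable": "screening", "wants_more_info": "yes", "likely_to_follow_up": "yes"},
]

for question in questions:
    counts = Counter(r[question] for r in responses)
    print(f"{question}: {dict(counts)}")
```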
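And for idea #2, here is a minimal sketch of connecting event attendance to later enrollment, assuming enrollment records capture a referral question like ‘how did you hear about us?’.  All names and records are invented.

```python
# Minimal sketch: linking event attendance to later program enrollment
# via a referral question. All names and records below are invented.

event_attendees = {"smith.j", "garcia.m", "lee.k", "jones.t"}

# Enrollment records that captured "how did you hear about us?"
enrollments = [
    {"id": "garcia.m", "referral": "back-to-school fair"},
    {"id": "lee.k", "referral": "radio ad"},
    {"id": "chen.a", "referral": "back-to-school fair"},
]

from_event = [
    e for e in enrollments
    if e["id"] in event_attendees and e["referral"] == "back-to-school fair"
]

rate = len(from_event) / len(event_attendees)
print(f"{len(from_event)} of {len(event_attendees)} attendees later enrolled ({rate:.0%})")
```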

Ultimately, it’s all about a critical eye that looks beyond the momentary happiness of a ‘successful’ event to examine its true value and impact.  It’s not difficult, but it takes planning and commitment.  When your next event rolls around, try to take a harder look at the issue of metrics.  I think it will pay off for you.

 


Seeing is Believing

When evaluating a program or service, nothing beats a site visit.  Yes, it’s important to review the numbers, look at the logic model, quantify outcomes, and gather customer/client satisfaction data.  These fundamental sources of information are essential to painting the evaluation picture.  But the heart and soul of an evaluation comes from face-to-face meetings, observations, and ‘walking around’ a program.

I will be doing three site visits in September – three very different agencies in very different parts of Wisconsin, requiring a lot of travel and a lot of time.  So why not just interview people over the phone or hold a virtual meeting on GoToMeeting?

Here’s the answer:  I can’t tell if there’s a ‘there’ there unless I go see.  Seriously, the ability of executive directors to describe their programs in glowing terms is legendary.  If so inclined, an enthusiastic executive director can turn tens of participants into hundreds and good outcomes into astonishing accomplishments – well, you get the idea.  If I’m evaluating a program, I need to make sure the program is operating as described, the participants are really present and engaged, and the outcomes are legitimate.

In my experience, these are the things that make for a great site visit:

1.  Genuine welcome:   This begins at the front door.  Do people know I’m coming?  Are they gracious and friendly?  Are the people I need to see available?  Does it appear that the evaluation site visit is a priority?

2.  Openness:  Do people appear to be sharing information freely?  Or are they guarded in what they share?  Does everyone in a group discussion speak or just the executive director?  Are people nervous about sharing or eager to tell their story?

3. Confidence and pride: Are people proud of their organization and happy to tell their story?  Are they willing to share war stories, to describe barriers or problems encountered and how they were overcome?

4. Inclusiveness: Does the executive director leap up to go find “Mary,” who is the expert in a particular area, or call in a client waiting at the front desk to relate his experience with the program?  In other words, do the executive director and program staff want to include others in explaining the program?

5. Real Deal Feel:  When I leave, do I feel like I saw the real deal or a show staged for my benefit?  There’s no way to quantify this, but an experienced evaluator can sense an artificiality in the site visit that lets her/him know that the real program wasn’t shared (and may not actually exist).

These are the things I’ll be looking for in September as I travel around Wisconsin.  What about you?  Done evaluation site visits?  Been site visited?  What have been your experiences?  What can we learn from you?


Ask the Consultant: Evaluating a Program You Don’t Like

What do you do as an evaluator when you really don’t like or support the program approach you are evaluating; say, it’s something contrary to your principles or beliefs?

This question was posed by an Alverno University student to me and several evaluation colleagues who were speaking to her class last week. One colleague recounted a major evaluation focused on a teen pregnancy prevention approach he couldn’t endorse.  I recalled instances where, in the course of an evaluation, I encountered agency practices with clients that made me uncomfortable, even angry.  We all agreed that this problem comes up a lot for evaluators since, being human beings, we often have very strong personal beliefs.

When this happens, though, there is an enormous risk of one’s personal beliefs influencing the objectivity of the evaluation.  This can happen in such subtle ways that even the evaluator isn’t aware that his/her biases are shading everything – the construction and selection of evaluation instruments, the content of interviews, and the interpretation of observed activity.  While it is far better, and a lot more fun, for an evaluator to evaluate an approach he/she fundamentally endorses, the opposite situation is common.  When you find yourself in it, a couple of strategies might be useful.

First, one of the evaluators on the panel reminded us all that every program deserves a decent evaluation, much as everyone accused of a crime is entitled to legal counsel.  That’s a good thing to keep in mind.  Every program approach benefits from a thorough, well-conceived, and well-implemented process and outcome evaluation.

Second, when an evaluator is put in the position of having to fairly evaluate a program approach he/she doesn’t like, the bottom line is sticking with the process.  This means evaluating the program based on its own program design and logic model.  Period.  It means not letting alternative or more philosophically attractive approaches enter into the analysis as implicit or explicit points of comparison.  This is tough, but essential.

Third, the evaluator simply must keep her/his biases in check and be extra vigilant about avoiding any opportunities to go looking for evidence to support those biases.  Because an evaluator often has a lot of control over how success is defined and measured, this can be extremely challenging.  Basically, to do right by the evaluation, the evaluator has to put on and keep wearing the mantle of objectivity even when it chafes.

These are some ideas about handling this thorny situation.  In future blog posts, I’ll be tackling other questions that have been posed to me about planning, grantwriting, collaboration, and professional ethics.  If you have a question, let me know.  I’d be glad to take a crack at answering!