My Love Note to Funders about Outcomes

 

Dear funders,

This is so hard to tell you, but sometimes you just expect too much.

On the one hand, you want us to serve those who most need help. You tell us that the hardest to serve should be our target group. No creaming allowed. If we’re really good at what we do, we won’t be afraid to take the toughest clients:

– the chronically homeless with untreated mental illness;

– the long-term unemployed with no high school diploma or marketable skills; and

– the heroin-addicted mother whose children are living in foster care.

And so, because we know that these are the people who truly need our help and because we want to make our funders happy, we reach out to the people with the most serious problems. That’s when we remember: that’s why they’re called ‘hardest to serve.’

We just want to remind you, beloved funders, that ‘hardest to serve’ often translates into zeroes in the outcome column. People with complex, long-standing problems don’t seem to succeed on the ambitious timelines we set out for them in our grant proposals and program designs.

So what does this mean? It might mean that if we meet half our outcome goal, we are showing 100% more success for people than they would have had without us. It might mean that our results don’t tell the whole story about small increments of success in a person trying to find his or her way to a safe, productive life. It might mean that positive change is not a straight line; it zigzags and sometimes stops altogether for long periods.

We know that funding is all about outcomes, and that’s a good thing.  Expecting measurable results makes for better programs and greater accountability.

Just try to match your expectations about results to your desire to put your resources where they will do the most good.

Respectfully,

Your funded agency

 

 



Seeing is Believing

When evaluating a program or service, nothing beats a site visit.  Yes, it’s important to review the numbers, look at the logic model, quantify outcomes, and gather customer/client satisfaction data.  These fundamental sources of information are essential to painting the evaluation picture.  But the heart and soul of an evaluation comes from face-to-face meetings, observations, and ‘walking around’ a program.

I will be doing three site visits in September – three very different agencies in very different parts of Wisconsin, requiring a lot of travel and a lot of time.  So why not just interview people over the phone or do a ‘Go To Meeting’ virtual meeting?

Here’s the answer:  I can’t tell if there’s a ‘there’ there unless I go see.  Seriously, the ability of executive directors to describe their programs in glowing terms is legendary.  If so inclined, an enthusiastic executive director can turn tens of participants into hundreds and good outcomes into astonishing accomplishments; well, you get the idea.  If I’m evaluating a program, I need to make sure the program is operating as described, the participants are really present and engaged, and the outcomes are legitimate.

In my experience, these are the things that make for a great site visit:

1.  Genuine welcome:   This begins at the front door.  Do people know I’m coming?  Are they gracious and friendly?  Are the people I need to see available?  Does it appear that the evaluation site visit is a priority?

2.  Openness:  Do people appear to be sharing information freely?  Or are they guarded in what they share?  Does everyone in a group discussion speak or just the executive director?  Are people nervous about sharing or eager to tell their story?

3. Confidence and pride: Are people proud of their organization and happy to tell their story?  Are they willing to share war stories, to describe barriers or problems encountered and how they were overcome?

4. Inclusiveness: Does the executive director leap up to go find “Mary” who is the expert in a particular area or call in a client waiting at the front desk to relate his experience with the program?  In other words, does the executive director or program staff want to include others in explaining the program? 

5. Real Deal Feel:  When I leave, do I feel like I saw the real deal or a show staged for my benefit?  There’s no way to quantify this, but an experienced evaluator can sense an artificiality in the site visit that lets her/him know that the real program wasn’t shared (and may not actually exist).

These are the things I’ll be looking for in September as I travel around Wisconsin.  What about you?  Done evaluation site visits?  Been site visited?  What have been your experiences?  What can we learn from you?



Ask the Consultant: Evaluating a Program You Don’t Like

What do you do as an evaluator when you really don’t like or support the program approach you are evaluating; say, it’s something contrary to your principles or beliefs?

This question was asked by an Alverno University student when I and several evaluation colleagues were speaking to her class last week. One colleague recounted a major evaluation focused on a teen pregnancy prevention approach he couldn’t endorse.  I recalled instances where, in the course of an evaluation, I encountered agency practices with clients that made me uncomfortable, even angry.  We all agreed that this problem comes up a lot for evaluators since, being human beings, we often have very strong personal beliefs.

When this happens, though, there is an enormous risk of one’s personal beliefs influencing the objectivity of the evaluation.  This can happen in such subtle ways that even the evaluator isn’t aware that his/her biases are shading everything – the construction/selection of evaluation instruments, the content of interviews, and the interpretation of observed activity. While it is far better, and a lot more fun, for an evaluator to evaluate an approach he/she fundamentally endorses, the opposite situation comes up often.  When it does, a couple of strategies might be useful.

First, one of the evaluators on the panel reminded us all that every program deserves a decent evaluation, much as everyone accused of a crime is entitled to legal counsel.  Good thing to keep in mind.  Every program approach benefits from a thorough, well-conceived and well-implemented process and outcome evaluation.

Second, when an evaluator is put in a position of having to fairly evaluate a program approach he/she doesn’t like, the bottom line is sticking with the process.  This means evaluating a program based on its program design/logic model.  Period.  This means not letting alternative or more philosophically attractive approaches enter into the analysis as implicit or explicit points of comparison.  This is tough, but essential.

Third, the evaluator simply must keep her/his biases in check and be extra vigilant about avoiding any opportunities to go looking for evidence to support those biases.  Because an evaluator often has a lot of control over how success is defined and measured, this can be extremely challenging.  Basically, to do right by the evaluation, the evaluator has to put on and keep wearing the mantle of objectivity even when it chafes.

These are some ideas about handling this thorny situation.  In future blog posts, I’ll be tackling other questions that have been posed to me about planning, grantwriting, collaboration, and professional ethics.  If you have a question, let me know.  I’d be glad to take a crack at answering!



Fix the Right Problem

When something terrible happens, we want to do something to prevent a recurrence.  A baby dies while sleeping with his mother, and local officials and the public at large want to see a strategy that will keep such an awful thing from happening again.  The rate of HIV/AIDS increases among young gay African American men, and a new program targeted at this group emerges.  This effort to jump in quickly to try to prevent another accident, another death, and more community sorrow is laudable but flawed.  Here’s why.

We can spend a lot of time and money trying to solve the wrong problem.  The diagnostic process gets very abbreviated when a group of people wants to see action right away.  “I don’t care what we do,” I’ve heard more than once.  “We just need to have some action on this.  Send the community a message that we’re going to do something about it.”  No one wants another study group or task force, they’ll say.  “Let’s just get moving!”

My experience is that people hardly ever really know what needs to be done.  Faced with a disturbing community event or trend, say an 11-year-old waving a gun around on a local playground or the smoking rate among young adults suddenly jumping several percentage points, the leadership, including the content experts, will assume that they know a) the origins of the problem; and b) how to fix it.  Moreover, they will have a sense of certainty that will push all alternative explanations and ideas into a very small corner. This is a mistake. In order to solve a problem, we need to understand its origins.

For example, if we respond to the 11-year-old with the gun by implementing yet another violence prevention curriculum, will that prevent other kids from bringing weapons to school?  No, it won’t, unless we spend the time figuring out why kids think it’s a good idea to bring a gun to school.  First of all, why is there a gun at home where the child can reach it?  Second, what are this child’s thoughts, and most children’s thoughts, when they bring guns to school?  Do they want to impress, joke around, scare somebody?  Are they being bullied?  (This is our very favorite explanation now.)  Are they the bully?  Is the point of intervention the child?  Or is it the parent?  If we ramp up the violence prevention curriculum and there is still a gun lying on the dresser at home, have we changed this child’s mindset?  I don’t know.

It is very possible to have wonderful programs with great outcomes that have little or no effect on a community problem.  It happens all the time.  It happens because program designers, funders, and implementers are often too sure of themselves and their solutions.  Even an evidence-based approach is no guarantee that a program will have an impact on the community, even if the program’s participants have positive outcomes.  Returning to our gun example: after a violence prevention curriculum, 80% of students thought it was a bad idea to bring a gun to school. Is this success?  Community change?  Not if the young person is having this positive thought while gunshots are being heard down the street.

The tricky thing about program design – deciding what to do – is that it requires time, patience, diligence, and courage.  New questions need to be asked of different types of people living in different neighborhoods and having different reasons for what they do and think.  By assuming we know what to do and how to do it, we sacrifice real impact for speed and the illusion of change.  Time to try a different approach.



Evaluation: Truth or Dare?

“The trouble with facts is that there are so many of them.” (Anon.)

It isn’t really true that numbers don’t lie.  Nor is the opposite true: not every report full of numbers is intended to bamboozle you. But sometimes it is.

Recently, I used the local decision to grant charter school status to Rocketship Education, a California-based enterprise that has reported amazing academic results, as a teaching tool in my evaluation workshop.  Rocketship wanted to establish itself in Milwaukee by providing educational programming in several low-achieving MPS schools.

I distributed to workshop participants an opinion piece written by Milwaukee School Board member Larry Miller (http://millermps.wordpress.com/2011/10/27/journal-sentinel-op-ed-rocketship-charter-schools-need-scrutiny/) that urged the Milwaukee Common Council to delay a quick vote on the charter and look more closely at Rocketship’s evaluation data.

In the very first activity of the evaluation workshop, participants zoomed in on a number of issues, most notably, the schools’ high attrition rate and the low number of students with special educational needs.  They were convinced – there was no way the Milwaukee Common Council would approve a charter for Rocketship to operate in Milwaukee without more information.

Oh really.  The Council approved the charter with only one dissenting vote, cast by an alderperson who suggested that more analysis needed to be done in light of Mr. Miller’s critique of Rocketship’s evaluation.

A classic case of “My mind’s made up.  Don’t confuse me with the facts”?  I don’t know.  It was, however, a perfect lesson in program evaluation – how policymakers’ desire to do something meaningful fast can sometimes mean giving the shortest shrift ever to the facts.  Rocketship’s got a cure for MPS?  Great, let’s not waste any time dickering about the numbers.

We have the capacity in Milwaukee to do a lot more sophisticated scrutiny of proposals like this – a couple of major universities, a lot of public interest research organizations.  We have the ability to compare and contrast, study and analyze, and choose based on good evaluation. 

I think that before we grab hold of the life preserver tossed to us from the new boat, it’d be nice to check out whether it actually floats.

