Make it Count! Client Service Evaluation with Jeff Couillard

Subscribe on Apple Podcasts or Spotify

Evaluation is often a pain point for those of us working in small nonprofits: it seems to involve so much work, with an unclear purpose for doing it in the first place. Jeff Couillard’s insights shed light on how we can measure what truly matters and how to use the data we collect to inform our work.

Don’t forget to become a supporter of our show!

Let's start by learning a little bit about your background and how you got to focus on evaluation work. 

So my background is actually not in the helping services, although I ended up there. I spent about a dozen years working in addiction and mental health treatment with youth at a residential drug treatment program here in Alberta. I came in without an addictions background, so I got to ask lots of fun questions like: what is addiction? What is addiction treatment? How do we know we're doing a good job? And I couldn't really get satisfactory answers. So from early days in the mid-2000s, as a youth worker, I was struggling with a lack of feedback about my impact.

Then I landed in a really interesting space in this wilderness therapy program. It was kind of like an island out in the middle of the woods, and we didn't get bothered very much. We got our funding, we got our clients kind of shipped to us, and we had a lot of license to experiment with our practice. One of the experiments we undertook was addressing the lack of data and feedback: real-time information from clients about their experience in treatment, so that we could adjust it and make it better.

And that's when I really fell in love with open evaluation, because I saw the impact it could have on a client’s journey when you take something that's sometimes as hard to measure as addiction or mental health, quantify it, and make it visible for people. It was hugely powerful for the clients, and powerful for the frontline staff.

One of the big problems with evaluation in this space is that it's not designed for clients or staff. It's designed for funders. It's designed for the marketing campaign. It's designed for the annual report: things that are important, but not meaningful to direct service providers. In a nutshell, being an outsider in the space I ended up in gave me a fresh set of eyes to look at it and say: we can measure some of these things, and if we do it this way, it'll have a significant impact. Then I got into leadership positions and had the authority to just go ahead and do it. So I did, and we really improved our practice, to the point where it became one of North America's leading wilderness addictions treatment programs, which is something I am very proud of.

 

I love how you talk about client-centered evaluation. So much of people’s hesitation is that evaluation feels like an obligation to an external third party, whereas if we do it right, it can mean we are having a greater impact in the world.

The first problem that I see in the sector is the mindset that we have around outcomes and evaluation. It's something that is often done to us, not done by us, and not done with us. It's something that is imposed on us, or it might even be a grad student doing their thesis research, coming in with their question set and the things they want to know. If we can take ownership of it, take control of it, and orient it towards direct service, it changes that entire mindset.

The second problem is the lack of meaningful outcomes and integrated systems. We're measuring either the wrong things or other people's priorities. We're measuring funders' priorities, and we're not incorporating that data. Like I said, it ends up in the annual report, and by the time the annual report gets generated, the data has no impact on the day-to-day. If we don't get feedback on a routine enough basis and make it meaningful, so that it makes a difference in the lives of staff and clients, then we're missing a huge opportunity to actually use that data.

 

How do we build the tools to be able to get to the heart of what it is we're trying to do and measure and improve on?

That’s the bulk of the work, and it involves teams of practitioners and whoever else is part of this process. It involves a robust conversation about what I call a program architecture, which is basically a one-pager or a flowchart that outlines your vision, the problem you're trying to solve in the world, what your values are, how you want this to be experienced by your clients, and what your practices are, and makes sure there's alignment and congruence between all of those pieces. We'll say we're participant-centered or client-centered, but when we actually look at our practice, it's a little more program-centered, or a little more staff-centered, or a little more funder-centered. So it's about getting really clear on vision, mission, and values. Out of that, I find you can naturally discover the kind of impact and changes that teams want to make.

 

Where should we start when considering what tools to use for evaluation?

Everybody has had the bad experience of having a tool handed to them or mandated to them that was misaligned or didn't measure what they wanted. The solution here is a combination of routine outcome monitoring and utilization-focused evaluation. The latter is a framework by Michael Quinn Patton built around evaluation with the sole intention of being useful to the people being served, nothing else. Full stop: is this useful or not useful?

With those two frameworks, routine outcome monitoring is where I generally nudge organizations to pick an outcome tool that's in the literature: something that has had research done on it with the clientele you're working with and has been validated. How we choose to use the tools matters as much as the tools themselves, so we need to make sure we're applying them properly, not just using them to extract data from the client.

And then with utilization-focused evaluation, what we end up doing is usually building custom program evaluation questionnaires that focus on the practices most relevant to that program. The problem for most nonprofits is usually that there are many different factors, and we don't know which one is most meaningful or least meaningful. Ultimately, the goal of all of this is to get to a place where we can make better decisions, where we can make our programs more efficient and more effective.

 

We have been talking about how important it is to have client-centered evaluation. But I'm a fundraiser, so I'm going to ask: even though it's not for our funders, and it's certainly not for external people, what use can we get from this information, and how can we turn it into something more external-facing?

For sure. And that's not to say funding can't be a focus, because funding is a place that reflects success and meaningful change. If you just throw data at people, it doesn't make any sense; they can't contextualize it, and you risk it being misinterpreted. But data plus story is a really powerful combination for donors. An example would be being able to quantify some of the impacts that the evaluation project has had. If you're a funder, and you can clearly see from the data that an organization you’ve funded just doubled the population it impacted in a year, that's pretty powerful!

 

Sometimes progress is not linear. It’s messy. How do you guide your clients when they are closely monitoring their outcomes but not seeing immediate progress?

That’s a really important mindset shift that informs how we approach the work and how our staff go about it. If we're routinely getting feedback, measuring and monitoring, we have to be okay with bad outcomes. In the short term, we have to be okay with people who come into treatment and get worse in the first couple of weeks, or don't improve at all. That's happening anyway, regardless of whether we're measuring it. In fact, when we measure it, it shouldn't be this huge shock; it usually just confirms what we already know.

But there can be a real aversion to the realization that we may not be as effective as we think we are, or that this piece of the program may not be as meaningful as we thought. So there's a lot of ownership and ego that we have to set aside and say: it doesn't matter what our intention is with this program; what matters is the impact, and now we're getting feedback about it. That's where returning data to the team, at a frequency that makes sense for them, is really important, and where leadership has to return it back to the staff. We developed a monthly and quarterly rhythm where we would return data, so month to month we could actually adjust programming. Staff would take the feedback and make adjustments for the next month that actually affected a client's journey while they were still there.

 

There's so much value to what you're doing. We often run into inertia when we think about evaluation. You've positioned this conversation so that, even if we don't take a step back and look at the metadata and all the big trends, we can use evaluation as a tool to make sure our work is impactful.

Yeah. I am going to share with you what I think are the three most essential ingredients of building really intentional and powerful organizations. First is for everybody to have a purpose: thinking about purpose as the problem we want to solve, something that unites our community of practitioners around work that's really meaningful. I think a lot of vision and purpose statements live on walls and aren't necessarily reflected fully in practice. So the second piece is how we frame that central problem for our staff to align with. We have to frame the problem so that it ignites a passion in staff to pursue excellence across the board and connects daily practice to vision and mission. The third and final ingredient is feedback. It's what we've been talking about: the ability to make sure that our impact lines up with our intentions, our purposes, and our values. If you have those three ingredients in your organization, you will do exceptional work. I've seen organizations jump to the top 1% in their field doing this kind of work, with very intentional conversations with their teams and by operationalizing the things they already care about. These are already the things that get us out of bed in the morning to go to work; when we can quantify them and bring them back into the center of our practice, we can do remarkable things.

 

Resources from this Episode

CharityVillage

The Doorway

Utilization-Focused Evaluation - Michael Quinn Patton

The National Registry of Evidence-Based Practice for Addictions and Mental Health

Google Dashboard

 

The Small Nonprofit is produced by Eloisa Jane Mariano

 
Maria

Maria leads the Further Together team. Maria came to Canada as a refugee at an early age. After being assisted by many charities, Maria devoted herself to working in the nonprofit sector.

Maria has over a decade of fundraising experience. She is a sought-after speaker on issues related to innovative stewardship, building relationships, and Community-Centric Fundraising. She has spoken at AFP ICON and Congress, for Imagine Canada, APRA, Xlerate, MNA, and more. She has been published nationally, and was a finalist for the national 2022 Charity Village Best Individual Fundraiser Award. Maria also hosts The Small Nonprofit podcast and sits on the Board of Living Wage Canada.

https://www.linkedin.com/in/mariario/