The Art of Writing Evaluation Questions

It seems so simple – just ask a question! But many new evaluators or side-of-desk evaluators struggle with confidence in creating good evaluation questions. Here are a few tips to accompany some content we’ve already shared (How To Write Good Evaluation Questions; Evaluation Question Examples by Type of Evaluation; Evaluation Question Examples). Then, I’ll share an example of how I go from a client meeting to drafting evaluation questions.


Structure Tips

Evaluation questions often have similar lead-ins, that is, the starting part of the question. Evaluation questions are intended to elicit a narrative response, not a simple yes/no. Therefore, the question (usually) shouldn’t be “Are clients satisfied?” or “Did the program reach the target?” These can be answered in one word: yes or no. Usually, evaluation questions are open-ended questions that leave room for context, exploration, or explanation.

Try starting your evaluation questions with:

  • “To what extent….”

This one is a favourite. Starting your question with “To what extent” leaves room for a range of responses, and it often addresses program effectiveness. The end of that question could be an outcome statement, e.g., To what extent did the program provide equitable access to housing services?

  •  “Why….”

Why questions can help a program to understand the results it is getting. They can explore processes that usually aren’t documented, e.g., Why are clients choosing this program over that program?

  • “How….”

How questions are excellent for process or formative evaluation. How questions help a program to understand what works in what context and can identify enablers or barriers, e.g., How do clients learn about our services?

  •  “In what ways…”

In what ways questions can be used when there is a specific feature that you want to explore, e.g., In what ways did self-referral impact program outcomes?

  •  “What…”, as in:

    • “What contribution…”

    • “What impact…”

    • “What factors…”

What questions can also explore specific features, e.g., What impact did the email campaign have on client access?

Or, what questions can help to identify barriers or enablers, e.g., What factors contribute to client success rates?


What about Who?

In my opinion, questions that start with “Who” are rarely evaluation questions. Having said that, I often include them in my evaluation plan. I do this because it clearly and transparently shows clients that in addition to answering their key evaluation questions, I’ll also provide descriptions or profiles of who is accessing their service. Where ethical and possible, I’ll use the “who” information to further explore answers to the evaluation questions, e.g., How did satisfaction vary by demographics?

Sometimes I’ve seen these referred to as “Descriptive Evaluation Questions”, and I think they’re important.


Client Requests

Despite my argument earlier that yes/no questions are generally not great evaluation questions, sometimes I do include them in my evaluation plan, often as sub-questions under a key evaluation question. I do this to show clients that I intend to answer their burning questions, which may be yes/no.

For example:

A key evaluation question may be:

“To what extent were clients satisfied with the service?”

And then underneath that, I’ll include:

“Did the program reach its target of at least 80% client satisfaction?”

You’ll see what I mean about a key evaluation question and sub-questions in my example below.

However, I have found that, with a little wordsmithing magic, many yes/no questions can be made stronger using the lead-ins described above.

For example:

  • Did the program stay within budget? → How well did the program align with the budget?

  • Are clients satisfied with the program? → To what extent are clients satisfied with the program?

  • Did changing the intake process impact outcomes? → In what ways did changing the intake process impact outcomes?


From Client Meetings to Evaluation Questions

Now that we have some structure around evaluation questions, let me share a real example. At the start of each new client contract, I hold a kick-off meeting (here are some great resources for that: Evaluation Kick-Off Meeting Agenda (Template); How to Kick Off Your Evaluation Kick-Off Meeting). A primary goal of that meeting is to get me started on drafting evaluation questions.

You can try asking your clients directly, “What are your evaluation questions?”, but in my experience, you’ll be left with blank stares. Clients (usually) aren’t evaluators. So instead of the direct pathway, I facilitate discussions about what questions they have about their program, what they want to learn, and what they (might) do with those learnings. Evaluation questions will often flow from this understanding.

In this example, I was working with a new client to evaluate the outcomes of a mental health program for youth. After a quick orientation to evaluation that we call “The Fastest Evaluation 101 Ever”, where I frame evaluation questions as the roadmap for where the evaluation goes, I moved into a discussion and transitioned from capacity builder to listener.

Before the meeting, I figure out what I need to know and, therefore, what questions I’ll ask to generate discussion. This is tailored to each client and depends on what I already know, usually from documents they’ve already shared with me. Here are some guiding questions I use that enable me to hear from the client:

1.     Why am I here? What do you want evaluated and why? What are your expectations?

This drives at purpose and scope. Sometimes I link this discussion to content in my evaluation orientation. This is why that orientation (The Fastest Evaluation 101 Ever) is so useful; now they know that evaluations can be used to make judgments, to learn, to grow and expand, or to monitor (to name a few), and they can use that knowledge to describe the current evaluation. As a group, we spend a fair bit of time understanding why they’re evaluating, and why now.

In this youth mental health program example, the notes I took said:

  • They want to explore their flow of service and better understand effectiveness and quality

  • They want to know what’s going well, and what gaps and challenges they have

  • They want to learn and act on those learnings

  • They want to know about access, quality, and impact

  • They want to understand the client trajectory or journey

2.     What questions do you have about this program? What decisions do you make and what informs those decisions?

This section is pretty clearly about evaluation questions, but it also looks at gaps/opportunities and how the client may act on them. Sometimes you’ll find it’s hard for your client to think of questions per se; asking what decisions they make day-to-day, monthly, or quarterly, and then following up with “What evidence or data do you use to inform those decisions?”, will help move things along.

In this example, the notes I took said:

  • What gaps do we have in terms of who accesses the service?

  • Are there gaps by certain populations (e.g., socioeconomic status) or certain regions of the province?

  • Who are we serving? And who are we not serving?

  • What are the barriers to access?

3.     How do you define success for this program?

This question gets at outcomes and program effectiveness, which are usually closely related to evaluation questions. This also starts to give you an idea of how you may measure (and answer) those evaluation questions.

In this example, the notes I took said:

  • Self-assessment scores improve pre to post

  • Evidence of fidelity to the model

  • Families report success

  • Improved mental health

  • Demand for services is high

If a client is particularly interested in outcomes, I’ll ask: What does your program make better? What changes do you expect to see? I didn’t need to do that in this example, though.


From those notes, I drafted only three key evaluation questions. I really liked how one of the staff talked about three domains: Access, Quality, and Impact, so I mirrored that thinking back to them in the development of the questions.

1.     To what extent are we serving the families who need our services? (ACCESS)

Under this key question, I had 9 more specific questions. Here are some highlights:

a.     Who was referred to our program? (Notice the inclusion of a descriptive evaluation question!)

b.     How convenient were services to access?

c.      To what extent do program participants use what they learn in other scenarios?

2.     What is the experience of participating in the program? (QUALITY)

Under this question, I had 5 more specific questions. Here are some highlights:

a.     To what extent do families report a positive experience with the program?

b.     What supports enable program implementation?    

3.     What impact does the program have on participating families? (IMPACT)

Under this, I had 6 more specific questions. Here are some highlights:

a.     What impact does this program have on the mental health of youth?

b.     What impact does this program have on families?

c.      To what extent are outcomes sustained?

In this case, categorizing the evaluation questions added a lot of clarity, and actually helped me to structure the final report as well.


Sometimes, clients will say they’re interested in a logic model, a theory of change, or, as mentioned above, in developing new outcome statements. In those cases, the development of those products will inform and guide your evaluation questions.

Evaluation questions can also be tied to a framework. I’ve used RE-AIM or Proctor’s implementation outcomes framework and built evaluation questions mapped onto specific domains (which is pretty similar to the categorization of evaluation questions in the example shared above).

Sometimes clients have shared enough documentation with me before the kick-off meeting that I can show them some example evaluation questions and gauge their reactions. I really like this approach and use it whenever I can. I find that reacting to something is often easier and more efficient than starting from scratch – you’ll know right away if you’re on- or off-track.


I hope this helps you create your own evaluation questions. Use our evaluation question checklist for some additional considerations.