Why Program Logic Matters for Evaluation: Clarifying What Your Program Is Really Trying to Change

February 2026


About the author: Stephanie Spencer is an Evaluation Associate with Three Hive Consulting and Eval Academy Coordinator. As a Credentialed Evaluator, she brings experience in mixed-methods evaluation across health, social service, and community sectors, with a focus on qualitative methods, reflective practice, and trauma-informed approaches.





In our work at Three Hive Consulting, we support organizations in developing or revisiting their logic models as part of evaluation planning. But what we find clients are really asking for is clarity: clarity about what their program or intervention is trying to achieve, what changes it is reasonably expected to contribute to, and how success should be understood and measured.


We’ve found that many programs can clearly describe what they do. The challenge comes when articulating what those activities are intended to change and why.

In many logic models, we see that activities are listed. Outputs are counted. Outcomes are named. However, naming outcomes does not always mean that there is shared clarity about what those outcomes actually look like in practice, how change is expected to occur, or how success will be recognized.

When we ask questions such as “What does success actually look like?”, “What specific change is reasonably expected to result from these activities?”, and “What would tell us if this didn’t work?”, teams are often able to point to outcome labels, but struggle to describe the nature, depth, or boundaries of the change those outcomes represent.

As a result, logic models can give the appearance of clarity while leaving key aspects of program intent implicit. This lack of specificity makes it harder to design meaningful evaluations, select appropriate indicators, and interpret findings when results are mixed or do not unfold as expected.

This article focuses on the thinking behind program logic, before worrying about how to represent it in a diagram. Instead of starting by fitting activities and outcomes into a one-page logic model, the emphasis is on clarifying what change the program is trying to contribute to, why that change is expected, and where the program’s responsibility reasonably begins and ends.

For a refresher on the difference between logic models and other frameworks, take a look at our article: Differences between theory of change, log frames, results frameworks and logic models – what are they and when to use them.


Logic is about intent, not just sequence

Logic models are often treated as linear checklists: Inputs → Activities → Outputs → Outcomes

While this structure is useful, it can unintentionally shift attention away from the most important question:


What is the program actually trying to change, and for whom?

Defining program logic means clearly articulating:

  • The specific change the program is designed to contribute to

  • The expected link between activities and outcomes, within the program’s scope

  • The key assumptions and boundaries that shape what the program can reasonably influence

Without this clarity, a logic model becomes a description of work rather than a tool for understanding and evaluating program intent. When intent is unclear, teams often default to measuring what is easiest to count rather than what is most meaningful to understand. As a result, when developing a logic model, we suggest that you:


Start with the intended change, not the activities

One of the most common pitfalls in logic model development is starting with what the program does, rather than what it intends to achieve.

Programs evolve over time. Activities shift. Delivery methods adapt. But the purpose of the program, its intended contribution, should remain relatively stable.

A useful test question is:


If this program were successful, what would be different, and for whom?

 Strong outcome statements:

  •  Describe change, not effort

  • Focus on people, systems, or conditions, not services

  • Are specific enough to guide evaluation, but flexible enough to allow adaptation

For example:

  • “Participants attended workshops” describes an activity

  • “Participants’ confidence in navigating services increased” describes an intended change

Starting with the intended change also lays the foundation for meaningful measurement. When outcomes are clearly articulated, it becomes easier to develop appropriate indicators, assess whether change is observable within the evaluation timeframe, and avoid relying on activity counts as stand-ins for impact (more on this below). Defining outcomes this way helps ensure that the logic model reflects why the program exists, not just how it operates.


Make assumptions visible (especially the uncomfortable ones)

Every program rests on assumptions, whether or not they are acknowledged.

Some common assumptions we see in evaluations are:

  • If people receive information, they will use it

  • If services are available, people will access them

  • If staff are trained, practice will change

When assumptions are not made explicit, evaluations can struggle to explain mixed or unexpected results. If outcomes are not achieved, teams may be left without a clear reference point for interpreting what happened, making it harder to assess whether the issue lies in program design, assumptions about change, or contextual influences.

By explicitly naming assumptions, logic models become more than planning tools; they become testable representations of program intent. This supports stronger evaluation by helping teams identify what evidence is needed, where questions should probe more deeply, and how to interpret mixed or unexpected findings.

To read more about assumptions, take a look at our article “The Importance of Articulating Assumptions”.


Define the boundaries of program responsibility

Another common challenge in logic model development is over-claiming outcomes.

Programs often operate within complex systems, alongside many other actors and influences. Clear program logic helps distinguish between:

  • What the program directly controls

  • What the program plausibly contributes to

  • What lies beyond the program’s reasonable scope

Being explicit about contribution, not attribution, protects programs from unrealistic expectations and supports more credible evaluation findings.

A well-defined logic model helps answer:

  •  Where does the program’s influence reasonably end?

  • What changes require other actors, systems, or conditions to align?

  • Which outcomes are aspirational versus evaluable?

Clarifying boundaries early also helps prevent evaluations from being asked to answer questions the program was never designed to influence, strengthening both methodological credibility and relationships with funders and partners.


Use logic to guide evaluation choices

A clearly defined program logic does more than support reporting; it actively shapes evaluation design and makes evaluation more efficient, focused, and useful.

When program logic is explicit, evaluators and program teams can:

  •  Prioritize outcomes that are most central to the program’s intent: Not every outcome in a logic model needs to be evaluated at the same depth. Clear logic helps identify which outcomes are core to the program’s purpose, which are supportive, and which are longer-term or aspirational. This allows the evaluation to focus on learning and accountability.

  • Develop meaningful indicators that reflect the intended change: Well-defined outcomes make it easier to articulate what success would look like in observable terms. Logic models help translate outcomes into indicators by clarifying who is expected to change, what is expected to change, and in what way. This reduces the risk of relying on activity-based or proxy measures that don’t meaningfully reflect program intent.

  • Select data collection methods that align with the type of change expected: Different outcomes require different kinds of evidence. Logic models can help evaluators determine when quantitative measures are appropriate, when qualitative insight is needed to understand experience or context, and when mixed methods are most useful. This alignment strengthens the credibility and usefulness of findings.

  • Identify where data are already being collected, and where gaps exist: Mapping outcomes and indicators against existing data sources (e.g., administrative data, monitoring systems, surveys, or routine reporting) helps teams see what evidence is already available and what additional data may be needed. This supports more realistic evaluation designs and avoids unnecessary data collection.

  • Recognize early signals of progress before long-term outcomes emerge: Logic models help identify interim or short-term outcomes that can serve as early indicators of progress, especially when longer-term change may take years to materialize. These signals are key for learning, course correction, and communicating progress to interest holders and funders.

In this way, logic becomes a practical decision-making tool, guiding what to measure, how to measure it, and where to focus evaluation effort, rather than just a documentation exercise completed at the start of a project.


Use logic models as living “thinking” tools

It is tempting to treat logic models as static deliverables, created once, approved, and archived.

In practice, the most useful logic models are revisited regularly. They evolve as:

  • Programs adapt to changing contexts

  • New insights emerge from implementation

  • Assumptions are tested and refined

Revisiting program logic creates space for learning, not just accountability. It allows teams to ask:

  • Is this still what we are trying to achieve?

  • Are these still the right activities and outcomes to get there?

  • What have we learned that should change our logic?


Focus on clarity before complexity

Strong program logic does not require complex diagrams or extensive technical language. It requires clarity, discipline, and honest reflection.

Before refining visuals or selecting indicators, ask:

  • Do we clearly understand the change we intend to support?

  • Can we explain why our activities should lead to that change?

  • Have we been explicit about our assumptions and limits?

When a program’s logic is well defined, everything that follows, including evaluation questions, data collection, reporting, and learning, becomes more meaningful.


We’d love to hear how you’re using logic models as thinking tools in your work. Connect with us on LinkedIn or sign up for our newsletter for more practical evaluation insights.

