The Top 5 Cognitive Biases In Evaluation (And What To Do About Them)

October 2025

Evaluators are human. Being human means we’re shaped by our perceptions, beliefs, and experiences, and that we are susceptible to bias. Our brains like to think fast: with low effort, automatically, and intuitively. That speed also makes them prone to errors. The built-in shortcuts our brains use to make sense of the world are called cognitive biases. While these mental habits can help us get through the day, they can also sneak into our evaluation work in ways that matter.

I've long been interested in the unconscious patterns that shape behaviour, which is why I studied psychology, enjoy my favourite podcast, and have written about biases before in Beyond Biases: Insights For Effective Evaluation Reporting.

I know that I struggle to stay aware of my own biases, or even to know where to look for potential bias, so here are five of the most common cognitive biases in evaluation—plus some strategies you can use to keep your work clear and credible.


1. Confirmation Bias

What is it?

Confirmation bias is our brain’s natural tendency to notice and remember information that fits what we already believe—and to brush aside anything that doesn’t. We like things that confirm what we already know and believe. Have you ever read a horoscope and been amazed at how well it knows you? That’s confirmation bias. Next time you read the horoscopes, read one that is not yours and see if you can see yourself in that description too.

How does it show up in evaluation?

If you’ve already got a hunch about a program’s success (or failure), you might pay more attention to evidence that supports your viewpoint—or shape your questions so you get the answers you expect. Sometimes, even the way we code or summarize data can nudge us toward what we “knew” all along.

Confirmation bias can keep evaluators stuck in old patterns, missing out on important lessons or opportunities for growth. Even worse, it can lead to misrepresentation of data and findings to clients or program leads.

What can you do?

  • Use self-reflection to become aware of your assumptions.

  • Actively search for data that challenge your assumptions.

  • Work within diverse teams that bring different perspectives, and, importantly, make sure those perspectives have an opportunity to be heard.

  • Test some of your implicit biases that may feed into confirmation bias.


2. Anchoring Bias

What is it?

Anchoring is when first impressions or early data “set the tone” for everything that follows—even if that starting point isn’t the most reliable.

Let’s say you’re shopping for something online, and the first website you visit says the item is $100. Whether that price is actually high (or low, or average) for the item, it sets the benchmark for all your subsequent searches. Other prices for similar items will, consciously or subconsciously, be compared to that first $100 anchor, regardless of how representative it is.

How does it show up in evaluation?

Think of that first focus group or those early survey results: If they’re especially positive (or negative), it’s easy to hang onto that anchor and let it colour the rest of your analysis. Sometimes, an early conversation with a key project partner can set an expectation you can’t shake.

Anchoring can lead to tunnel vision, making it harder to see the full picture or adjust when new information comes in. In fact, anchoring bias can lead right into confirmation bias!

What can you do?

  • Try to hold off on drawing conclusions until you’ve gathered all your data.

  • Revisit your early impressions regularly as you learn more.

  • Invite colleagues to review your work with fresh eyes.


3. Availability Bias

What is it?

Availability bias means giving extra weight to information that’s easy to recall—usually because it’s recent, dramatic, or emotionally charged.

If you try a new restaurant and the table next to you is loud and obnoxious, you are more likely to form a negative opinion of that restaurant, because the loud neighbouring table was the most salient part of your experience. The negative moment is the thing you remember most about that restaurant.

How does it show up in evaluation?

If you’ve just heard a powerful story in an interview, it might overshadow more common themes in your dataset simply because it is memorable. Or a recent event could start to feel like “the norm,” even when it’s actually the exception. Something may stick in our memory for any number of reasons, and its salience may even stem from another bias (like recency bias)!

Availability bias can lead us to overemphasize memorable anecdotes, which may skew findings and recommendations away from the full story.

What can you do?

  • Triangulate data! Aim to get multiple perspectives from multiple methods about the same topic.

  • Balance stories and statistics, or better yet, use them together.

  • Check in with your team about what stands out—and what’s actually representative.

  • Build in time to talk through possible blind spots together.


4. Overconfidence Bias

What is it?

Overconfidence bias is when we lean a little too hard on our own expertise, sometimes ignoring uncertainty or alternative views.

When was the last time you bought assemble-at-home furniture? Did you read the instructions first? I’m guessing many of you didn’t. Most of us have assembled furniture before, and it’s usually not that hard. But skipping the instructions can lead to mistakes – perhaps you put the left side on the right, or missed a step that forced you to go back and undo your work. Overconfidence got you!

How does it show up in evaluation?

Have you ever caught yourself thinking, “I know this sector—I’ve got this!”? That’s when overconfidence can creep in. It shows up as glossing over limitations in your methods, being quick to dismiss new evidence, or assuming your interpretation must be right.

Overconfidence can mean missing important nuances or failing to signal areas where a recommendation is less certain.

I know it’s probably more common to talk about Imposter Syndrome, where we never feel fully qualified to do the job we do. However, overconfidence bias is also a risk: it can breed complacency and limit our consideration of the most effective solutions.

What can you do?

  • Try to look at your proposals or work from your client’s perspective. I ask myself, “What isn’t clear or obvious, and how can I explain this better or differently?” By forcing myself to take a new perspective, I’ll often uncover things I hadn’t thought of before.

  • Be upfront about the limits of your findings—evaluation users appreciate honesty.

  • Ask for peer reviews or outside feedback whenever you can.


5. Groupthink

What is it?

Groupthink happens when teams value harmony or consensus so much that they stop voicing concerns or alternative ideas.

You and a group of friends are going out to eat. Someone suggests a new restaurant, and everyone agrees. After the meal, you learn that the restaurant didn’t accommodate everyone’s dietary needs or preferences, but no one spoke up during the selection because they didn’t want to break the consensus. That’s groupthink.

How does it show up in evaluation?

If your team is nodding along and no one’s raising questions, it could be a sign that groupthink is at play. This is especially common under tight timelines or when there’s a strong leader in the room.

Groupthink can stifle creativity and sweep important risks or innovative solutions under the rug. Worse, it can waste time and budget as you push ahead with a plan whose red flags were never raised.

What can you do?

  • Invite all voices to the table and make space for disagreement.

  • Build anonymous feedback opportunities into your process.

  • Bring in an external reviewer to challenge assumptions.

  • Be aware of and mitigate power dynamics in group meetings.

  • Lead by example. Sometimes I’ll point out flaws in my own planning: “I wrote this evaluation question, but it’s not sitting right with me; I’m not loving that I used [this word]. Can you help me figure out how to improve it?” Leading by example can give permission to decision-makers to also ask for help or actively seek feedback that may not have been volunteered.


Cognitive biases are just part of being human. But when we keep these biases in check, our evaluation work gets stronger, more transparent, and more useful for everyone involved. By building in habits of reflection, diverse perspectives, and honest critique, we can deliver evaluations that inform, inspire, and help our organizations grow. Exploring our biases is another way to flex our evaluative thinking muscles.

