Evaluating for Spread and Scale

These two words go together so nicely, “spread and scale”; they kind of roll off the tongue. I wonder how many of us would struggle to define or, perhaps more importantly, distinguish the two.

Earlier in my career, I was part of a team looking to publish an article about a quality improvement initiative. Throughout the article, I discussed measurement and evaluation for “spread and scale”. One of the journal reviewers challenged me, “Do you actually mean both spread and scale? How are you measuring for each of these?” It was then that I realized I hadn’t put much thought into what this almost-one-word phrase “spreadandscale” actually meant!

I think this is relevant to evaluation because many clients are interested in evaluating for spread OR scale, OR both. It’s important that we understand the differences so we can design evaluations that guide their decision-making and actions.

So, let’s start with definitions.


Spread is to replicate a program in a different location. Think horizontal flow. You may pilot a new program at one site and, upon its success, spread it to your other sites, but the implementation of that program is essentially the same.

Example: One ward in a hospital trials a new process for patient care. After its success, another ward in the hospital implements the new process. The process has spread to another ward; it is being implemented in a new location, and likely with a new population, but the implementation is the same.


Scale is to build the infrastructure for implementation at a new, higher level. Think vertical flow. If you want to implement a new process across an entire system, you would need to embed new policies, build training opportunities, set accountabilities, etc.

Example: A healthcare system wants to adopt a new record system; it needs to determine what hardware and software procurement is required, how to train staff (possibly via a train-the-trainer model), and how to update all policies related to record keeping.


It’s easy to get confused. It’s possible for a program to employ both spread and scale.

Example: A healthcare system pilots a new system in one hospital. After its success, they spread it to all hospitals and develop accompanying policies and protocols to embed it across the entire health system, thereby scaling the pilot.

To further add to our confusion, spread and scale DO have similarities:

  • both are expansions

  • both often follow a pilot or trial

  • both are important for quality improvement efforts

  • both can be the reason for an evaluation!

The key, for me, is to look for that policy change or system-level change that is foundational to scale. Is the team intending to embed the program into a new way of working across all sites, or are they spreading it to a few sites where they think it might also be a value-add?

So, what does all this mean for evaluation?


If you’re evaluating a program that intends to spread:

  • Focus on fidelity. Review the implementation plan and then evaluate what actually happened. Understanding the variance will help this program spread successfully (a minimal sketch of this comparison follows this list). Questions to ask may include:

    • Was the program implemented as intended? (Pro tip: the RE-AIM framework might come in handy here!)

    • What worked well, and what didn’t?

    • How did the context/environment play a role?

  • Identify what changes or adaptations were required for implementation. Understanding these necessary changes builds a sort of prerequisite list that can be used to determine whether implementation at other sites or with other populations is likely to succeed. Implementation science is likely your friend here; it identifies key domains for implementation, including intervention characteristics, communication processes, readiness for change, and planning and execution. Some questions to ask may include:

    • What staff/human resources are required?

    • How did responsibilities/accountabilities change?

    • What are the key barriers or enablers for implementation?

  • Of course, program effectiveness is still important. There’s a good chance an evaluation is being completed to determine if the program should spread. In that case, key outcome evaluation questions are very relevant:

    • To what extent is the program achieving what was intended?
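
To make the fidelity comparison concrete, here’s a minimal sketch in Python (all component names are hypothetical; a real fidelity assessment would draw its components from the program’s actual implementation plan):

    # Hypothetical fidelity check: compare the planned program components
    # against what was actually observed at a spread site.
    PLANNED_COMPONENTS = [
        "intake_screening",
        "weekly_team_huddle",
        "patient_followup_call",
        "data_entry_within_24h",
    ]

    def fidelity_report(observed_components):
        """Print per-component adherence and an overall fidelity score."""
        delivered = 0
        for component in PLANNED_COMPONENTS:
            done = component in observed_components
            print(f"{component:<25} {'as planned' if done else 'VARIANCE'}")
            delivered += done
        print(f"Overall fidelity: {delivered / len(PLANNED_COMPONENTS):.0%}")

    # Example: the new ward dropped one planned component.
    fidelity_report({"intake_screening", "weekly_team_huddle", "data_entry_within_24h"})

The score itself matters less than the variance list: each “VARIANCE” line is a prompt to ask why the component changed and whether that adaptation should travel with the program.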


If you’re evaluating a program that intends to scale:

  • Identify policy and system-level changes. Scaling a program requires that changes be embedded at the system level. These could be fiscal/financial enablers, network/relationship enablers, or environmental enablers. Identifying these changes, or at least the plan for them, will help determine whether scale is likely to be effective (a small checklist sketch follows this list).

  • Identify accountability structures and staff capacity. Scaling a program may require new roles, new training, new management structures, or even entirely new teams to oversee the program. Again, identifying these changes, or at least the plan for them, helps determine whether scale is likely to be effective.

  • Effectiveness: is the program achieving what it intended? As with spread, determining program effectiveness is still a foundation for deciding whether the program should be scaled in the first place.
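
If it helps to structure that assessment, here’s a small sketch in Python (the categories mirror the enablers above; the individual items are made up for illustration) that summarizes which system-level changes are in place, planned, or missing:

    # Hypothetical scale-readiness checklist, grouped by enabler category.
    # Status values: "in place", "planned", or "missing".
    READINESS = {
        "fiscal/financial": {
            "sustained budget line": "in place",
            "hardware/software procurement": "planned",
        },
        "network/relationship": {
            "executive sponsor named": "in place",
            "cross-site working group": "missing",
        },
        "accountability/capacity": {
            "train-the-trainer program": "planned",
            "updated record-keeping policy": "planned",
        },
    }

    def summarize(readiness):
        """Print each category with a count of items already in place."""
        for category, items in readiness.items():
            in_place = sum(1 for status in items.values() if status == "in place")
            print(f"{category}: {in_place}/{len(items)} in place")
            for item, status in items.items():
                print(f"  - {item}: {status}")

    summarize(READINESS)

A “planned” entry isn’t a failure; it just tells you the evaluation should track whether that change actually lands before judging the scale effort.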


In my experience, most program evaluations or pilots are looking for spread; they want to make a small investment to test a new program or process with a smaller group to determine if it should eventually be spread to other sites or departments. An evaluation for spread may involve a formative assessment (how is the pilot implementation going?), an outcome evaluation of the pilot (what was achieved?), and maybe even an evaluation of the spread itself. There are lots of opportunities for an evaluation!


Many validated tools can help you assess readiness for spread, though it’s not a literature base I’m very familiar with. If you have a favourite, share it with me in the comments below!