What Funders Miss When They Measure Completion Rates Instead of Confidence

The Metric That Lies

Every year, councils, combined authorities and other organisations fund training programmes and measure success using completion rates. 87% completion. 92% completion. 100% completion. These numbers look good in reports. They're also nearly meaningless.

A completion rate tells you that people showed up and finished. It doesn't tell you anything about whether their lives changed. It's possible to have a 95% completion rate and zero employment outcomes. It's possible to train 100 people and have none of them use what they learned.

But completion rates are easy to measure, they look impressive, and they don't require follow-up. So funders keep funding based on them.

This is costing councils real outcomes. Because training providers optimise toward what's measured. If completion rates are all that matter, you get programmes designed for easy completion, not for difficult, necessary change.

Why Completion Rates Are the Wrong Metric

Here's what completion rates actually measure: the trainer's ability to keep people in a room. That's it. It measures retention, not impact. And optimising for retention produces perverse outcomes.

A training provider can achieve 99% completion by making the course easy, letting people coast through with minimal effort, and requiring minimal actual learning. Participants leave with certificates but without confidence. No one gets employed. But completion rates are sky-high.

Alternatively, a trainer can push people through difficulty, which builds real confidence and changes outcomes, but some people drop out because it's genuinely hard. The completion rate drops to 85%. But of the people who finish, 60% are employed within 6 months. Which is better?

Any funder focused on completion rates would choose the first option. Any funder focused on real outcomes would choose the second.
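
To make the trade-off concrete, here's a minimal arithmetic sketch of the two hypothetical programmes above. The 5% employment rate for the easy programme is an assumption (the text only says outcomes are poor); the hard programme's figures come straight from the example.

# Hypothetical: employed people per 100 starters for each programme.
def employed_per_100(completion_rate, employment_rate):
    # completers per 100 starters, times the share of completers employed
    return 100 * completion_rate * employment_rate

easy = employed_per_100(0.99, 0.05)  # 99% completion, assumed 5% employed
hard = employed_per_100(0.85, 0.60)  # 85% completion, 60% employed at 6 months

print(f"Easy programme: {easy:.0f} employed per 100 starters")  # ~5
print(f"Hard programme: {hard:.0f} employed per 100 starters")  # 51

Even with a completion rate 14 points lower, the harder programme produces roughly ten times as many employment outcomes per 100 starters.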

This is why completion rates exist as a metric: they make life easy for training providers and councils. They don't make life better for the people you're trying to help.

What Actually Matters: Confidence and Capability

If you want to know whether training worked, measure confidence.

Not self-reported confidence (which can be inflated). Real indicators of confidence:

  • Do they apply for jobs they wouldn't have applied for before? (For unemployed participants)

  • Are they actively using the skills they learned? (For self-employed participants)

  • Do they problem-solve when they hit obstacles, or do they assume they can't?

  • Are they willing to try new things related to their skills?

These are harder to measure than completion rates. They require follow-up. They require honest reporting. But they actually tell you whether training changed someone's belief in themselves.

The 6-Month Outcome Measure

This is why we track employment and business revenue at 6 months. Not because it's convenient. Because it answers the actual question: did this training change this person's life?

For unemployed participants:

  • Are they employed?

  • How long did it take to find employment?

  • Are they in jobs related to what they learned, or did the confidence from completing training help them succeed in any job?

For self-employed participants:

  • Has revenue increased?

  • Are they acquiring customers through the channels they learned about?

  • Have they hired additional people or expanded their business?

These metrics require honesty. Some people won't be employed at six months. Some businesses won't have grown. But you get a real picture of impact, not a manufactured story.
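
As an illustration of what honest 6-month tracking involves, here is a minimal sketch of a follow-up record in Python. Every field name is an assumption for illustration, not a prescribed schema; the point is that each cohort needs its own outcome fields and that flat or negative results must be recordable.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SixMonthFollowUp:
    # Illustrative only: one possible shape for a follow-up record.
    participant_id: str
    completed_course: bool
    employed: Optional[bool] = None              # unemployed cohort
    weeks_to_employment: Optional[int] = None
    role_uses_skills: Optional[bool] = None
    revenue_change_pct: Optional[float] = None   # self-employed cohort
    hired_staff: Optional[bool] = None

# Honest reporting means recording no-change outcomes too:
record = SixMonthFollowUp("p-014", completed_course=True,
                          revenue_change_pct=0.0)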

Why Councils Should Demand Better Metrics

You're investing taxpayer money. You have a responsibility to know whether that investment is working.

If a training provider can't or won't track outcomes at 6 months, there's a reason. Usually it's because the outcomes aren't strong enough to justify the investment.

Demand better:

  • Refuse to fund based on completion rates alone. A completion rate is not an outcome.

  • Require employment/business revenue tracking at 6 months. That's an outcome.

  • Ask how confidence is measured or assessed. If they don't have a method, they're not intentionally building it.

  • Demand honesty about who succeeded and who didn't. Real impact includes real numbers, not inflated claims.

The training providers worth funding are the ones who can honestly tell you: "Of 20 people who started, 18 completed. Of those 18, 14 were employed within 6 months in roles where they're using digital skills. Three were employed in unrelated roles but credit the confidence from training. One is still looking. Revenue for self-employed participants increased by an average of X%."

That's a real story. That justifies investment.

What It Means for Confidence-Building Training

Here's a critical insight: confidence-building training often has lower completion rates than content-delivery training.

Why? Because we actually push people through difficulty. We don't let people coast. We require them to struggle, problem-solve, and build real capability. Some people find this too hard and drop out.

This is why completion rates are a terrible metric for evaluating training. They reward easy programmes and penalise challenging ones. But the challenging ones are the ones that actually work.

If you fund based on completion rates, you'll get easy training with poor outcomes. If you fund based on real outcomes, you'll get training that's appropriately challenging and actually changes lives.
