Five Ways to Make Performance Measurement and Evaluation Count
In the public sector, performance measurement and evaluation (PME) isn’t the end-point of the policy and program development cycle. It’s where we take a clear, critical look back at what we’ve done and plan our way forward. Public sector innovators agree that governments need to get better at accounting for what they do and what they cause – at connecting efforts to outputs to outcomes.
In this article, we will discuss why it matters to know exactly what you're measuring, and we will identify five ways to make PME count.
What are you measuring?
Magical thinking is the belief that we can get the results we want without putting in the work. During policy formulation or performance measurement, magical thinking may lead us to perceive causality where none exists. Poorly chosen measures of program performance can validate magical thinking and produce false positives – seeing success where there is none. Magical thinking wastes money and squanders opportunities.
Bright shiny objects are attractive public policy instruments. Sometimes they catch the eye because a decision-maker finds them appealing. Sometimes they've been sold to a decision-maker as a solution to every problem, and sometimes they just line up with a decision-maker's biases. The problem with bright shiny objects is that their brightness and shine are poor substitutes for evidence and analysis. This can lead to wishful or "magical" thinking and bad decisions.
Measuring outputs is easier than measuring outcomes. Outputs measure the things that a program directly consumes or produces: dollars spent, acres planted, billions served. Outcomes, on the other hand, describe impacts or effects, and they’re much harder to measure because they can’t be easily read from a budget, balance sheet or income statement. A program can efficiently deliver outputs without producing the outcome you desire. Let’s assume that our desired outcome is healthier children. We can upgrade the equipment on a playground and see more children playing on it. Do we know if the children are healthier? If they are healthier, how certain can we be that our investment in playground equipment had anything to do with it? Causality – attaching outcomes to outputs – is complicated.
Your stakeholders and partners may understand the policy environment as well as you do. Be deliberative. Be critical. Listen. Learn from them.
Ask the right questions. Good reporting and good decisions require good data. It can be tempting to generate a number – any number – to fill in the blanks of a report that doesn't feel important to your mission. When you work with poorly chosen indicators, it's easy to lose sight of the importance of the work you do. Resist the temptation to choose indicators because they are easy to measure or to represent graphically. Choose indicators that tell you what you need to know.
There is nothing magical or easy about asking the right questions and using the answers to make better decisions. It’s hard work, and it’s the only way to get the job done.
To make performance measurement and evaluation count, remember:
1. Get it right the first time.
Decision-makers often treat programs as bright shiny objects with miraculous powers. They’re not. A program is a tool, like a hammer or a wrench. It’s designed to do a job. A small change in the job we want a program to do may call for a slightly different hammer. A bigger change in the job we want a tool to do may call for a different tool altogether. Effective PME considers the tool against the requirement, and recognizes that the cost of choosing the wrong tools and the wrong approach can be huge.
2. This will not be the last time.
The next round of PME will almost certainly be very different from the last. You probably won't get it right if you use the same tools and the same approach that worked last time. Priorities change. Technologies change. The economy changes. Everything changes. You must change too.
3. Policy makers work in a fog.
Policy makers don't work in complete darkness, but the best they can hope for is an imperfect view of the future and of the impact of their programs and policies on it. Program decisions are often – and by necessity – made with incomplete data, untested analysis and unclear focus. It is amazing – and a tribute to policy makers – that evidence-based decision-making, problem-solving and policy development can be done at all in this environment of uncertainty.
4. Embrace answers that do not please you.
PME is not a search for evidence that things are being done well. That's the grade-school approach to evaluation. Every flaw PME finds in the design, development, implementation and delivery of a program is an opportunity to make it better.
5. PME is not rocket science.
Like the programs it evaluates, PME is context-sensitive. Understand the norms of PME so you can tailor your framework to your program and its environment. Own it. Nurture it. Your performance measurement and evaluation framework should be whatever works for you and your team.