Does it do no harm? How do we know?

In an ideal world, this post would just be a sub-category of the previous one: Is it based on evidence? But, of course, the world is far from ideal.

Common sense says that basing our plans on evidence ought to always include periodically reviewing and assessing what worked well, what could be improved on, and what went wrong.  Feeding that information back into a management system allows for a process of continuous improvement, or, at least, continually decreasing harm.

In the private sector, the concepts of monitoring and evaluation are almost self-explanatory. Manufacturers build stuff and sell it. So they need to keep track of what they buy, what they build and what they sell. And, more importantly, how much each element costs and how much money is left over at the end.

If a company is spending money on an advertising promotion, then sales will be carefully tracked before, during and after the advertising period. The figures will be analysed carefully to be sure that any increase in sales was indeed a result of the advertising and not just due to the holiday season, or shops selling off old stock cheaply.
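
To make the idea concrete, here is a toy version of that sort of before/during/after comparison. All of the figures are invented, and a real analysis would be far more careful, but the underlying logic is the same:

```python
# Hypothetical weekly sales figures (units sold) around an advertising campaign.
before = [120, 115, 130, 125]   # four weeks before the campaign
during = [150, 160, 155, 165]   # four weeks of advertising
after  = [140, 138, 145, 142]   # four weeks afterwards

def average(xs):
    return sum(xs) / len(xs)

uplift_during = average(during) - average(before)
uplift_after = average(after) - average(before)

print(f"Average weekly sales before:  {average(before):.1f}")
print(f"Uplift during the campaign:   {uplift_during:+.1f}")
print(f"Uplift persisting afterwards: {uplift_after:+.1f}")

# A real analysis would also compare against shops that did not run the
# promotion, to rule out seasonal effects or stock clearances.
```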

In the public and non-governmental sectors, unsurprisingly, this process is a bit more complicated, and not always done well. However, it is a critical component of doing no harm. Without stopping to check, review and analyse, how will we even know whether we are doing harm? This process is often known as monitoring and evaluation.

Monitoring can be thought of as a management task: keeping track of who does what and when. How many participants took part in the workshop? (Broken down by gender, of course.) How many people received one-to-one counseling and an HIV/AIDS fact sheet? How many clean injecting kits were given out at the needle exchange?
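
As a toy illustration (the records below are invented), much routine monitoring boils down to simple tallies like these:

```python
from collections import Counter

# Invented monitoring records: one entry per service contact.
records = [
    {"service": "workshop", "gender": "female"},
    {"service": "workshop", "gender": "male"},
    {"service": "counseling", "gender": "female"},
    {"service": "needle_exchange", "gender": "male"},
    {"service": "needle_exchange", "gender": "female"},
]

# Count contacts per service, broken down by gender.
tallies = Counter((r["service"], r["gender"]) for r in records)

for (service, gender), count in sorted(tallies.items()):
    print(f"{service:16s} {gender:7s} {count}")
```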

Evaluation, on the other hand, is typically only done periodically. Perhaps every year, or once every several years. During an evaluation the data gathered by routine monitoring is analysed along with additional information, including data gathered or commissioned by the evaluators themselves.

The evaluation exercise is about trying to assess whether “what we did” came close to achieving our intended goals. Was there a reduction in needle-transmitted HIV infections over this period? And if so, was it due to the needle exchange efforts, or was it due to some unrelated factor like people changing to smoking rather than injecting?

Key aspects of an effective monitoring and evaluation strategy include defining clearly at the outset what the program or policy hopes to achieve and how the results will be measured. (What will be the indicators of success?) Another important element involves doing some research or assessment before the project begins, gathering what is called “baseline” data. Repeating this periodically during the project or after it ends should, hopefully, demonstrate a significant improvement in the key issues or indicators.
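
A minimal sketch of that baseline-versus-follow-up comparison might look like this, with invented indicator names and values:

```python
# Invented indicator values: baseline survey vs. a follow-up two years later.
baseline  = {"new_hiv_infections_per_1000": 4.2, "needle_sharing_rate": 0.35}
follow_up = {"new_hiv_infections_per_1000": 2.9, "needle_sharing_rate": 0.21}

for indicator, base_value in baseline.items():
    later = follow_up[indicator]
    change = (later - base_value) / base_value * 100
    print(f"{indicator}: {base_value} -> {later} ({change:+.1f}%)")
```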

Monitoring and evaluation is sometimes done badly. This could involve bolting a tokenistic evaluation exercise onto the end of a program almost like an afterthought, with a lack of clear data to work with, or a restricted mandate to investigate. In the worst cases, it is done internally, or by a sympathetic friend who can be trusted not to be too critical.

There are many tools and frameworks that can be used to evaluate policies and programs. One of these, the impact assessment (IA), is increasingly significant among European governments.

Impact assessments attempt to go beyond the simple indicators of a project and look at the long-term direct and indirect impacts of policies and programs. Importantly, they consider positive and negative outcomes, intended and unintended alike.

The IA framework has been adapted to various specific contexts including social impact assessments, health and environmental impact assessments, gender impact assessments and human rights impact assessments. Impact assessments are also of interest to some in the business sector.

Of course, no evaluation methodology is perfect. And as with evidence-based policy making, fallible humans are involved at each stage of the process. Critics who argue that “evaluation is political because we set the agenda with the measures we use” make a valid point. But this is not an excuse to ignore the process; rather, we should try our best to overcome our limitations.

It is logical, therefore, that we should evaluate, as well as implement, to the best of our abilities. Here’s a quick checklist; it argues that the best evaluations:

  • are done by a fully independent body (self-evaluation is better than no evaluation, but not much)
  • are themselves adequately funded (good research costs money)
  • are made public for the purposes of transparency (especially in relation to use of public money)
  • use an appropriate methodological framework (which is also published for transparency)
  • include assessments based on human rights
  • include inputs from key populations affected directly and indirectly by the policy/program
  • include assessment of unintended as well as intended outcomes: both good and bad
  • make some assessment of efficiency / value for money
  • include assessment of the long term / sustainability (many projects are great for a few years, then funding finishes and impacts disappear)
  • where possible, include assessment in relation to control groups or areas without interventions (see the sketch after this list)
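
That last point can be sketched as a simple difference-in-differences calculation: compare the change where the program ran with the change in a similar area without it. The figures below are invented purely for illustration.

```python
# Invented infection rates per 1,000 people, before and after the program period.
intervention_area = {"before": 4.0, "after": 2.5}   # area with the needle exchange
control_area      = {"before": 4.1, "after": 3.8}   # comparable area without it

change_intervention = intervention_area["after"] - intervention_area["before"]
change_control      = control_area["after"] - control_area["before"]

# Difference-in-differences: how much of the change goes beyond the background trend?
did_estimate = change_intervention - change_control

print(f"Change where the program ran:   {change_intervention:+.2f}")
print(f"Change in the comparison area:  {change_control:+.2f}")
print(f"Estimated program effect (DiD): {did_estimate:+.2f}")
```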

So does it work? Does it do no harm? And how do we know? Our knowledge is never perfect, but there are no excuses these days for not having a pretty good idea.

___

Further reading:

The UK government’s Magenta Book contains technical guidance for people who commission and implement evaluations of government policy. It’s also a handy reference, or a crash course, in many research methods. Other books also exist.
