At the ISA Public Diplomacy Working Group meeting in Montreal, one of the topics of discussion was the state of evaluation and measurement of PD efforts. Neither the academic community nor PD organizations are very happy with the state of evaluation. This matters because the ability to demonstrate impact is central to justifying PD budgets.
The discussion took in evaluation at the programme level (for instance, surveys of exchange participants) and national-level measures such as Simon Anholt's Nation Brands Index. The point was made that commercial indexes such as the Nation Brands Index are widely followed in foreign ministries – the London Olympics Public Diplomacy Plan, for instance, makes explicit mention of the index. At the programme level, MFAs are increasingly making provision in PD budgets for evaluation of the impact of the activity.

Listening to the discussion, I got the feeling that there is quite a big gap between these two levels of evaluation – a gap that reflects problems with the way we understand the impact of public diplomacy activities. At the macro level, changes in national reputation may be unconnected with PD activities – a country may attract attention because of a disaster, a sporting event, or a policy change. At the micro level, it is relatively easy to survey or interview participants in a programme. But it is much harder to trace the connection between programme activities and overall national perception. Part of the difficulty is that the impact of a PD programme may be much more complex and reach far beyond the participants who can be easily identified and monitored. To take the example of US assistance to Brazilian public education that I blogged about a few weeks ago, there are at least four sets of impacts:

- Effects on people who directly participate in the programme
- The impact of programme participants on their contacts
- Impact from media coverage of the programme
- The impact on Brazilian governmental actors of the willingness to undertake the programme
My thought is that part of the solution to the evaluation question is conceptual rather than methodological: we need to think through what PD programmes are actually trying to do, rather than rely on standardized measurement techniques or treat evaluation purely as a problem of research method.
Before the PD community rushes off to invent new measurement strategies, it's worth looking at how others handle these challenges. I've recently become aware of the volume of work done on these topics in the International Development field. A big part of current development work involves influencing policy, and it faces problems similar to those of PD activity. Because so much development work is done on a project basis with other people's money, development organizations are under pressure to demonstrate that they are delivering good value. There is also a lot more money in development work than in PD, so they have had the resources to think quite extensively about these issues (which is not to say that they have actually solved them).
The basic approach is to design evaluation activities alongside the development of the programme. A programme should lay out a 'theory of change': the assumptions about how the intervention will have its effect. This theory is often expressed in the form of a 'logic model', a flow diagram that runs inputs > activities > outcomes > impacts. Explicitly laying out the theory of change does two things. First, it allows planners to make their assumptions explicit and to evaluate their plausibility. Second, it allows the identification of possible measurement and evaluation points, particularly at interim stages of the intervention. Laying out the theory is important because it reduces the temptation to use measures that are easy to collect but not directly helpful in gauging progress towards the objective.
Over the next few weeks I'm going to spend some time looking at this literature, because I think it casts light both on the question of measurement and evaluation and on the range of influence mechanisms that can be deployed.
Here’s a very useful introduction to work in the area that includes a list of useful resources.
Jones, H. (2011) A Guide to Monitoring and Evaluating Policy Influence. London: Overseas Development Institute. Available here