Measurement and Evaluation in Public Diplomacy

    At the ISA Public Diplomacy Working Group meeting in Montreal, one of the topics of discussion was the state of evaluation and measurement of PD efforts.  Neither the academic community nor PD organizations are very happy with the state of evaluation.  This is obviously an important issue, because the ability to demonstrate impact is important for justifying PD budgets.

    The discussion took in evaluation at programme level (for instance, surveys of exchange participants) and national-level measures such as Simon Anholt’s Nation Brands Index.  The point was made that commercial indexes such as the Nation Brands Index are widely followed in Foreign Ministries – for instance, the London Olympics Public Diplomacy Plan makes explicit mention of the index.  At the programme level, MFAs are increasingly including provision in the budgets of PD activities for evaluation of their impact.

    In listening to the discussion I got the feeling that there is quite a big gap between these two levels of evaluation – a gap that reflects problems with the way that we understand the impact of public diplomacy activities.  At the macro level, changes in national reputation may be unconnected with PD activities – for instance, if your country attracts attention because of a disaster, a sporting event, or a policy change.  At the micro level, it is relatively easy to survey or interview participants in a programme, but it is much harder to find the connection between the programme activities and the overall national perception.  Part of the difficulty is that the impact of a PD programme may actually be much more complex and reach far beyond participants who can be easily identified and monitored.  To take the example of the US assistance to Brazilian public education that I blogged about a few weeks ago, there are at least four sets of impacts:

  1. Effects on people who directly participate in the programme
  2. The impact of programme participants on their contacts
  3. Impact from media coverage of the programme
  4. Impact on Brazilian governmental actors of the willingness to undertake the programme.

    My thought is that part of the solution to questions of evaluation is actually conceptual – that is, the need to think through what PD programmes are trying to do – rather than a problem of research method to be solved with standardized measurement techniques.

    Before the PD community rushes off to invent new measurement strategies, it’s worth looking at how other people handle these challenges.  I’ve recently become aware of the volume of work done on these topics in the International Development area.  A big part of current development work is influencing policy, and it faces similar problems to PD activity.  Because so much of development work is done on a project basis with other people’s money, development organizations are under pressure to demonstrate that they are getting good value.  Also, there is a lot more money in development work than in PD, so they’ve had the resources to think quite extensively about these issues (which is not to say that they’ve actually solved them).

    The basic approach is to design evaluation activities alongside the development of the programme.  A programme should lay out a ‘theory of change’, that is, the assumptions about how the intervention will have its effect.  This theory is often expressed in the form of a ‘logic model’, a flow diagram that lays out inputs > activities > outcomes > impacts.  Explicitly laying out the theory of change does two things.  Firstly, it allows the planners to make their assumptions explicit and to evaluate their plausibility.  Secondly, the process allows the identification of possible measurement and evaluation points – particularly at interim stages of the intervention.  Laying out the theory is important because it reduces the temptation to use measures that might be easy to collect but not directly helpful in measuring progress towards the objective.
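    The inputs > activities > outcomes > impacts chain can be made concrete with a small sketch.  The snippet below is only an illustration of the idea, not any standard development-sector tool; the stage descriptions and indicators are hypothetical examples invented for an exchange programme.

```python
# A minimal sketch of a 'logic model': a chain of stages, each carrying
# the indicators that could serve as measurement points at that stage.
# All names and indicators below are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str                       # e.g. "inputs", "activities"
    description: str                # what happens at this stage
    indicators: list = field(default_factory=list)  # possible measures

# Hypothetical logic model for an exchange programme
logic_model = [
    Stage("inputs", "funding and staff for the exchange",
          ["budget spent", "staff hours"]),
    Stage("activities", "participants complete the exchange",
          ["number of participants", "completion rate"]),
    Stage("outcomes", "participants' attitudes and networks change",
          ["pre/post attitude surveys", "contacts kept after one year"]),
    Stage("impacts", "wider perceptions of the country shift",
          ["national reputation indices", "media tone analysis"]),
]

# Each arrow between stages is an explicit assumption that can be
# checked against the earlier stage's indicators.
for earlier, later in zip(logic_model, logic_model[1:]):
    print(f"{earlier.name} -> {later.name}: check {earlier.indicators}")
```

    Writing the chain out this way makes the point of the post in miniature: the interim stages, not just the final impact, are where evaluation can actually get a grip.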

    Over the next few weeks I’m going to spend some time looking at some of this literature because I think that it casts light both on the question of measurement and evaluation and of the range of influence mechanisms that can be deployed.

    Here’s a very useful introduction to work in the area that includes a list of useful resources.

    Jones, H. (2011) A Guide to Monitoring and Evaluating Policy Influence. London: Overseas Development Institute. Available here

2 thoughts on “Measurement and Evaluation in Public Diplomacy”

  1. Hello.

    Perhaps the following, picked up in one of my favorite graduate school classes (so compliments to Prof. Jennifer Brinkerhoff), will be of help.

    The logical framework or logframe is a useful evaluation process, but scholars and practitioners have debated it for decades because it requires participation and commitment of all key stakeholders from the project/program planning stage, throughout implementation, and of course at the end stage. The logframe process is iterative, and not linear, and its strength is in its feedback processes. To be effective, it needs to not just be initiated by planners and policy makers and then imposed upon implementors/practitioners. All need to be involved from the inception of a project. International development and PD projects often/usually involve stakeholders on different continents, and that is what a large part of the controversy over the logframe is around. A major challenge is getting all the stakeholders to stay committed to and engaged in the iterative, formative nature of this evaluation process. It’s different than standard summative evaluation processes.

    The logframe is really a comprehensive planning, design, and evaluation approach by which donors and beneficiaries can tease out goals, implementation, and assumptions behind a project’s theory of change. One of the logframe’s original purposes is to boost participation of development beneficiaries, but it usually ends up not involving beneficiaries because donors have already decided what a project’s objectives are.

    Some literature on the logframe and the debate around it: Gasper, Des. “Evaluating the ‘Logical Framework Approach’ Towards Learning-Oriented Development Evaluation.” Public Administration and Development, Vol. 20, No. 1 (2000): 17-28; Bell, Simon. “Logical Frameworks, Aristotle and Soft Systems: A Note on the Origins, Values and Uses of Logical Frameworks, in Reply to Gasper.” Public Administration and Development, Vol. 20, No. 1 (2000): 29-31; Smith, Peter. “A Comment on the Limitations of the Logical Framework Method, in Reply to Gasper, and to Bell.” Public Administration and Development, Vol. 20, No. 3 (2000): 439-441.

    I think it would be a great idea to use the logframe process in PD, if done right. That’s often the problem with evaluation: the old garbage in, garbage out dilemma.

    Cheers,
    Debbie Trent

  2. Thanks Debbie – I’ll follow up on the references.  Even in the little bit of reading I’ve done in this area, I’m picking up on the frustrations with the logframe model.
