Public policy and the official evaluation: a symposium
Time: 9.30am - 5.00pm
Venue: Chancellors' Building
Date: Thursday 10th September 2015
Speakers: Elliot Stern (Editor, Evaluation: The International Journal of Theory, Research and Practice), Alison Evans (Chief Commissioner, Independent Commission for Aid Impact), Siobhan Campbell (Head of Policy Evaluation at the Department of Energy and Climate Change), Jonathan Breckon (Head of the Alliance for Useful Evidence).
Public policy making is becoming more difficult. Whether the concern is climate change, international development or social security, policy making has become increasingly multi-layered, interconnected, fluid and controversial. New organisations at global, regional, national and local levels are engaging in public policy debate, decision-making and implementation. Established policy theories, institutions and mechanisms are being challenged. There are pressures for policy makers to be more flexible, transparent and adaptable; and at the same time to base policies on credible evidence of their relevance, impact, effectiveness and sustainability. Broadly defined, there is growing demand for policy evaluation.
We can all agree on the value of basing policy on evidence; the more difficult issue is how to sort, filter and combine the different sources of evidence available. Politics and the media generate information that demands an almost instant response, the danger being that policy becomes a succession of knee-jerk reactions. Academics and professional researchers produce a wealth of relevant evidence, but often fail to deliver what policy makers are seeking in a timely and appropriate form. Lying somewhere between these two extremes, there is the world of official policy evaluation. This can benefit from access to data, authority and an inside track. But these strengths can also result in studies falling awkwardly between the academic criteria of rigour and the need to be fleet of foot.
Given the above, generalising usefully about the future of public policy and evaluation is difficult when so much depends upon the specifics of particular policy issues and contexts. To allow for this the symposium will adopt the following structure. We will start by reviewing the options open to commissioners of official evaluations and the theory underpinning them. Three highly experienced speakers will then review how the role of officially commissioned evaluations is evolving in their own area of policy expertise. Parallel sessions - specific to each policy arena - will then provide an opportunity for all participants to explore what we can infer from each case about the changing role of official evaluations in public policy processes. A final plenary session will then compare and contrast insights gained into public policy generally, the role of official evaluations as a privileged source of evidence and alternative forms of evaluative practice.
Venue: CB Level 1
Welcome - Hugh Lauder, IPR Director
Keynote Session One: Evaluative practice and theory. Elliot Stern.
Evaluation practice and evaluation systems: what we know and what they tell us (abstract below)
Keynote Session Two: International development, aid and evaluation. Alison Evans.
Serving two masters - balancing assurance and evaluation in the pursuit of development effectiveness
Venue: CB Level 3
Keynote Session Three: Climate change, energy policy and evaluation. Siobhan Campbell.
Rising to the challenges of conducting and using evaluation in government
Keynote Session Four: Social policy and evaluation. Jonathan Breckon.
Finding the sweet spot: how to influence social policy and practice with evaluation and research
12.30-1.30: Lunch (CB Level 3)
Social policy: Chair: Emma Carmel. Discussants: Jonathan Breckon, Louise Brown, Paul Gregg, Jane Millar
International development and aid: Chair: Susan Johnson. Discussants: Louise Shaxson, Sarah White, James Copestake
Climate change and energy policy: Chair: Tim Mays. Discussants: Siobhan Campbell, Stephen Gough, Geoff Hammond
Tea and Coffee
Venue: CB Level 3
Concluding plenary: Chair: James Copestake
Closing reflections: Hugh Lauder
Elliot Stern: Elliot Stern is Emeritus Professor of Evaluation Research at the University of Lancaster and honorary fellow at the University of Bristol. He edits the journal Evaluation: The International Journal of Theory, Research and Practice, and is a past President of the European Evaluation Society, the UK Evaluation Society and the International Organisation for Cooperation in Evaluation (IOCE). Elliot is currently interested in evaluation methodology; evaluation capacity building; and the boundaries between evaluation and social science theory. He works as a consultant to the European Commission, the Department of Energy and Climate Change, the Department for International Development, the United Nations Development Programme (UNDP), the OECD and various small community groups and NGOs.
Abstract: Evaluation, which has burgeoned as a practice in governments internationally over the last 20 years, takes different forms in different contexts. Evaluation practice - what evaluators do and how they do it - is embedded in systems and settings that use knowledge and 'evidence' in distinctive ways, shaped by history, politics, institutional forms and policy narratives. By considering practice in systemic terms we can begin to see more clearly how evaluation responds to wider policy concerns such as risk, uncertainty, accountability, transparency and trust.
Alison Evans: Dr Alison Evans – Chief Commissioner, Independent Commission for Aid Impact
Alison is a development economist with 30 years of experience in research, policy and evaluation. After gaining degrees from Sussex and Cambridge Universities, she began her career as an academic before joining the staff of the World Bank in 1994. There she worked in country economics, policy and evaluation and was part of the team that produced the World Development Report 1997 and the Bank’s first Annual Review of Development Effectiveness. Most recently she was Executive Director of ODI, the UK’s leading think tank in international development and humanitarian affairs. Alison specialises in issues of aid and development effectiveness and is a frequent adviser to senior decision makers across the bilateral and multilateral development system.
Abstract: Recent decades have seen increased scrutiny of official development assistance (ODA) and of the policy frameworks that govern its allocation, composition and, ultimately, its impact. This scrutiny takes different forms across the sector, from research-intensive impact evaluations to embedded institutional evaluations and independent audits of specific areas of aid spending.
The UK government established the Independent Commission for Aid Impact (ICAI) in 2010 to provide an additional lens on the scrutiny of UK ODA. Dubbed the 'Aid Watchdog', ICAI is asked to provide assurance to the UK public, via the parliamentary select committee on international development, that UK aid resources are being spent effectively. At the same time, the UK development community is looking to ICAI not only to bolster accountability for aid results but also to feed lessons back to policy makers about what works and what doesn't in UK aid. Balancing the drive for more accountability (independence and assurance) with the drive for increased learning and uptake (real-time evaluation and engagement) is central to making the ICAI model work for the future.
Siobhan Campbell: Siobhan Campbell is chair of the Cross Government Evaluation Group, a cross-departmental and cross-disciplinary group which aims to improve the supply of, stimulate demand for, and encourage the use of, good quality evaluation in policy development and delivery in order to achieve better policy outcomes. In her department – the Department of Energy and Climate Change (DECC) – Siobhan is head of Policy Evaluation and head of Profession for Government Social Research (GSR). On evaluation, her work has been to ensure evaluation is built into DECC’s policy cycle, and to help prioritise, champion and quality assure evaluation work across the department, working both internally and with DECC’s contractor base to raise the expectations and standards of DECC’s evaluation work. Prior to joining DECC in May 2010, Siobhan was deputy head of the Government Social Research Unit based in HM Treasury, leading on joining up and coordinating the work of the different analytical professions and on the development of a series of initiatives to increase the professionalism of GSR. This included initiating revision of the Magenta Book, government’s guide to evaluation. Her career before this includes responsibility for criminal justice research within the Scottish Government, and working in a number of different research roles within the Home Office. She has a Ph.D. in psychology from the University of Glasgow.
Abstract: My talk comes from two perspectives: a cross-government one, as chair of the Cross Government Evaluation Group, and a department-specific one, as Head of Evaluation in the Department of Energy and Climate Change (DECC). It will discuss some of the challenges of conducting and using evaluation in government, and some of the solutions we have been exploring to help us with this work.
The context for evaluation in government is austerity, and the resulting 'tough choices', where ministers and policy officials want evidence to help drive investment (and disinvestment) decisions, maximise efficiency, drive down costs, and achieve more with less. In this context we need to make sure evaluation delivers useful and usable findings that inform these immediate choices, but also support our longer-term decision making. For example, in the DECC context, evaluation should lay the evidence foundations that will help us develop the policies we need to reach our 2050 decarbonisation targets. So the first challenge is maximising impact and focusing on evaluation as a learning activity.
However, there is a tension with the other big driver of evaluation: accountability. The NAO, in its 2013 report, assessed departmental achievement on the basis of impact and economic evaluation alone. Evidence for accountability has a greater need for independence, whereas a learning and impact focus values closeness to policy. The challenge here is balancing these two, potentially conflicting, needs.
A third challenge is methodological. We need to develop methods that better meet the challenges we face. Policies (as opposed to interventions) are rarely simple and are often not (wholly or at all) amenable to experimentation. In DECC's context, our policies are themselves innovative, complex, emergent, overlapping and interacting. We need methods that can cope with this uncertainty and lack of control, while still producing understandable and usable outputs.
To overcome these challenges we have looked at changing both practice and methods. On practice, we have embedded evaluation into our business as usual - it is a fundamental part of the policy-making and delivery approach. Levers across government have also been strengthened: scrutiny from the Major Projects Authority, a focus on 'benefits realisation', and the push from the NAO and HM Treasury. In DECC we have a framework contract, so we can work with a group of evaluators who help scope, steer and challenge our thinking, working with us throughout the life course of an evaluation.
On methods, we are trialling new approaches to evaluation, bringing in new thinking and ideas to our practice. But these are still immature, and we need to work with the academic community to develop these further.
Ultimately, we need to be a more joined-up community of practice, one that shares an ambition to upskill and to strengthen our collective ability to improve the supply, demand and use of evaluation evidence in government decision making.
Jonathan Breckon: Jonathan Breckon is Head of the Alliance for Useful Evidence, an open-access network of 2,000 individuals that champions the use of evidence in social policy and practice. He has 18 years' experience in policy, research and public affairs. He joined the Alliance for Useful Evidence from the Arts and Humanities Research Council (AHRC), where he was Director of Policy and Public Affairs. He has also worked in policy and research at the Royal Geographical Society (with IBG), the British Academy, and Universities UK. He is on the boards of the Society for Evidence-Based Policing and the Centre for Ageing Better 'what works' centre.
Abstract: The UK has seen the launch - or rebranding - of nine What Works centres in social policy over the last two years. Funded by the UK government, the Economic and Social Research Council, the Big Lottery Fund and others, they cover areas such as wellbeing, local economic growth, policing and education. But these are not the only institutional changes. There are other important initiatives to bring evaluation closer to policy makers, such as Project Oracle in London, the UK Alliance for Useful Evidence and the recently completed Local Government Knowledge Navigators. This talk will discuss progress on these initiatives, and how to find a good balance - the sweet spot - between the 'supply' of evaluations by researchers and consultants and the 'demand' side of policy makers, commissioners, charity leaders and frontline staff in public services.