By Professor Paul Cairney
Paul Cairney is Professor of Politics and Public Policy in the Division of History, Heritage, and Politics at the University of Stirling in Scotland. He is a specialist in British politics and public policy, often focusing on the ways in which policy studies can explain the use of evidence in politics and policy. This article first appeared on his Politics & Public Policy blog.
The first thing we learn when we study public policy is that no-one is quite sure how to define it. Instead, introductory texts focus on our inability to provide something definitive. That is OK if we want to pretend to be relaxed about life’s complexities, but not if we want to measure policy change in a reasonably precise way. How can we measure change in something if we don’t know what it is?
A partial solution is to identify and measure types of public policy. For example, we might treat policy as the collection of a large number of policy instruments or decisions, including:
Public expenditure. This includes deciding how to tax, how much money to raise, how much to spend on which policy areas (crime, health, education), and the balance between current spending (e.g. the wages of doctors) and capital spending (e.g. building a new hospital).
Economic penalties, such as taxation on the sale of certain products, or charges to use services.
Economic incentives, such as subsidies to farmers or tax expenditure on certain spending (giving to charity, buying services such as health insurance).
Linking government-controlled benefits to behaviour (e.g. seeking work to qualify for unemployment benefits) or a means test.
The use of formal regulations or legislation to control behaviour.
Voluntary regulations, such as agreements between governments and other actors such as unions and business.
Linking the provision of public services to behaviour (e.g. restricting the ability of smokers to foster children).
Legal penalties, such as when the courts approve restrictions on, or economic sanctions against, organisations.
Public education and advertising to highlight the risks of certain behaviours.
Providing services and resources to help change behaviour.
Providing resources to tackle illegal behaviour.
Funding organisations to influence public, media and government attitudes.
Funding scientific research or advisory committee work.
Organisational change, such as the establishment of a new unit within a government department or a reform of local government structures.
Providing services directly or via non-governmental organisations.
Providing a single service or setting up quasi-markets.
I say ‘partial solution’ because this approach throws up a major practical problem: we do not have the ability to track and characterise all of these instruments in a satisfactory or holistic way. Rather, we have to make choices about what information to use (and, by extension, what to ignore) to build up a partial, biased picture of what is going on. Here are some of the practical problems we face:
Depth versus breadth
Should I focus on one policy instrument or all of them (or some combination)? Should I focus on a single key event or a picture of change over decades? Should I focus on the outputs of one policymaking organisation or them all, or try to track the outcomes of the system as a whole? In each case there is a major trade-off: if we ‘zoom in’ we might miss broad or long-term trends; if we ‘zoom out’ we might miss important details.
Our empirical and normative expectations
When we identify policy change we link it, explicitly or implicitly, to a yardstick based on how much we expect policy to change (given, for example, the abilities of people to initiate or block change) and how much we think policy should change under the circumstances, given the size of the problem or the level of public attention. Our normative expectations are difficult to separate from the empirical. Think of cases such as air pollution, environmental policy, tobacco, alcohol and drugs control, violent crime, poverty, and inequality. In each case, we have expectations about what should happen based on how important we believe the problem to be, and we may judge change to be minor or moderate against that perception, rather than against (say) change in other policy areas.
Policy change looks very different from the ‘top’ or the ‘bottom’. For example, a focus on policy choices by central governments may exaggerate change compared to long-term outcomes at the ‘street level’. Indeed, it is tempting to focus on rapid, exciting changes at the top, without thinking through their long-term consequences. Or we may find reformers at the top frustrated with a lack of progress, while local actors are frustrated with the effects of the rapid pace of change on their organisations.
In each case, we have to think about why a policy decision was made: what problem was it designed to solve? For example, a tax or economic sanction can be used to influence behaviour or simply to raise revenue (think, for example, of ‘sin’ taxes). Policymakers can introduce measures to satisfy a particular interest or constituency, to ensure a boost to their popularity, or to fulfil a long-term commitment based on fundamental beliefs. The distinction is crucial if the long-term political weight behind a measure determines its success.
When we consider the use of economic measures, we need an appropriate context in which to judge their significance. We may describe spending on an issue as a proportion of GDP, a proportion of the government budget, a proportion of the policy area’s budget, and in terms of change from last year or over many years. In some cases, the amount of money spent or raised by government could be compared with that spent by industry, such as when a health education budget is dwarfed by tobacco or alcohol advertising, or when a huge company receives a small fine for environmental or competition law breaches.
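The point about baselines can be sketched with a trivial calculation. The figures below are entirely hypothetical, chosen only to show how the choice of denominator changes the impression a single spending figure gives:

```python
# All figures are hypothetical, for illustration only (not real budget data).

def as_share(amount: float, baseline: float) -> float:
    """Express an amount as a percentage of a chosen baseline."""
    return 100 * amount / baseline

health_education_budget = 50e6   # hypothetical: a £50m campaign
baselines = {
    "GDP": 2_000e9,                    # hypothetical £2tn economy
    "government budget": 800e9,        # hypothetical total public spending
    "health budget": 150e9,            # hypothetical departmental budget
    "industry advertising": 500e6,     # hypothetical tobacco/alcohol ad spend
}

# The same £50m looks negligible against GDP but sizeable against
# industry advertising spend.
for name, baseline in baselines.items():
    print(f"As a share of {name}: {as_share(health_education_budget, baseline):.4f}%")
```

The calculation itself is trivial; the substantive choice, which the code leaves to the analyst, is which denominator gives a fair picture of significance.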
Policy as a whole may seem inconsistent, either within a single field (e.g. some governments control tobacco use but also subsidise leaf growing and encourage trade) or across government (e.g. school expulsion policies may exacerbate youth crime).
Given these problems of inevitable bias, I suggest that we consider the extent to which our findings can be interpreted in different, and equally plausible, ways.