Why successful collaboration needs to start with the right question
27 February 2020
By Subho Banerjee, Research Program Director, ANZSOG
The following piece is an edited version of an article published by the Public Policy and Societal Impact Hub at the Australian National University. The Hub is responsible for maximising the contribution the University makes to societal wellbeing and the development of effective public policy.
Successful collaboration between practitioners and academics can play a critical role in strengthening the evidence base for public policy decision-making. Yet it is still rare to see it done in a manner that is genuinely satisfying for both sides, and that generates real impact in the public policy process.
Public policy practitioners are required to provide decision-making advice on complex, difficult policy problems across a wide range of subject domains, often with limited information and under significant time pressure. Academics have deep specialist expertise, honed over many years of rigorous professional practice and contest, and are often seeking opportunities to apply that expertise in real-world problem-solving. There should be significant mutual benefit in making the match, but profound differences in incentives, language and culture mean that, all too often, collaborating across the boundary remains a fraught process.
These difficulties can be overcome, but doing so requires specific attention to the techniques needed to work successfully across that boundary.
Getting the questions right
For project work, getting the question right is foundational to the success of the collaboration. After all, as Ursula Le Guin said, ‘there are no right answers to wrong questions’. But working out the right question is difficult, and requires a commitment of time and resources to work through the conceptual issues, and to ensure that everyone involved has the chance to come to a common understanding.
When expressing concern about collaboration, practitioners are prone to complain about a lack of practicality in academic project work. But they rarely reflect on how much of this might be driven by how they formulated the question in the first place: How well was the context spelt out? Was it made clear which constraints are genuinely binding in this particular public policy problem? What is the relative prioritisation within the question set – what is essential, and what is optional? If this information is not set out in the framing of the question, it is hardly surprising that the academic then applies different constraints, or chooses a different prioritisation – which may reduce the practical relevance of the work.
How closely was the form of the output specified? Was it made clear what level of detail would be required for implementation? This is always likely to be contested space – a delicate judgement call between scope, budget, and duration of the work. But again, if there is no agreement up-front about the target, it is hardly surprising that the final product may not hit the mark.
And, of course, there are corresponding complaints from academics as well. Most commonly, the complaints are about poor technical specification in the question – language used too loosely (and therefore ambiguous or unclear), or scope poorly defined (and hence unrealistic about what can be delivered in the given timeframe). But occasionally the complaints are that the question is too prescriptive – either precluding a deeper exploration of the underlying drivers of an issue, or being written specifically to preconfigure a particular answer.
Loose question setting can thus explain much of the frustration felt on both sides of the practitioner/academic collaboration. But writing good questions for public policymaking is a matter of considerable tradecraft – to get the right balance on issues of scope, detail, prioritisation and approach, and to do so in language that is comprehensible enough for generalists, but accurate enough for specialists.
And it is tradecraft with a serious epistemological component – you need to think carefully about the knowledge structures that apply in question setting.
My academic training was initially in experimental physics, rather than public policy. In the natural sciences, the external world is conceptualised as an objective fact – it exists in the same manner regardless of how we choose to think about it. But in fact, what we perceive of the external world always depends to some extent on how we look – as noted by Werner Heisenberg, ‘what we observe is not nature itself, but nature exposed to our method of questioning’.
It is thus important to recognise that the act of setting the question does indeed have consequences for how the problem is likely to be approached. The framing and language used in the question implicitly or explicitly presuppose a way of thinking about the problem – so great care must be taken to ensure that it leaves open enough space to think creatively about the answer.
Finding a ‘middle space’ for questions
In experimental physics, there are some questions that, once you have specified them correctly, are considered ‘trivial’ – either because the answer to the question can be derived unequivocally from theory, or because the proposed experiment is actually just a remapped version of an experiment that has been done in a different form.
And then there are some questions that are more or less impossible to answer within current constraints of available resources or technical experimental parameters. These questions might well be very interesting to keep in mind for the future, and may in time be genuinely ground-breaking – but they are not realistic for current project work.
So the constructive space is in the middle: finding a tractable, manageable form of problem definition – something that can be done with available approaches and resources, but that holds the promise of something novel, and not wholly knowable at the start.
The degree of detail really matters in defining this middle space. Another physics aphorism is that ‘everything should be made as simple as possible, but not simpler’ – that is, explanations need to draw out underlying drivers, but in a manner that doesn’t assume away the intricacy of real-world experience.
This poses an acute problem in public policy problem definition – how can practitioners represent the detail of experience in a manner which enriches the investigation, but does not make the analysis impossible?
In public policy, as in physics, the real richness of insight comes from a thinking process that allows an iteration between theory and empirical experience. Wherever possible, potential hypotheses should be tested against experience to determine their validity, and refined as new experience is brought to bear.
And this iteration needs to be played out in miniature in the question setting itself – which requires a commitment of time and resources in proportion to the overall scale of the project. It requires domain knowledge and expertise, combined with a high degree of openness and goodwill.
These are messy, difficult issues, which can be hard to codify, and hence are often best dealt with through a degree of informal conversation. It is therefore unrealistic to think they can be resolved in a single pass – they require working through in a structured, iterative process.
A practical example: ANZSOG’s work for the Thodey Review
As an example, the Australia and New Zealand School of Government (ANZSOG) sought to use a more detailed, intensive question-setting process in producing a series of six papers on key public administration issues for the Independent Review of the Australian Public Service (APS) (also known as the Thodey Review). The papers were written by a selection of senior academics and practitioners, through a process brokered by ANZSOG.
The final papers influenced both the interim and final reports from the Review, and were positively acknowledged in each. They covered a wide range of topics, including public integrity, public governance, service delivery reform, interjurisdictional challenges, commissioning and contracting, and improving the use of evidence and evaluation.
Each was of the order of 20-30 pages, and responded to a specific public policy/administration question, expressed as a problem statement of approximately two pages. The overall collaboration ran over approximately nine months in 2018-19, with individual component papers being commissioned and written in parallel at different stages in the process. Three of the six papers were done in two phases – an initial literature review and state-of-play assessment, followed by a more detailed exploration of policy options (two phases of about six weeks each). The other three papers were done in a single phase, covering the same range of interest, but in a compressed time frame (about eight weeks each).
The secretariat to the Independent Review, on behalf of the Review Panel, generated initial problem statements in each of these areas. But these problem statements were explicitly presented as a ‘first pass’ – setting out what they thought they wanted, while recognising that they were not technical experts in the domains to be covered. ANZSOG then facilitated a structured feedback process between the secretariat and the selected authors, through which the problem definition was refined. The question-setting process took a full week, even though the papers themselves had to be delivered within a tight six to eight weeks of part-time writing time for each phase. And for the papers which went through two phases, the initial literature review stage itself proved invaluable preparation for developing much better targeted questions for the policy options phase.
In each case, the wording of the final question was refined to enhance technical accuracy, to target the work to the Panel’s priorities, and to adjust the scope so that the papers could be delivered to a high standard within the stringent deadline. For example, on public integrity, the secretariat was able to request specifically that the authors start by setting out the broader conceptual and philosophical basis for developing a positive integrity culture in the APS, rather than narrowing too quickly to specific questions of institutional design.
The feedback from this process has been very positive. The authors felt far more directly involved in the commissioning process than usual – indeed, on one memorable occasion, an author later complained about certain phrasing in the question, before remembering that he had put it in himself originally! And the secretariat commenced the process with a much higher degree of confidence that the eventual output would be well targeted to the areas of particular priority for the Review Panel. ANZSOG was able to use the refined questions as the main reference point throughout the process – working with the authors to ensure that these questions were directly answered in the final product, in a practical, experience-informed formulation that would assist the Panel directly in their deliberations.
ANZSOG is seeking to institute a similar process in future commissioning exercises. We are looking to specify a period at the start of commissioned work for working through the question intensively. For smaller pieces, this might be done informally through discussion or email; for larger pieces, it might involve specific workshops, which may also draw in other external expertise. The language, structure and prioritisation of the question need to be worked through in detail, to settle scope and priorities as explicitly as possible. The aim is to arrive at a question that can be answered in an interesting manner, at a useful level of detail, within given resource and time constraints. The issues need to be worked through iteratively and carefully, and the process may often need to continue even once the project is underway – as understanding grows on both sides about how best to frame the problem.
Of course, more attention to question-setting doesn’t guarantee success in and of itself. The work itself still needs to be of high quality, and to actually answer the question being asked. But such a process provides a far more solid platform for successful collaboration, by paying due respect to the epistemological complexity inherent in the question-setting challenge.
Find out more about ANZSOG’s research program on the ANZSOG website.