
Implementing AI in the public sector

19 July 2023

Research


Artificial Intelligence (AI) has emerged as one of the most significant technological innovations in the public sector, bringing risks as well as opportunities. Governments are investing in AI to explore the possibility of replacing human decision-making with automated decision-making, or at least supporting decision preparation. The recent ‘Robodebt’ Royal Commission has shown the dangers of taking this approach too far, too quickly. An article in Public Management Review examines the implementation approaches and challenges facing public managers when using AI.

Defining AI

The article cites the European Commission’s definition of AI as ‘systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals’. It identifies two dimensions to the technology:

  • the thought process and reasoning dimension
  • the behavioural dimension – human behaviours that are then carried out by machines.

AI models cognitive thinking to predict human behaviour through tools such as natural language processing, automated reasoning, machine learning, computer vision or robotics. Different approaches to AI include:

  • machines that think like humans
  • machines that act like humans
  • machines that think rationally
  • machines that act rationally.

Benefits and challenges for the public sector

By automating services, AI has the potential to free up human capacity that can then be applied to complex decision-making processes. Applications of AI can also proactively suggest service delivery to citizens selected by an algorithm. A key issue is how AI will augment the skills and capabilities of humans, and how humans and machines will work alongside one another.

Implementation of AI in public organisations is not without its challenges. These include data privacy and security, the cost of implementation and ethical risk. AI may pose risks to individuals, organisations and society by altering core public values such as transparency, fairness and equity. AI can result in undesirable and ethically problematic consequences such as biases and discrimination in public service delivery.

Ethical challenges of AI

The ethical challenges of AI fall into three groups. The first is about how AI systems use data to produce a particular outcome or conclusion. Incomplete data may lead to wrong conclusions or misleading results. Data may not only be incomplete but also biased, therefore resulting in unfair decisions. AI uses machine learning to continuously improve decision-making models based on the available data. These models are complex and can yield counterintuitive results. This often results in black boxes that reduce public servants’ ability to explain and justify decisions.

The second group of ethical challenges includes concerns about the ethical impact of AI-driven actions and decisions, such as unfair outcomes and unintended consequences. The final group of challenges concerns accountability and responsibility. AI may create situations where a specific individual or team cannot be held directly responsible for the decisions made.

The degree of automation may differ across public services. It can range from decision support systems, which allow public servants to adjust or override the decisions suggested by the system, to full automation. The latter introduces opaqueness to the decision-making process and blurs responsibility for the outcomes.
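As a rough sketch of this distinction, the short Python fragment below contrasts a decision support workflow, where a public servant can accept or override an algorithmic suggestion, with full automation, where the suggestion is applied directly. All names and the scoring rule are hypothetical illustrations, not drawn from the article or any real system.

    # Hypothetical sketch only: decision support vs. full automation.
    # The scoring rule and all function names are illustrative placeholders.

    def suggest_decision(case: dict) -> str:
        """Algorithmic suggestion based on a simple placeholder risk score."""
        score = min(1.0, case.get("risk_flags", 0) / 10)
        return "refer for review" if score > 0.5 else "approve"

    def decision_support(case: dict, reviewer_override: str = "") -> str:
        """A public servant sees the suggestion and may adjust or override it."""
        suggestion = suggest_decision(case)
        return reviewer_override if reviewer_override else suggestion

    def full_automation(case: dict) -> str:
        """The suggestion is applied directly, with no human review step."""
        return suggest_decision(case)

    case = {"id": "A-123", "risk_flags": 7}
    print(decision_support(case, reviewer_override="approve"))  # human overrides
    print(full_automation(case))                                # no override possible

The point of the contrast is that only the first workflow leaves room for the case-by-case adjustment described above; removing that review step is what makes the process more opaque and blurs responsibility for the outcome.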

AI in the public sector

There are three different dimensions to the implementation of AI in public organisations:

1. Technology dimension

This takes the form of experimentation in AI laboratories set up to test new tools and algorithms. These are driven by the availability of the technology rather than by a problem-centric approach. A purely technology-driven approach might also lead to relatively low acceptance of the technological solution.

2. Decision-making dimension

The automation of decision-making promises to increase efficiency and effectiveness by making decisions faster and learning from past decision-making procedures. It also highlights a shift in power dynamics, given deeply held assumptions about whose experience or expertise should apply in decision-making.

Should authority rest with the algorithm, which can make presumably more rational decisions based on more data, or with the public manager, whose authority is based on human experience and contextual information applied case by case?

3. Organisational transformation dimension

This dimension focuses on the promise of AI to replace the public sector workforce with automated practices and decision-making as part of modernisation efforts and the digital transformation of the public sector. While AI may replace human decision-making, there are human resources challenges in how to retrain or redeploy the existing workforce to conduct meaningful work and focus on individual complex cases.

The bottom line

It is still early days for AI-based digital transformation in public administration. AI technology is progressing quickly, and the connection between humans and machines is likely to develop in new ways. The boundary between humans and machines may become less distinct, whether in communication or in physical products related to services. Using AI for data-based decision-making also means grappling with questions of ethics and the safety of data use.

Want to read more? 

Implementing AI in the public sector – Ines Mergel, Helen Dickinson, Jari Stenvall and Mila Gasco, Public Management Review, July 2023

The article is available via individual or institutional access through a library service such as a university library, state library or government library.

Each fortnight The Bridge summarises a piece of academic research relevant to public sector managers.

 
