
Generative AI is having a moment and governments need to pay attention

7 February 2023



[Image: a robotic hand pressing a keyboard on a laptop]

This article by Annan Boag, assistant commissioner, privacy and assurance, at the Office of the Victorian Information Commissioner, is based on work he did as part of the Work-Based Project component of the Executive Master of Public Administration in 2022. It first appeared in The Mandarin and is reprinted with their permission.

Like many others, I have been amazed by the text and art people are sharing online, made with nothing but a few words typed into an artificial intelligence (AI) generative deep-learning model (and, some say, the uncredited work of millions of creators). 

Working in privacy and technology regulation, I take a professional interest, too. So, when the latest tools became publicly available in December, I had to try them. 

I asked an AI to make a rhyme about the law I work with, the Victorian Information Privacy Principles (IPPs). It’s at the end of this article and I will let you judge its quality. 

But first I want to talk about the use of AI in government, something I spent much of 2022 thinking about.  

Governments are asking how they can – and if they should – use AI

Generative AI is having a moment. ChatGPT lets you ask a bot to do anything text-based that you can imagine. Draft a contract; write a poem; be a ‘choose your own adventure’ book. DALL·E 2 will draw whatever you describe. A robot’s self-portrait; Frida Kahlo having tea and scones; an avocado-shaped armchair. 

These things once took a lot of time. Human time. Now they are on the cusp of being automated. It’s natural to ask what the systems that facilitate this can do. And what they should be allowed to do.

In 2022, I did a year-long group research project on AI ethics as part of an ANZSOG Executive Master of Public Administration. I interviewed more than 20 people across Australian governments who had implemented AI systems, asking how they considered ethics when doing so. At one extreme, some people, after doing a short course in machine learning, used AI in their jobs to try out their new skills. They did amazing things, and soon their whole team was using a system that helped them work better and faster.

How did they know the protections they had built in to address security, privacy, and bias were right? They were confident this could be addressed easily: “because we’re developers, we know how everything works under the hood. It’s not a big problem.”

We spoke to others who applied a lot of process to ethics, including privacy impact and human rights assessments, ethics committee approvals, and risk control plans. But these assessments sometimes happened after development was underway. Ethical processes were “stage gates”, one interviewee said. Many interviewees had thought about ethics deeply – and, importantly, saw it as aligned with the missions and values of their agency, and of public service.

“We’re an organisation that is here to support people … we wanted to make sure we were doing that in an appropriate way,” said one interviewee. 

Another said: “We took the simple approaches, if we’re sitting there looking at a project, and we feel uncomfortable, if we take the AI part out, we’re still uncomfortable, we’ll take the data part out, are we still uncomfortable? It’s not how we’re doing it, it’s what’s being done”. 

Established procedures to assess the ethics of AI and automated decision-making – only NSW and New Zealand have these – gave people confidence to innovate. Some said the absence of a clear ethical framework and regulation was hampering innovation. Where people didn’t know how to consider and address a nascent ethical concern, they might not proceed with a technology that could otherwise deliver public value. In other cases, the lack of a framework or regulation was causing organisations to unknowingly take on ethical risks they hadn’t considered. 

And at the start of December last year, the discussion of AI went mainstream.  

Change is fast and accelerating – can government keep up?

Most people who have used the sort of systems I’ve described above will ask: what does this mean? What won’t be apparent to people wading in now is how quickly things are moving.

Ten months ago, at the start of my project, our team used generative AI as a planning tool. The output was crude, and it took skill to write a prompt that gave a useful answer. Sometimes what looked good was really nonsense. In November, the tool we were using released a new model. I asked it for “a list of questions to ask people working in large government departments, in an interview for a research project about AI ethics”. The questions (written in seconds) were better than the ones I wrote for my interview plan.

Then at the start of December, ChatGPT’s public beta came out. It guides its millions of users to ask the right questions to get the output they want, and it tries to tell them when it can’t help. I like to think I can still write and think better than an AI. But things are moving fast.

It is tempting to pause and reflect. What can these technologies do? How can we use them? Should we use them? Governments shouldn’t sit on the sidelines because privacy, security, and ethical questions are hard to answer. Agencies need to understand and use these systems well to regulate them effectively and manage the changes they will bring. 

We need to use new technologies, safely and ethically, to learn how we can use them to better support and protect the communities we serve. The people I spoke with who were doing AI and ethics well were those who understood the technology; were connected and collaborative; and were guided by clear values. 

How do we build these characteristics in ourselves, our teams, and organisations? And how can we do that quickly, when every month these technologies look more likely to change the world? 

I’ll leave you with a final example of generative AI. I asked for a rhyme for each of the 10 information privacy principles in the Privacy and Data Protection Act 2014 (Vic). 

Will this technology replace lawyers (or poets)? Absolutely not. Yet. 


‘It’s IPP to be square’: an AI-generated poem 

IPP 1 – Collection 

An organisation must not collect information about you/ Unless it’s necessary for what they do/ They must collect it fairly and not in an intrusive way/ And tell you what they do with it, okay? 

IPP 2 – Use and disclosure 

An organisation must not use or share/ Information for a second affair/ Unless the individual’s consent it gets/ Or when public interests it protects 

IPP 3 – Data quality 

Data must be clear and true/ An org must make sure it’s not askew/ Correct and current, it’s never too late/ To ensure your data’s up to date. 

IPP 4 – Data security 

Data should be safe and sound/ Away from prying eyes around/ And once the data is not needed/ Destroying it should be heeded. 

IPP 5 – Openness 

Organisations must show their privacy plan/ And make it accessible if they can/ Inform the people what data they store/ It’s the law, they must keep score. 

IPP 6 – Access and correction 

IPP 6 is here to protect your rights/ It gives you access to your details and insight/ Knowledge is power, so be sure to inquire/ Access and correction are yours to acquire. 

IPP 7 – Unique identifiers 

Unique identifiers are not our way/ Unless it’s necessary to do our work today/ Assigning unique IDs to individuals/ Only when there’s a valid legal reason. 

IPP 8 – Anonymity 

Anonymity is the name of the game/ No one needs to know your name/ No need to give yourself away/ Your identity can stay out of play 

IPP 9 – Transborder data flows 

Transferring data to those outside of Vic/ Ensure the principles are upheld, that’s the trick/ Privacy must be kept, no matter where it’s sent/ Substantially similar laws, or consent 

IPP 10 – Sensitive data 

No matter the data, one must consent/ Should it be sensitive, this is the intent/ Unless it’s required by law and such/ The collection of sensitive information is not in touch