
The impact of AI on regulators and what it means for our sovereignty

7 August 2024

By Dr Kate Conroy

Queensland Government Customer and Digital Group

Using AI to better communicate regulatory obligations

AI, particularly large language models (LLMs), can be valuable tools for exploring complex subjects, enhancing understanding and facilitating conversations across many knowledge domains and stakeholders. For instance, regulators can develop detailed prompts, including constraints, to explore legislation relevant to a specific context or to probe potential interactions between different policies, frameworks and obligations. A regulator could ask an LLM to make content easier to understand by specifying a Flesch-Kincaid Grade Level to support diversity and inclusion, or by generating a tailored and intuitive case study. Used as research and communication tools, LLMs can significantly improve regulators' knowledge and skills, leading to more informed and effective conversations. However, regulators need to be careful when using AI for administrative decisions that carry legal obligations and consequences; see the Commonwealth Ombudsman's Automated Decision-making Better Practice Guide.
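
To make this concrete, here is a minimal sketch, in Python, of how a readability constraint might be built into a prompt and then verified against the published Flesch-Kincaid formula. The prompt wording, target grade level and scoring helper are illustrative assumptions, not a prescribed method:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid Grade Level using the published formula:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59.
    Syllables are estimated by counting vowel groups, which is rough
    but adequate as a sanity check."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def build_prompt(obligation_text: str, grade_level: int = 7) -> str:
    """Compose an LLM prompt that constrains reading level and asks for a case study."""
    return (
        f"Rewrite the following regulatory obligation so it reads at a "
        f"Flesch-Kincaid Grade Level of {grade_level} or below, then add a "
        f"short, realistic case study showing how it applies in practice:\n\n"
        f"{obligation_text}"
    )

# Example: score a draft, and re-score the LLM's rewrite to verify the constraint held.
draft = "Entities must retain transactional records for a period of not less than seven years."
print(build_prompt(draft, grade_level=7))
print(f"Draft grade level: {flesch_kincaid_grade(draft):.1f}")
```

Whichever tools are used, the rewritten content still needs human review against the source legislation before publication.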

The need for good data that reflects Australia

As a middle-power nation, Australia relies on AI tools developed and trained on data that may not reflect Australian values, ambition or narratives. This raises critical questions about national sovereignty and the importance of preserving our cultural heritage. Historically, Australia has invested in cultural institutions like the ABC, Screen Australia and the National Archives to safeguard our stories and traditions. In the digital age, it is crucial to improve the representation of diverse and comprehensive Australian data in AI systems.

Investing in better data and in data governance frameworks, including Indigenous data governance, is essential to controlling how AI tools behave. While these areas may not receive as much attention as AI technologies themselves, they are fundamental to creating AI solutions that benefit our communities. Initiatives like the Pama Language Centre's application of transfer learning to Cape York languages, and the Western Yalanji Aboriginal Corporation's work equipping Indigenous rangers with autonomous drone technologies, demonstrate how strategic investments in data and AI can empower local communities and help preserve cultural heritage.
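
As a rough sketch of the transfer-learning pattern behind projects like these (the model name, toy corpus and training settings below are placeholders, not details of the Pama Language Centre's actual work), fine-tuning a pretrained multilingual model on a small, community-governed corpus might look like this:

```python
# Minimal transfer-learning sketch with Hugging Face: adapt a pretrained
# multilingual masked language model to a small, low-resource text corpus.
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "xlm-roberta-base"  # placeholder pretrained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Placeholder sentences; real projects would use community-governed language data.
corpus = Dataset.from_dict({"text": [
    "Example sentence in the target language.",
    "Another example sentence in the target language.",
]})
tokenised = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenised,
    # Masked-language-modelling collator: randomly masks tokens for the model to predict.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()  # pretrained weights adapt to the new corpus (transfer learning)
```

The training loop is the easy part; as the examples above suggest, it is the governed, representative corpus that determines whether the resulting model actually serves the community.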

In addition to adopting commercial off-the-shelf AI tools, governments need to adapt AI solutions to their decision contexts and invest in their own AI tools. Governments should consider directing resources towards building robust data relationships with stakeholders, including rural, remote, vulnerable and marginalised communities. This approach would strengthen our national culture, identity and capability, ensuring that future AI tools serve to advance Australia's unique needs and values. Maintaining a focus on good data is vital for our national integrity and progress in the AI era. Existing resources, tools and data can be made more accessible when integrated with AI solutions, potentially enhancing regulatory practices and communication.

Will AI give regulators a productivity boost?

Regulators should be aware of the limitations of LLMs when it comes to assisting with their tasks. For example, generating meeting summaries or minutes from transcripts may initially seem like a productivity boost, but AI-generated minutes often lack the nuanced political context and individual perspectives critical for accurate and meaningful records. These summaries can come across as generic and fail to capture the arguments and rationale behind decisions that are essential for justifying government actions and maintaining transparency.

While AI can be helpful in converting conversational transcripts into more polished written prose, this process still requires significant manual intervention and does not necessarily save time. Relying too heavily on AI could also harm a regulator's credibility if prompts are not carefully curated and outputs are not rigorously checked and corrected. AI offers some benefits, but it should be used cautiously and subject to human oversight to ensure accuracy and completeness in official documents.

Finally, LLMs homogenise human writing, which can make recipients of LLM-generated text feel disconnected from the sender, possibly even devalued. Writing is not just about precision; it is about maintaining human relationships and trust. Traditional, non-AI-based writing that reinforces the human connection remains paramount, particularly in post-COVID remote work environments and predominantly digital interactions with the public.

AI may be eroding our ability to stay epistemically safe

Monitoring the use of AI by individuals and organisations is essential to maintain epistemic safety and protect the integrity of human cognitive processes. By producing persuasive outputs, LLMs fundamentally alter the way we interact with information, potentially eroding critical thinking skills, particularly when that use is combined with pressure to be more productive to justify the cost of the tools. This cognitive shift can lead to automation bias, where users place undue trust in AI outputs and use them without adequate scrutiny, compromising decision-making quality. The cognitive implications of AI use extend beyond privacy and security concerns, touching on the core of human relationships and societal structures. As AI continues to be integrated into daily activities, feedback mechanisms and controls on its use and adoption must be instituted to ensure that AI tools augment rather than undermine human thought.

Why do we need to regulate AI?

Regulating AI is crucial for maintaining balance and fairness in our society. It ensures that power is not concentrated in the hands of a few major tech companies, which can operate with minimal oversight and potentially impose harmful practices. The Political Philosophy of AI (2022) highlights the importance of understanding who makes decisions about AI and digital platforms, and who has the authority to intervene. Another book, Cloud Empires: How Digital Platforms Are Overtaking the State and How We Can Regain Control (2024), discusses how powerful online platforms often act autonomously, making it difficult for governments and societies to influence their actions effectively. A lack of regulatory oversight can lead to scenarios where international tech giants dominate the communications and digital services Australians rely on, without accountability. For example, during the rollout of the COVIDSafe contact tracing app in 2020, Apple and Google demonstrated their dominance by disregarding the Australian government's efforts, showcasing the limitations of national power against global tech corporations.

Thinking ahead

In the next year or two, we can expect significant developments in the regulation of artificial intelligence in Australia. After an extensive consultation on safe and responsible AI in 2023, the Australian Government is now in the process of determining AI guardrails that may lead to both soft and hard controls, such as best practice guides, frameworks, reinforcing or adapting existing regulation, and potentially creating AI-specific regulation. We can expect regulatory pressure on data, digital platforms, people, AI use, and the methods used to develop, test and certify AI systems.

The recent adoption by the Australian, state and territory governments of the National Framework for the Assurance of AI in Government: A joint approach to safe and responsible AI (2024) exemplifies a collaborative and flexible approach to AI regulation, achieving consistency across jurisdictions while preserving autonomy for individual government organisations.

In the years ahead, approaches to AI risk management are likely to consolidate across sectors. This shared awareness should increase AI risk literacy among employees and foster a more balanced relationship between innovation opportunities and risk management, creating a safer and more progressive environment for AI development in Australia.