Guest editor: Dr Bernadette Hyland-Wood, PhD, AFHEA (Indigenous Knowledges)
The objective is clear: to ensure AI technologies benefit all Australians while mitigating unintended harms to individuals and the economy. The specifics of implementation, however, remain a work in progress. The landscape for using artificial intelligence (AI) reflects a dynamic interplay between innovation, risk management, and ethical oversight. At the October 2024 ANZSOG-auspiced National Regulators Community of Practice (NRCoP) workshop, one regulator said, ‘Whether you like it or not, it’s coming.’ Others expressed concern about safe and responsible practices as public servants embrace AI for operational efficiencies, such as handling customer inquiries or complaints.
Others shared concerns about customer privacy and security, ethical issues in decision-making, and a trend some have observed over the last year: a general overreliance on AI applications. Using AI systems to automate routine compliance checks, ensure accuracy in auditing and reporting, and detect fraud remains next year’s work.
The current state of play in regulating AI in Australia can be summed up as a work in progress. Australia has moved beyond generic technology regulation and is considering AI-specific laws to address the unique challenges and opportunities posed by AI technologies. The government closely monitors AI policies and international standards, particularly the European AI Act, which advocates for trustworthy AI through fairness, transparency, and accountability.[1] Participation in G7-led discussions on global AI standards will help Australian policies align with those of key trading partners and allies.
The Commonwealth government is considering regulating AI applications based on their risk levels. The Department of Industry, Science and Resources (DISR) is establishing guidelines for the ethical development, deployment, and monitoring of AI technologies.[2] DISR has indicated a commitment to balance innovation with the need to protect privacy, ensure fairness, and mitigate risks associated with high-risk applications of AI.
High-risk uses include critical infrastructure, healthcare, financial services, law enforcement, national security, and public safety systems. In a recent Response to Proposals Paper on introducing mandatory guardrails for AI in high-risk settings, I recommended that the definition of high-risk use cases be broadened to include AI systems used for decision-making, particularly involving vulnerable populations, including Aboriginal and Torres Strait Islander peoples.[3]
An update to the Privacy Act 1988 includes new provisions tailored for AI systems, such as:
- Enhanced requirements for informed consent in AI-driven data collection and processing.
- Mandatory disclosures for automated decision-making, ensuring transparency when AI is used to make decisions impacting individuals (e.g., in finance or hiring).
- New rights for individuals to challenge decisions made by AI systems and seek human intervention.
Notably, AI assurance frameworks (tools to assess AI systems’ reliability, fairness, and transparency) are gaining traction. These mechanisms aim to build trust by certifying AI systems, particularly those used in critical sectors such as healthcare, finance, government services, and defence.
Australian regulators are engaging with frameworks like Queensland’s Foundational Artificial Intelligence Risk Assessment (FAIRA) framework to address AI’s societal and economic risks.[4] These initiatives provide a structured approach to identifying and mitigating AI risks. FAIRA is complex, requiring considerable expertise and cooperation between platform vendors, IT departments, and information and security officers. It will take years before FAIRA is integrated and becomes business as usual for government.
How pre-approval, certification, or mandatory impact assessments will be achieved to avoid potential harm from insufficient, incorrect, or incomplete data, or from algorithmic bias, remains to be solved. Modern software development is massively abstracted: AI systems are complex combinations of software components and data, sometimes of unknown provenance. Considerable work remains, and a co-design approach involving regulators, software developers, data scientists, and communities impacted by AI systems should be a priority.
The QUT Centre for Data Science[5], QUT Digital Media Research Centre[6], and the ARC Centre of Excellence for Automated Decision-Making and Society[7], along with industry and civil society, are contributing to AI policies and frameworks through engagement in AI advisory boards, roundtables, workshops, and public submissions.
Queensland’s QChat
At the October 2024 NRCoP workshop, we heard about the pioneering work underway with QChat, the Queensland Government’s AI assistant. QChat is a closed instance of ChatGPT that provides a text-based virtual assistant built on generative AI technology. As of this writing, QChat has more than 7,000 authorised users, with a high percentage of returning users. The AI Unit is focused on how public servants can work safely with AI, and the team is actively determining how to assure that AI products are fit for purpose and that risk is managed.
Moving forward
Public servants are adopting AI systems at speed. Some are accelerating AI experimentation, supported by digital capability uplift programs (e.g., self-paced, one-on-one, and group-based activities). Others are focused on developing measures to mitigate risk and ensure transparency and accountability of AI systems. While Australia is proactive in monitoring EU and OECD developments, our challenges will continue to be:
- Leveraging interoperable frameworks for data sharing and AI governance.
- Fostering innovation in AI with balanced regulation.
- Addressing the skills gap in AI literacy and governance expertise.
In summary, as we wrap up 2024, regulators are actively navigating the AI landscape. Fostering cross-sector collaboration and focusing on ethical, safe, and accountable AI use will require ongoing engagement at all levels of public service, civil society and communities such as the ANZSOG National Regulators Community of Practice.
Resources
Australian Human Rights Commission. Adopting AI in Australia. 2024. https://humanrights.gov.au/our-work/legal/submission/adopting-ai-australia
Australian Government Department of Industry, Science and Resources. Safe and responsible AI in Australia Consultation, Australian Government’s interim response. 2024. https://apo.org.au/node/325498
Devitt, K., Gan, M., Scholz, J., and Bolia, R. Technical Report: A Method for Ethical AI in Defence. Publication number DSTG-TR-3786. https://www.dst.defence.gov.au/publication/ethical-ai
Hyland-Wood, B., Snoswell, A., et al. (2024). Response to proposals paper on introducing mandatory guardrails for AI in high-risk settings. https://apo.org.au/node/328698
Queensland Government, FAIRA Framework: Foundational artificial intelligence risk assessment framework. 2024. https://www.forgov.qld.gov.au/information-and-communication-technology/qgea-policies-standards-and-guidelines/faira-framework
[1] The European Union Artificial Intelligence Act. https://artificialintelligenceact.eu
[2] Australian Government Department of Industry, Science and Resources. Safe and responsible AI in Australia Consultation, Australian Government’s interim response. 2024. https://apo.org.au/node/325498
[3] Hyland-Wood, B., Snoswell, A., et al. (2024). Response to proposals paper on introducing mandatory guardrails for AI in high-risk settings. https://apo.org.au/node/328698
[4] Queensland Government, FAIRA Framework: Foundational artificial intelligence risk assessment framework. 2024. https://www.forgov.qld.gov.au/information-and-communication-technology/qgea-policies-standards-and-guidelines/faira-framework
[5] QUT Centre for Data Science, https://research.qut.edu.au/qutcds/our-work/
[6] QUT Digital Media Research Centre, https://research.qut.edu.au/dmrc/programs/
[7] ARC Centre of Excellence for Automated Decision-Making and Society, APO Collection: https://apo.org.au/collection/316968/automated-decision-making-society