Global AI regulation is spreading quickly. How can leaders get ready?
In the third article of their responsible use of AI series, ANU experts Ellen O’Brien and Sarah Vallee stress the need for policymakers to see AI regulation in its international context
Read time: 4 mins
This is the third and final article in our series on implementing the Australian Government’s Policy for the responsible use of AI in government by experts at ANU School of Cybernetics. You can read the first article in the series here and the second here.
As government agencies wrangle their AI use to comply with Australia's Policy for the responsible use of AI in government, they should also consider global regulations.
Compliance starts with following Australian law. But crucially, agencies should also watch the wave of AI regulations reaching shores all over the globe, and ride it by voluntarily adopting AI standards gaining traction overseas.
Doing so would make them early adopters at the cutting edge of AI regulation. It would also help them navigate a storm of proposals, regulations, and standards. To ready the ship for the journey ahead, let’s survey the landscape of global AI governance.
Take the European Union's AI Act, which came into effect in August 2024. It aims to lead by example in minimising AI risk. Similar frameworks are emerging in Canada, the UK, and parts of the US.
Across these efforts, several common categories of AI risk have emerged:
Risks to individuals, such as algorithmic decision-making excluding people from employment or from eligibility for a government payment.
Risks to demographic groups, especially Indigenous peoples worldwide. In Australia, ChatGPT has been shown to reproduce bias against First Nations Australians when prompted to write a ministerial brief.
Risks to information, such as AI 'hallucinations' generating false information that is then repeated and spread.
Risks to the environment, from the high energy and water use of AI systems.
Risks to intended use, where a program's outputs go beyond its regulated use case. For instance, users misusing tools to create disinformation, or datasets containing personal information being used beyond their intended purpose.
A series of policy statements and regulatory tools, especially the recently released Voluntary AI Safety Standard and the mandatory AI guardrails proposals paper, addresses these risks. And in even better news for agencies, additional standardised approaches are ready for mass adoption too.
Specifically, the International Organization for Standardization's AI management system standard (ISO/IEC 42001) has been designed to address organisational AI risks. Standards like these offer organisations a 'one-stop regulation shop' worth considering.
And the ISO standard isn’t just ready for implementation. It’s already being adopted by organisations in several jurisdictions, including the European Union. It features prominently in the mandatory guardrails paper published by the Australian Government.
Additionally, ISO has published 32 other standards on AI, covering risk management, functional safety, the treatment of unwanted bias, trustworthiness, data quality, and governance.
By adopting standards early, agencies gain access to a community of support that can help them mitigate risk, not only once a tool is in use but also at procurement, an area where, as the Digital Transformation Agency has identified, guidance is currently lacking.
Of course, aligning the sails of local AI governance with global winds is just one step to responsible AI regulation.
To make key decisions on AI governance, agencies will need to go further and recognise that in-house expertise matters: ultimately, agencies themselves will determine whether a system can be responsibly deployed.
The benefits of early adoption will be invaluable to this process, even more so now that the Senate Committee on Adopting AI has recommended new, dedicated whole-of-economy legislation to regulate high-risk AI, similar to the European and Canadian approaches.
Policymakers can use international standards to truly become AI leaders and lay the foundation for responsible use of AI across the entire Australian economy.