How can government use AI responsibly?

Australia’s new Policy for the responsible use of AI in government is a chance for agencies to build for the future. In the first of a series of articles on navigating this moment of transition, ANU experts Maia Gould and Ellen O’Brien break down its three-phase framework and highlight the risks and opportunities the policy presents.

Read time: 7 mins

By Maia Gould and Ellen O’Brien, researchers at ANU School of Cybernetics.

The Australian Government’s Policy for the responsible use of AI in government came into effect on 1 September 2024.

The policy lands alongside the Department of Industry, Science and Resources’ release of the Voluntary AI Safety Standard and a proposals paper on mandatory AI guardrails, which agencies should also consider when implementing the policy.

The policy requires agencies to appoint an ‘accountable official’ responsible for its implementation within 90 days, and to publish a ‘transparency statement’ within six months.

This is an opportunity for public service leaders. Implementing the policy is a chance to deeply engage with how AI will change their organisations. There are several actions worth considering.

Embrace the ‘applied governance’ approach to AI

Ethics has been central to public debate about AI for many years, with the Australian Government publishing its first Artificial Intelligence Ethics Framework in 2019.

There has been a recent move towards referring to ‘responsible AI’ rather than ‘AI ethics’, in an attempt to make implementation more achievable. However, widespread translation of these principles into action remains elusive.

The new policy moves government towards putting AI ethics principles and risk frameworks into action through an ‘applied governance’ approach.

To make the most of its implementation, agencies should embrace this approach and ensure responsible AI objectives are embedded in core, everyday activities. These objectives should be resourced appropriately and reviewed regularly.

This will allow organisations not simply to implement AI effectively, but to lead on AI.

Decide what’s needed from an AI leader

The policy requires that agencies appoint ‘accountable officials’. It’s up to the agency whether this person – or team – is drawn from a technical or business role.

This uncertainty is common in industry too, where businesses seem unsure where AI leadership fits in an organisation: should it be left to subject matter experts, or sit with strategic leadership roles?

It can be tempting to leave strategic decision-making about AI with those who ‘understand the technology’. Where no one in the organisation truly does, decision-making is effectively relinquished to external vendors. Both situations can lead to undesirable outcomes.

Treat leading AI as a system-wide objective

Many AI leadership decisions need to be made at a strategic level. This is because AI adoption isn’t just about the technology, as the policy acknowledges. It’s first and foremost about how AI can be used in a specific context to help the organisation achieve its objectives.

With these ideas in mind, agencies will be ready to take the next step. This is where the policy’s three-phase framework for implementation comes in.


Phase 1: Enable and prepare

First, agencies appoint an accountable official and prioritise training in AI fundamentals for all staff.

AI is a large and complex topic, and risky and harmful uses of AI are often caused by a lack of knowledge. Better information and more open communication about AI can mitigate this risk.

What should be included in a training package on the ‘fundamentals’ of AI?

  • Breaking down the AI lifecycle, from development through to decommissioning.
  • Seeing AI as made up of components and understanding those components.
  • Recognising that AI is in constant interaction with changing systems and variables, including people, regulations, laws, politics, and the environment.
  • Understanding the crucial role data plays in AI, and how data risks determine AI risks.
  • Acknowledging that AI systems change over time, and that governance and risk management must accommodate this variability.
  • Exploring the different ways people understand AI and integrating their varying perspectives.
  • Perhaps most important of all, making AI fun. Starting from a place of curiosity will lead staff to seek out more knowledge about AI on their own initiative.


Training in these AI fundamentals will help agencies achieve the goals of the new policy. Given how many Commonwealth officials are already well-versed in AI, there’s also value in fostering a teaching and learning community among them.


Phase 2: Engage responsibly

The policy instructs agencies to address AI risks with proportional, targeted mitigation. In this phase, leaders should use their organisation’s improved AI literacy to scenario-test their specific AI use cases.

Relying on general risk categories will only go so far. Agencies will need to understand the risks at play in the specific AI system they use, in the context in which it’s deployed.

The policy also requires the use of AI to be ethical, responsible, transparent and explainable. This phase is a good moment to interpret this requirement and align the agency’s use of AI with its broader purpose and strategy. That interpretation should be grounded in the agency’s specific organisational context and reflected in its implementation plan.


Phase 3: Evolve and integrate

In the third phase of implementation, responsible use of AI should become a living process.

The central danger in implementing the new policy is treating the responsible use of AI as something to ‘set and forget’. Implementation is an opportunity to do better than that: building systems that monitor AI use and are prepared to adapt to change.

Imagine the difference between a fad diet and a change in lifestyle. Often, it’s better to identify a few things that really can change and invest in those first. Then, those initial wins build momentum and create lasting change. Responsible AI is no exception. Agencies that take this policy as an opportunity to up their game will be the ones that reap the time-saving rewards AI has to offer.

Authored by

Ms Maia Gould
ANU School of Cybernetics

Ms Ellen O’Brien
ANU School of Cybernetics