The three questions agencies must ask themselves to use AI responsibly

When it comes to implementing the Government’s new policy on the responsible use of AI, one element in particular will be critical: AI leadership, especially the appointment of ‘accountable officials’. But what matters most when making these appointments? ANU experts Maia Gould and Ellen O’Brien explain.

Read time: 4 mins

This article is the second in a series on implementing the Australian Government’s Policy for the responsible use of AI in government by ANU experts Maia Gould and Ellen O’Brien. You can read the first article in the series here.

In our previous article about the responsible use of AI in government, we argued that agencies should focus on AI leadership in the appointment of their ‘accountable official.’ So what questions should agencies ask themselves when making these appointments?

On one hand, they could treat the role as an administrative one, looking to appoint strong project managers to keep the policy’s implementation on track.

On the other, they might view the opportunity more strategically, focusing on leadership that engages stakeholders and secures collective effort.

But in either case, there are three questions that prospective AI leaders will need to answer.

Question 1: What is your organisation-specific definition of AI?

A shared definition of AI is a necessary measuring stick for an accountable official. Without it, it’s impossible to assess whether, or how well, the agency is achieving its AI goals.

Generalisations and vague language are common in discussions of AI. But vagueness creates risk and impedes innovation.

Achieving a shared definition of AI is critical not just for the accountable official but for everyone at the organisation. This means empowering people to form, and then ask, good questions about what AI is and how it works. Training that builds confidence and demystifies AI will be central to its responsible use.


Question 2: How are you expecting staff to access AI? And how will the policy’s implementation interact with established IT architecture?

AI can’t simply be bolted onto an organisation: it becomes embedded in everyday processes. Agencies must also account for AI use at both the enterprise and the consumer level.

Typically, enterprise AI is managed by a dedicated team, accessed through established information systems, and used behind the scenes to organise information. Consumer AI, on the other hand, describes AI tools that staff use for the sake of their own productivity.

But this line is becoming increasingly blurred. Tailored consumer-style products are being rolled out in enterprise software – Microsoft’s Copilot and Google’s Gemini are the obvious examples – and are operating side by side with enterprise-wide systems. Often, both kinds of tools access the same datasets.

Despite this – and perhaps because of it! – it’s worth distinguishing between use cases where AI is automating processes for a whole organisation, and those where individuals are using AI tools to increase their day-to-day productivity.

In each case, the level and kind of risk will be different, and special attention needs to be given to situations combining both. Recognising this distinction between enterprise and consumer AI will be helpful to agencies implementing the policy.

This is why one of the first actions an accountable official should undertake is an initial audit of AI use, weaving all these threads of AI use across an agency into one tapestry that leaders can examine.

This would meet the responsible use policy’s minimum requirement of a transparency statement, but agencies shouldn’t stop there. Taking ongoing responsibility for the use of AI means opening formal communication channels that let accountable officials talk directly to the staff making daily decisions about AI use, and vice versa.

Question 3: How will you balance the ‘here and now’ with the ‘there and then’? 

There is a cheeky aphorism that AI is best defined as ‘technology that isn’t here yet’. While generative AI may feel very present, there’s a grain of truth to this.

We haven’t yet seen the major, enterprise-wide efficiency gains promised by developers of AI tools. What AI can be used for might change suddenly, but many applications have yet to replicate the success of its best use cases.

This makes accountability for AI implementation difficult, because teams are managing the risks posed by both existing capabilities and possible future ones.

Right now, many organisations are scrambling to build a coherent approach. They are trying to manage established enterprise-level AI use while preparing for a future in which many more staff use consumer AI at work.

To borrow a phrase from author Dan Davies’ book The Unaccountability Machine, they need to balance the ‘here and now’ with the ‘there and then’.

Responding to AI in the ‘here and now’ means supporting teams to manage emerging uses locally, in line with overarching principles and policies. Preparing for the ‘there and then’ demands a different suite of networks and skills, such as strategic foresight and imagination. The two also operate on different timeframes.

Not only will an effective implementation plan need to treat them differently; they may need separate approaches altogether, led by different teams. In any case, it’s worth considering how accountable officials will connect and coordinate these teams, survey these different time horizons, and maintain foresight of incoming policies that could affect their work.

Leading AI well means improving capability across the organisation and ensuring that teams responsible for AI can communicate freely. While there’s no silver bullet for efficiently implementing the responsible use policy, setting aside the resources to get started early will minimise disruption.

Authored by

Ms Maia Gould
ANU School of Cybernetics

Ms Ellen O’Brien
ANU School of Cybernetics