Read time: 5 mins
Based on ChatGPT in public policy teaching and assessment: An examination of opportunities and challenges by Daniel Casey, published May 2024.
Researchers from ANU have investigated ChatGPT’s potential role in public policy. They tasked students with using the tool to write a policy brief to an Australian Government minister and with reporting on their experience. The findings reveal the shortcomings of ChatGPT for policymaking and highlight the need to rethink the role of the new technology in teaching and assessment.
1. Students were initially optimistic about ChatGPT’s abilities, but they became more sceptical after the exercise, as the tool couldn’t produce something they could present to a minister.
2. ChatGPT-generated briefs misrepresented data, contained factual errors, weren’t responsive to the policy environment, and struggled to respond to prompts about Indigenous Australians.
3. The findings highlight the importance of educating students about the limits of AI so that it can be used ethically and appropriately.
The launch of ChatGPT in late 2022 led to an explosion of interest in artificial intelligence (AI), and policymakers were no exception.
As part of a public policy course, ANU students were tasked with using ChatGPT to write a policy brief for an Australian Government minister.
They then reflected on the process individually and in groups, and researchers analysed their experiences to shed light on how ChatGPT might be used in policymaking.
At the start of the process, students agreed that ChatGPT would benefit public servants, potentially saving time and mental effort on certain tasks.
However, by the end, they were pessimistic about the potential for ChatGPT to contribute meaningfully to policy development, at least any time soon.
According to the research, while ChatGPT produced something that ‘looked good’ and ‘read well’, students felt the text was shallow, and didn’t provide anything that could be called analysis. They also found that its briefs often contained factual inaccuracies and misrepresented data.
For example, when a student asked ChatGPT to add statistics to the brief and then prompted it to verify them, it admitted that the statistics were “simulated or fictional statement(s) created for the purpose of the policy brief”.
ChatGPT especially struggled with a ‘First Australians Impact Assessment’, likely because the views of marginalised groups such as Indigenous Australians are underrepresented in its training data.
It also failed to adapt its output to the political environment, according to the study. One student prompted ChatGPT to re-generate a brief through a ‘conservative’ or ‘progressive’ lens. In response, ChatGPT produced the same brief but added the words ‘progressive’ and ‘progressivism’ a combined 27 times in a 600-word brief, without any substantive amendments. Similarly, when asked for a ‘conservative’ perspective, ChatGPT simply removed all occurrences of the word ‘progressive’, replacing it with ‘conservative’ in 18 places.
After the experience, students supported greater use of ChatGPT in classrooms. According to the findings, education on appropriate, ethical engagement with AI is needed to mitigate the risk of AI-generated misinformation affecting public policy. Students and policymakers need to become more familiar with ChatGPT and similar tools in order to identify where they work well, and where to avoid them.