
Artificial intelligence will transform the public sector, but at what cost?

Almost a year after ChatGPT ignited an AI gold rush, is the public sector ready for an AI makeover?

Computer vision, natural language processing (NLP) and autonomous robotics can transform defence, healthcare, transportation and policy making.

Since ChatGPT’s release in late 2022, governments around the world have raced both to use AI and to regulate it. The Australian public sector, though, has hurdles to clear before it can reap the rewards of AI while assuring the public it will use the technology responsibly, legally and ethically.

The government still needs to pass amendments to meet the robodebt royal commission’s recommendations on automated decision-making (ADM). Experts say it also needs more oversight, more transparency and a strong dose of AI literacy.

Since 2018, when the cloud giants launched prepackaged AI services, the public sector has leaned towards computer vision. Scientists pair vision AI with drones to track endangered species. NSW Police use it to analyse CCTV and bodycam footage. Victoria Transport’s vision AI acts as a force multiplier for human eyes, monitoring traffic across 1,200 CCTV cameras. Queensland, Victoria and NSW use it to flag potential mobile phone use by drivers and, nominally, only issue penalties after human verification.

“Computer vision has been really popular in government because you don’t have to deal with the privacy concerns of citizen data and you can use it for everything from bridge inspections with drone technology to finding sharks as part of surf patrols,” says Dean Lacheca, a VP analyst for Gartner’s CIO advisory service.
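Much of this vision work now starts from pretrained, off-the-shelf models rather than bespoke research. As a rough illustration only, and not any agency’s actual system, here is a hedged Python sketch of how a detector pretrained on the public COCO dataset could flag frames that may show a mobile phone, with anything it finds queued for human review (the input file name is hypothetical):

```python
# A minimal sketch, not any agency's actual system: flagging frames that
# may show a mobile phone, using an off-the-shelf detector pretrained on COCO.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn

COCO_CELL_PHONE = 77  # "cell phone" category id in the COCO label list

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = read_image("camera_frame.jpg").float() / 255.0  # hypothetical input frame
with torch.no_grad():
    detections = model([frame])[0]

for label, score in zip(detections["labels"], detections["scores"]):
    if label.item() == COCO_CELL_PHONE and score.item() > 0.8:
        print("Possible phone detected - queue frame for human verification")
```

The detector only nominates candidates; as the state schemes nominally require, a human still makes the call.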

But to pursue AI on a grander scale, the government will need to consider how it could responsibly deploy, say, an AI-powered successor to robodebt or AI that draws on health data. AI development differs from conventional application development: AI governance will mean defining use cases, data collection, data provenance, bias risks and security controls. Will staff have the skills to know whether a new AI is operating as expected?

The government is considering new AI legislation, but Jeannie Paterson, professor of law and co-director of Melbourne University’s Centre for AI and Digital Ethics, says departments don’t need new laws, pointing to NSW’s administrative steps under its AI assurance and audit framework.

“Robodebt showed that stronger oversight obligations, better governance and just a higher level of AI literacy would go a long way in government. We need expertise as much as anything,” Paterson said. “The government should be the most transparent and most responsible, and it doesn’t need laws to do that.”

Gartner’s Lacheca says the state of AI in government is a consequence of executives punting AI literacy to IT.

“Unless you were one of the execs that had an AI project your minister wanted to announce, it was pretty much the techoes talking about what it could do and coming up with narrow use cases,” he says.

Public service minister Katy Gallagher could improve AI literacy via her cross-agency AI taskforce, set up to identify use cases and guardrails. Australia’s AI guidelines for public servants state that government information entered into ChatGPT or Google Bard must be restricted to information already in the public domain. The UK’s guidance goes further, explicitly warning against entering new policy positions into these services.

To ensure accountability, a “human-in-the-loop” (HITL) model will likely underpin most government AI deployments. According to Stanford University, HITL models assume humans are meaningful to AI optimisation rather than an obstruction to it.

But the NSW Ombudsman’s 2021 report on AI-influenced maladministration acknowledged “creeping control” in HITL, where humans increasingly trust a machine’s output despite conflicting evidence, or avoid overriding an AI decision to minimise their own accountability.

Future oversight may need to interrogate control creep in situations like Queensland’s, where some 1,800 penalties and licence suspensions piled up over two years after AI wrongly detected drivers using smartphones.
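What meaningful human review looks like in practice is mundane: the model nominates, a person decides, and both steps leave an audit trail. A minimal sketch, with all names and data hypothetical:

```python
# A minimal sketch of human-in-the-loop gating for automated detections.
# All names and data are hypothetical; the point is that the model only
# nominates cases, a human decides, and both steps are logged for audit.
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    confidence: float

def human_confirms(d: Detection) -> bool:
    """Stand-in for a review UI; a real system would also record the reviewer."""
    return input(f"Confirm offence in {d.image_id} ({d.confidence:.0%})? [y/n] ") == "y"

audit_log = []
for d in [Detection("frame_0413.jpg", 0.92), Detection("frame_0788.jpg", 0.81)]:
    confirmed = human_confirms(d)
    audit_log.append({"image": d.image_id, "score": d.confidence, "confirmed": confirmed})
    if confirmed:
        print(f"Penalty issued for {d.image_id}")  # hypothetical downstream action
```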

Definitions of AI also take in predictive analytics. NSW eHealth’s predictive risk-scoring AI helps detect sepsis quickly. NSW Transport’s predictive occupancy data for public transport enabled COVID-19 travel notifications. CSIRO’s Data61 helped NSW develop a predictive decision-support system to model road congestion.

“Generative AI in combination with analytics is likely to be where use cases really start to explode,” says Lacheca. For example, in call centre reporting, generative AI could define queries for top recurring issues and then generate reports. AI can also generate new documentation for old code.

“How do I de-risk a legacy system if I’m a government organisation? The biggest risk that I had with that old mainframe system is the fact that I’ve got no documentation,” he says.
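As a hedged illustration of that documentation use case, the sketch below uses OpenAI’s Python client to ask a hosted model to describe a legacy program. The file name is invented, and a real deployment would first need to clear the data-handling rules discussed above:

```python
# Hypothetical sketch: asking a hosted LLM to draft documentation for legacy code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

legacy_code = open("legacy_batch_job.cbl").read()  # hypothetical COBOL source file

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You write concise technical documentation."},
        {"role": "user", "content": f"Document what this program does:\n\n{legacy_code}"},
    ],
)
print(response.choices[0].message.content)
```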

NLP has made inroads into government. The Australian Taxation Office (ATO) launched its Alex chatbot in 2016 using NLP developed by Nuance, which Microsoft acquired for US$16bn in 2022, shortly before its reported US$10bn bet on OpenAI’s “foundation” models, including the GPT-4 large language model (LLM) and the DALL-E image generator.

Wherever there are large amounts of text or audio, NLP can be used for sentiment analysis, classification, machine translation, relationship extraction, speech-to-text and summarisation. LLMs perform exceptionally well at these tasks.
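To give a sense of how little code now sits between a team and these tasks, here is a minimal sketch using the open-source Hugging Face transformers library and its default public models; the sample text is invented:

```python
# A minimal sketch, assuming the Hugging Face transformers library and its
# default public models: a few lines cover two of the NLP tasks named above,
# sentiment analysis and summarisation.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
summarise = pipeline("summarization")

submission = ("The proposed spectrum licensing changes will materially "
              "increase compliance costs for regional broadcasters...")

print(sentiment(submission))   # e.g. [{'label': 'NEGATIVE', 'score': ...}]
print(summarise(submission, max_length=40, min_length=10))
```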

The ATO is using NLP models to spot tax evaders within the Panama Papers. Before ChatGPT’s release, the Australian Communications and Media Authority (ACMA) studied NLP uses for regulators. With NLP, regulators processing documents at scale can quickly mine unstructured text for facts, relationships and sentiments on given topic areas.

The Australian Securities and Investments Commission (ASIC) and the Australian Prudential Regulation Authority (APRA) have early-stage in-house NLP capabilities. Both saw potential productivity gains but hit multiple challenges: they needed extra capacity and expertise to ensure the AI applications performed as expected, and significantly more staff training to manage new security and data bias risks.

Generative and other forms of AI can also drive far higher cloud computing costs. Microsoft 365 Copilot and Google Workspace Duet both cost US$30 per user per month, while GitHub Copilot, the code-generation service from Microsoft-owned GitHub, costs US$19 per user per month.

“If staff used the AI to save 20 minutes or 1% a week on a simple task, the cost of the AI licence would be more than justified,” says IBRS analyst Dr Joseph Sweeney.
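Sweeney’s claim is easy to sanity-check. A back-of-envelope sketch, using assumed figures that are not from the article (a AU$60-per-hour fully loaded staff cost and roughly AU$1.55 to the US dollar):

```python
# Back-of-envelope check of the licence-cost claim, under assumed figures
# (not from the article): AU$60/hour fully loaded staff cost, US$30/month
# licence converted at an assumed 1.55 AUD per USD.
hourly_cost_aud = 60.0            # assumed fully loaded cost per staff hour
minutes_saved_per_week = 20
weeks_per_month = 52 / 12

value_per_month = hourly_cost_aud * (minutes_saved_per_week / 60) * weeks_per_month
licence_per_month_aud = 30 * 1.55  # assumed USD -> AUD conversion

print(f"Time saved worth ~AU${value_per_month:.0f}/month "
      f"vs ~AU${licence_per_month_aud:.0f} licence")
# ~AU$87/month saved vs ~AU$47 licence under these assumptions
```

Under those assumptions the licence pays for itself roughly twice over; even halving the hourly cost leaves the case about line-ball.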

IT departments could also see cloud costs rise as generative AI improves low-code tools. “We are going to see a massive increase in the consumption of cloud-based AI services, not just from official IT teams, but from citizen developers leveraging low-code,” says Sweeney.

But a government could instead pursue a whole-of-government plan to train and run an LLM on its own infrastructure. Consultancy McKinsey outlines three generative AI consumption models. A CIO could buy an off-the-shelf solution like GitHub’s Copilot. A level up, they could fine-tune an existing model from OpenAI, Cohere, AI21 Labs or Hugging Face and then host it on-premises or in the cloud. The third and most expensive option is to build your own foundation model, like GPT-4.

“Only some vendors are going to let you create your own LLMs, and there’s going to be a cost associated with it. By far it’s the most expensive approach, but it’s the least applicable to most government organisations,” says Lacheca.
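The middle option, fine-tuning an open model and hosting it on government infrastructure, is the one most often floated for sensitive data. A minimal, hedged sketch of what serving such a model could look like with the Hugging Face transformers library; the model path is hypothetical:

```python
# Minimal sketch of serving a fine-tuned open model on government
# infrastructure. The model path is hypothetical; any fine-tuned causal
# LM saved in Hugging Face format would load the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/models/agency-llm-finetuned"  # hypothetical on-premises path
tok = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

prompt = "Summarise the citizen enquiry: ..."
inputs = tok(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(output[0], skip_special_tokens=True))
```

The appeal of this pattern is that prompts and citizen data never leave the agency’s own servers, which is precisely the concern the public-domain-only guidance on ChatGPT and Bard is trying to manage.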
