
The opaque state of ‘explainable’ AI in government

Artificial intelligence has the potential to enable faster and more accurate decision-making across a range of sectors, especially health and defence.

Critical to its success is trust in AI processes. This means ensuring the processes used to reach decisions and outcomes are clearly understood both by those using the technology and by those affected by it.

Global research on AI awareness, however, shows that the technology is not well understood. A global study on Trust in Artificial Intelligence by the University of Queensland found that while the majority of those surveyed had heard of AI, fewer than half were clear on how and when it is being used. Australia ranked second lowest of the countries surveyed for interest in learning about the technology.

More concerning, a report on the State of AI Governance in Australia by the University of Technology Sydney found that understanding of AI among corporate leaders of Australian organisations currently deploying AI systems is poor.

“Building awareness of where, how and why AI is being used is a major challenge both within organisations and for consumers and the public,” Lauren Solomon, program manager of AI governance with the Human Technology Institute, told The Mandarin. “Currently, we lack transparency about when an AI system is being used and the appropriate pathway for consumers and the public to challenge AI-informed decisions or seek redress.”

Solomon believes increasing transparency about AI system use is crucial to support explainability as the policy community grapples with the best way to effectively govern and regulate this technology. But while reforms are being rapidly deployed globally, she warns explainable AI is a long way off.

“We’re all learning as we go,” Solomon said.

What is ‘explainable’ AI?

Research into “explainable” AI aims to develop methods for making AI systems more transparent so that their decision-making processes can be better understood. This is particularly important in sectors such as healthcare, where decisions may have life-or-death consequences and need to be clearly explained.

“Explainable AI includes techniques and processes that can assist organisations and regulators [to] look under the hood of machine learning algorithms and better understand the rationale behind the output,” Solomon said. “Explainability can be improved both by building transparent-by-design models and post-hoc explanations for decision-making processes which can be understood by humans.”
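As a concrete illustration of the post-hoc approach Solomon describes, open-source tools such as the SHAP library can attribute a trained model’s individual predictions to its input features. The sketch below is a minimal example only; the model and dataset are arbitrary stand-ins, not tools named by the interviewees.

```python
# Minimal sketch of post-hoc explainability using the open-source SHAP
# library; the model and dataset are illustrative stand-ins, not tools
# referenced in this article.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a relatively opaque ensemble model on a standard tabular dataset
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features after the fact
explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[:5])

# Per-feature contributions for the first five predictions
print(explanation.values)
```

Outputs like these do not make the underlying model transparent; they offer a human-readable account of which inputs drove a given output, which is the sense of “looking under the hood” described above.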

For the most complex situations, explainable AI is still a work in progress.

“Some of the most advanced AI algorithms, especially deep learning models, are essentially black boxes,” Dr Steve Lockey, research fellow at The University of Queensland Business School, told The Mandarin. “The architecture of deep neural networks, with potentially millions of parameters, doesn’t lend itself to easy interpretation.”

Prioritising models that are explainable, Lockey said, may instead mean using simpler, inherently interpretable techniques such as linear regression or decision trees. “Essentially, there’s often a trade-off: more accuracy might mean less explainability and vice versa.”
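The trade-off Lockey describes is easy to see with one of the simpler models he mentions. Below is a brief sketch, assuming the scikit-learn library, of a shallow decision tree whose learned rules can be printed and read directly by a human.

```python
# Brief sketch of an inherently interpretable model: a shallow decision
# tree whose rules print as plain-text branches. The dataset is an
# arbitrary illustrative choice.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# Limiting depth sacrifices some accuracy for rules a human can follow
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Print the learned decision rules as readable if/else branches
print(export_text(model, feature_names=list(data.feature_names)))
```

Allowing the tree to grow deeper would likely recover some accuracy, but each added level makes the printed rules harder for a non-specialist to follow: the trade-off in miniature.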

According to Solomon, it is ultimately a design choice to develop and use AI systems that are explainable. “Our HTI co-director Sally Cripps dedicates her career to the development of fit-for-purpose data science techniques that aim to deliver accurate and explainable insights,” she said.

What is the state of play for explainable AI?

When it comes to explaining complex AI models, software tools will be needed to deliver the necessary transparency. However, research by Daniel Varona and Juan Luis Suárez at Western University, Canada, has found that a number of barriers still need to be resolved. Chief among them is a lack of standardisation in the field of explainable AI, with the research highlighting the wide range of techniques and approaches in use.

“This can make it difficult to compare and evaluate the effectiveness of different approaches and can hinder the development of best practices and guidelines for explainable AI,” Varona explained in a recent blog post discussing his research.

Regulation may help drive that standardisation. Internationally and locally, transparency and explainability are core objectives of regulatory reform and governance frameworks.

“For example, we’re seeing increased regulatory requirements for AI systems to be placed on public registers, or via direct to public and consumer disclosure requirements such as the GDPR [European General Data Protection Regulation] where data subjects have a right to explanation,” Solomon said.

The NSW government’s AI Assurance Framework provides guidance for government agencies using AI systems to assess the risk factors and harms associated with a lack of transparency.

“This includes a transparent explanation of the factors leading to an AI decision, consultation with impacted communities, publication of scope and goals of AI systems, and appeals processes for impacted communities,” Solomon said.

At the federal level, the Supporting Responsible AI discussion paper released in June has been praised for its discussion of the regulation needed to build trust in, and understanding of, AI processes.

“The government’s intent to develop a clear set of regulatory and governance mechanisms to support and guide the trustworthy use of AI to protect Australians from harm and strengthen public trust is commendable,” Lockey said.

“I support a cohesive, risk-based regulatory framework for governing AI, and am pleased to see the government is considering this kind of approach. This needs to have some flexibility, as the AI landscape is changing so quickly that it can’t be static.”

What is the role of government in trustworthy and explainable AI?

When it comes to government use of AI, Lockey believes progress towards transparency can begin with agencies clearly explaining how and why models are being used as part of service delivery.

“From a pragmatic point of view, I think — in most cases — it’s more important for government to be transparent about its uses of AI rather than taking a myopic focus on trying to make the AIs themselves understandable,” Lockey said. “By that, I mean the public should know if and how an AI is being used to make decisions that impact them — including what characteristics are included in modelling.”

For example, he said, if an AI system is used to allocate public housing, citizens must be informed about its use and made aware of the specific factors the AI considers when determining eligibility and prioritisation, such as income, family size and previous living conditions.

“Even if the average person doesn’t understand the intricacies of the AI’s algorithms, they should at least be informed about the general principles guiding its decisions. This way the public can have some level of assurance that the system is being used fairly and can hold the government accountable if it seems to be acting unjustly.”

It is also important to ensure that the way AI is used and communicated can evolve with the technology.

“I don’t know what this landscape will look like this time next year, but I venture to guess that it will be very different from today — as today’s landscape is very different to this time last year,” Lockey said.
