
Navigating AI adoption as it outpaces regulation

Business transformation within government departments is almost constant. As departments introduce AI technologies to accelerate that transformation and remain relevant, the lack of existing governance increases the risk to the safe, secure, and trusted delivery of the public administration, public policy, and public services that carry out the government’s agenda. The rapid adoption of AI has outpaced existing legislation, creating a high-risk environment for government leaders who want to use AI technologies to accelerate the transformation of their organisations.

AI has been used, or is planned for use, by government entities throughout Australia in areas such as AI-based traffic monitoring systems and chatbots; fraud prevention and law enforcement; national security and defence; detection of physical and cyber security breaches; AI-augmented scenario training using virtual reality and simulations; identification and monitoring of threatened species; health services; and public transport services.

The guardrails are coming too slowly

The federal government has recently consulted on how it can mitigate the potential risks of AI and support safe and responsible AI practices, through the Safe and Responsible AI in Australia discussion paper released in June. The UNHCR responded, emphasising the need to focus on human rights in the areas of privacy, neurotechnology, automation bias, the metaverse, chatbots, misinformation and disinformation, and algorithmic bias.

These consultative processes and the resultant legislative changes are necessary but time-consuming. As a result, the speed of AI adoption can outstrip our ability to govern the technology effectively, to ensure the development and adoption of trusted, secure, and responsible AI, and to protect against malicious use of the technology.

AI technologies such as machine learning, natural language processing, data labelling platforms, and predictive analytics can dramatically accelerate the rate of transformation. Implemented effectively, AI can benefit the government sector in many ways: offering convenience, accessibility, automation, and efficiency; increasing innovation; personalising customer experiences; enhancing public and customer insight; reducing cybersecurity threats; and minimising costs.

In the absence of sufficient existing legislation, policy, and governance, it falls to government agency leaders and decision-makers to be proactive about oversight, safety, privacy, transparency, and accountability in the adoption and use of AI technologies.

Steps to safe use of AI

Ethics must serve as the foundation of AI governance, and ethical frameworks should be established to guide the development, deployment, and use of AI technologies within government agencies. These frameworks should encompass principles such as transparency, accountability, fairness, and privacy. Leaders should voluntarily adopt and enforce the AI Ethics Framework, which consists of principles designed to ensure AI is safe, secure, and reliable.

A risk-based approach to how the technology is used in practice should be considered. While AI can revolutionise business transformation, it also introduces new risks. Leaders and teams must be vigilant in assessing AI-related risks, whether they pertain to data security, algorithmic biases, or regulatory compliance. By identifying potential pitfalls, anticipating unintended consequences, acknowledging these risks, and integrating mitigation strategies, agencies can minimise disruption and reputational damage during transformation activities.
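To make the risk-based approach concrete, a minimal sketch in Python follows. It assumes a simple likelihood-by-impact scoring convention, a handful of hypothetical register entries, and an arbitrary escalation threshold; it is an illustration of the idea, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """A single entry in a hypothetical AI risk register."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int      # 1 (minor) to 5 (severe) -- assumed scale
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood x impact rating, a common risk-matrix convention
        return self.likelihood * self.impact

# Hypothetical entries covering the risk areas named above
register = [
    AIRisk("Training data breach", 2, 5, "Encrypt data at rest; restrict access"),
    AIRisk("Algorithmic bias in eligibility decisions", 3, 4, "Bias testing before release; human review"),
    AIRisk("Non-compliance with privacy obligations", 2, 4, "Privacy impact assessment; legal review"),
]

# Flag anything above an (assumed) tolerance threshold for escalation
THRESHOLD = 10
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "ESCALATE" if risk.score >= THRESHOLD else "monitor"
    print(f"{risk.score:>2}  {flag:<8}  {risk.name} -> {risk.mitigation}")
```

Even a register this simple forces the conversation the paragraph above calls for: naming the risk, rating it, and attaching an owner-able mitigation before the technology goes live.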

Data is the lifeblood of AI systems, and its management is critical to AI governance. Government departments must adhere to stringent data protection and privacy regulations, such as the Australian Privacy Principles (APPs), and establish robust data management practices, including protocols for data collection, storage, sharing, and encryption.
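As one hedged illustration of the encryption point, the sketch below uses the open-source Python cryptography library's Fernet symmetric encryption to protect a record before it is stored. The record fields and the inline key handling are placeholders for the example; a real deployment would rely on the agency's own key-management arrangements and APP compliance processes.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Key generation shown inline for illustration only; in practice keys live in
# a managed key store, never alongside the data they protect.
key = Fernet.generate_key()
cipher = Fernet(key)

# A placeholder citizen record -- the fields are assumptions for the example
record = b'{"name": "Jane Citizen", "service": "housing assistance"}'

encrypted = cipher.encrypt(record)   # ciphertext is what gets stored or shared
decrypted = cipher.decrypt(encrypted)

assert decrypted == record
print(encrypted[:40], b"...")
```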

AI algorithms should be made transparent and explainable, allowing clear comprehension of how decisions are made. This will help build public trust. Additionally, accountability mechanisms must be established to assign responsibility for AI-related outcomes, both positive and negative. Ensuring fairness in AI systems is essential to prevent biases that could adversely impact the public.
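One way to give decision-makers a communicable account of what drives a model is permutation importance. The sketch below is a minimal example that assumes scikit-learn and a purely synthetic dataset with placeholder feature names; it illustrates the explainability idea rather than constituting a full transparency or fairness audit.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an agency decision-support dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholder names

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each input degrade accuracy? Larger drops mean the
# model leans on that feature more heavily -- a simple, explainable account
# of what is driving its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")
```

A ranking like this is not a complete explanation, but it gives non-specialist reviewers something concrete to interrogate, which is the point of the transparency obligation above.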

AI technologies are dynamic and ever-evolving. To remain informed, government departments must commit to continuous learning and adaptation as part of their AI governance strategy. Regular training and upskilling of staff on AI concepts, benefits, risks, and ethical considerations are essential to maintaining a well-informed workforce. Additionally, departments should actively engage with the AI research community, attend conferences, and participate in discussions to stay up to date with the latest advancements and best practices. This proactive behaviour ensures that agencies remain at the forefront of AI governance and are equipped to make informed decisions during business transformation.

Harness the potential of AI safely

The integration of AI during business transformation within government departments holds immense promise for enhancing services, optimising processes, and driving innovation. However, the rapid adoption of AI has outpaced the government’s ability to adequately regulate and govern its responsible and effective use, or to guard against malicious use of the technology.

In the interest of public trust and safety and to deliver the government’s policy agenda efficiently, agencies must proactively and voluntarily adopt an ethical, risk-based approach that prioritises data management and continuous learning. Through these measures, departments can harness the potential of AI to drive meaningful and sustainable business transformation in service of the government and the Australian people.
