Navigating AI adoption as it outpaces regulation
Business transformation within government departments is almost constant. As departments adopt AI technologies to accelerate that transformation and remain relevant, the lack of established governance increases the risk to the safe, secure, and trusted delivery of the public administration, policy, and services that support the government's agenda. The rapid adoption of AI has outpaced existing legislation, creating a high-risk environment for government leaders who want to use AI technologies to accelerate the transformation of their organisations.
Government entities throughout Australia have used or plan to use AI in a range of areas: AI-based traffic monitoring systems and chatbots; fraud prevention and law enforcement; national security and defence; detection of physical and cyber security breaches; AI-augmented scenario training using virtual reality and simulations; identification and monitoring of threatened species; health services; and public transport services, to name a few.
The guardrails are coming too slowly
The federal government recently consulted on how it can mitigate the potential risks of AI and support safe and responsible AI practices through the Safe and Responsible AI in Australia discussion paper, released in June 2023. The Australian Human Rights Commission responded by emphasising the need to focus on human rights in the areas of privacy, neurotechnology, automation bias, the metaverse, chatbots, misinformation and disinformation, and algorithmic bias.
These consultative processes and the resulting legislative changes are necessary but time-consuming. In the meantime, the pace of AI adoption can outstrip our ability to govern the technology effectively, to ensure the development and use of trusted, secure, and responsible AI, and to protect against its malicious use.
AI technologies such as machine learning, natural language processing, data labelling platforms, and predictive analytics can dramatically accelerate the rate of transformation. Implemented effectively, AI can benefit the government sector in many ways: offering convenience, accessibility, automation, and efficiency; increasing innovation; personalising customer experiences; enhancing public and customer insight; reducing cybersecurity threats; and minimising costs.
In the absence of sufficient existing legislation, policy, and governance, it falls to government agency leaders and decision-makers to be proactive in their oversight, safety, privacy, transparency, and accountability in the adoption and use of AI technologies.
Steps to safe use of AI
Ethics must serve as the foundation of AI governance, and ethical frameworks should be established to guide the development, deployment, and use of AI technologies within government agencies. These frameworks should encompass principles such as transparency, accountability, fairness, and privacy. Leaders should voluntarily adopt and enforce Australia's AI Ethics Framework, whose principles are designed to ensure AI is safe, secure, and reliable.
A risk-based approach to how the technology is used in practice should be considered. While AI can revolutionise business transformation, it also introduces new risks. Leaders and teams must be vigilant in assessing AI-related risks, whether they concern data security, algorithmic bias, or regulatory compliance. By identifying potential pitfalls, anticipating unintended consequences, and integrating mitigation strategies, teams can minimise disruption and reputational damage during transformation activities.
Data is the lifeblood of AI systems, and its management is critical to AI governance. Government departments must adhere to stringent data protection and privacy regulations, such as the Australian Privacy Principles (APPs), and establish robust data management practices, including data collection, storage, sharing, and encryption protocols.
AI algorithms should be made transparent and explainable, allowing for clear comprehension of how decisions are made; this helps build public trust. Additionally, accountability mechanisms must be established to assign responsibility for AI-related outcomes, both positive and negative. Ensuring fairness in AI systems is essential to prevent biases that could adversely impact the public.
AI technologies are dynamic and ever-evolving. To remain informed, government departments must commit to continuous learning and adaptation as part of their AI governance strategy. Regular training and upskilling of staff on AI concepts, benefits, risks, and ethical considerations are essential to maintaining a well-informed workforce. Departments should also actively engage with the AI research community, attend conferences, and participate in discussions to stay abreast of the latest advancements and best practices. This proactive approach ensures that agencies remain at the forefront of AI governance and are equipped to make informed decisions during business transformation.
Harness the potential of AI safely
The integration of AI during business transformation within government departments holds immense promise for enhancing services, optimising processes, and driving innovation. However, the rapid adoption of AI has outpaced the government's ability to adequately regulate and govern its responsible and effective use, or to guard against malicious use of the technology.
In the interest of public trust and safety and to deliver the government’s policy agenda efficiently, agencies must proactively and voluntarily adopt an ethical, risk-based approach that prioritises data management and continuous learning. Through these measures, departments can harness the potential of AI to drive meaningful and sustainable business transformation in service of the government and the Australian people.