Safety first: Why AI standards need to be ‘medical grade’

Before ChatGPT hit mass public adoption, the big question about AI used to be, “Will robots take my job?” Now more people are seriously asking, “How can I make AI work (for me) at work?”

In the healthcare sector, artificially intelligent systems are already widely used to rapidly analyse large data sets and make recommendations for a huge range of tasks, from matching doctors to patients and helping doctors record notes for medical records, to detecting emerging health risks (such as melanomas and coronary artery disease), predicting patient prognosis (such as the risk of breast cancer), selecting the most suitable medication (such as antidepressants) and discovering new pharmaceutical drugs.

But some early adopters of AI say we should pause giant AI experiments in any sector. And local experts say government organisations must sort out the rules of engagement before going all-in on AI and automated decision-making, especially in areas such as health and welfare.

Policy, procedure and protocol

Healthcare organisations need to collect evidence AI tools are “medical grade” before using them on patients, asserted Karin Verspoor, David Hansen and Enrico Coiera in an article published by CSIRO in June.

“Many claims made by the developers of medical AI may lack appropriate scientific rigour and evaluations of AI tools may suffer from a high risk of bias,” they wrote. “Clinicians should be given training on how to critically assess AI applications to understand their readiness for routine care.”

Mark Pesce, inventor (3D technology for the web, among other things), futurist and author, is concerned most users of AI tools don’t know how to use them safely and securely.

Public sector organisations that collect and hold vast amounts of personal data about citizens must develop robust policies, procedures and protocols before handing that data over to an AI system, he says.

“Policy must come first: when is it OK to use AI and why? Then procedure: what’s the best approach for each use that preserves privacy and security? Finally, protocol: how do you remove risk — where is the human oversight — and what happens when it goes wrong? Who is responsible for making wrongs right?”

Pesce’s call for humans to have ultimate accountability was recently echoed by the NSW ombud Paul Miller in his submission to the federal Department of Industry, Science and Resources’ “Safe and Responsible AI in Australia” discussion paper, in which he said “there is no such thing as responsible AI”.

Any organisation collecting personal information must prioritise privacy before allowing data to be processed, Pesce says.

“AI systems are incredibly hungry for data and whenever you feed the machine there’s a risk of people’s privacy being violated if that data hasn’t been de-identified – all personally identifiable, sensitive information needs to be removed from people’s health records. If that data goes into the cloud, you’re making it more vulnerable because anything in the cloud can be leaked.”
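To make the de-identification point concrete, here is a minimal sketch of stripping direct identifiers from a record before it leaves an organisation. The field names and the deidentify helper are hypothetical, and real de-identification goes much further (quasi-identifiers, free-text notes, dates and re-identification risk), but the principle is the same: remove anything that points back to a person before the data is fed to an AI system.

```python
# Minimal, illustrative sketch only: the field names below are hypothetical,
# and real de-identification must also handle quasi-identifiers, free text,
# dates and re-identification risk, not just a fixed list of fields.

DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "medicare_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of a health record with direct identifiers removed."""
    return {key: value for key, value in record.items()
            if key not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Citizen",
    "medicare_number": "1234 56789 0",
    "date_of_birth": "1984-03-12",
    "diagnosis": "type 2 diabetes",
    "hba1c": 7.9,
}

print(deidentify(patient))  # {'diagnosis': 'type 2 diabetes', 'hba1c': 7.9}
```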

AI innovation doesn’t have to be outsourced

Ricky Sutton, founder of AI media technology company Oovvuu, told The Mandarin he’s concerned outsourcing to large tech companies creates huge data privacy risks.

“As soon as your information is stored in a cloud service managed by a big tech company, you relinquish control of your data to a commercial company with a vested interest in monetising it,” he says.

“Government organisations should create and protect their own data services and infrastructure to keep our nation’s data safe and private, not outsource.”

He also warned that because large tech vendors are dominating conversations about AI, the development of AI skills within the public sector is being stifled.

“There is no question the extra computing power can help drive incredible breakthroughs in health, for example, but there seems to be a consensus it’s the space of tech companies,” says Sutton.

“We shouldn’t be outsourcing innovation to Amazon, Google or Meta. It’s incumbent on all of us — governments, policymakers, lawmakers and organisations — to learn how to use the superpowers of AI to do more with more.

“Technology is only worth making if it improves the human experience,” he adds. “You can’t just pass the job to a technology company, because they’re not qualified to solve health issues; they find tech solutions to tech problems.

“The people who should be driving innovation in health are those with the expertise and knowledge in health — and they need to have better oversight of how technology is developed and used to make the right decisions for better outcomes.”

“Learning how to best use the tech will take time but it is a choice and it is achievable. What we do know for sure is that technologists are not going to down tools on AI to spend time learning more about health outcomes or patient welfare.”

Case study: Australian Epilepsy Project

Mangor Pedersen, AI lead on the Australian Epilepsy Project, which aims to use AI to reveal better treatment pathways for patients, flagged the risk of AI biases regarding gender, ethnicity and socio-economic status in The Conversation.

“My main concern is that AI is moving fast and regulatory oversight is minimal,” he wrote. “These issues are why we recently established an ethical framework for using AI as a clinical support tool. This framework intends to ensure our AI technologies are open, safe and trustworthy, while fostering inclusivity and fairness in clinical care.”

The Australian Epilepsy Project team proposes AI ethics goals across five key areas:

  1. Transparency – AI-based clinical decisions and research should be validated, explainable and reproducible. Use AI reporting checklists and registered reports to enhance transparency and reproducibility.
  2. Justice and fairness – Democratise clinical access; monitor and remove unacceptable demographic-related biases in AI models; and improve diversity in research studies (women, people of colour and older people tend to be under-represented).
  3. Non-maleficence – Personal information in data sets used for AI and modelling must be kept secure and data sharing should operate within The Five Safes Framework, which includes preventing exposure of people’s identities.
  4. Responsibility – Clinical expertise plays an integral role in AI model development and use, guiding human-in-the-loop uses of AI in medicine where AI is deployed to support clinician decision-making rather than replace or emulate it.
  5. Sustainability – As AI models use large amounts of energy, checkpoints can help detect errors early in training runs, potentially saving time and energy (a rough sketch of this idea follows the list). Use of renewable energy sources and smaller AI models with lower energy demands can help reduce the carbon footprint.
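On the sustainability point, the checkpointing idea can be sketched roughly as follows. This is a generic illustration, not code from the Australian Epilepsy Project: the training, validation and checkpoint-saving functions are hypothetical stand-ins, and the point is simply that saving progress and stopping a run that has stopped improving avoids burning energy on epochs that add nothing.

```python
# Generic sketch: checkpoint each epoch and stop early when validation error
# stops improving. The callables passed in are hypothetical placeholders for
# a real training pipeline.

def train_with_checkpoints(train_one_epoch, validate, save_checkpoint,
                           max_epochs=100, patience=5):
    best_error = float("inf")
    stale_epochs = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        error = validate()
        save_checkpoint(epoch)            # progress is never thrown away
        if error < best_error:
            best_error, stale_epochs = error, 0
        else:
            stale_epochs += 1
        if stale_epochs >= patience:      # run has stopped improving: stop here
            print(f"Stopped at epoch {epoch}: no improvement for {patience} epochs")
            break
    return best_error

# Dummy stand-ins so the sketch runs on its own.
errors = iter([0.9, 0.7, 0.6, 0.61, 0.62, 0.63])
train_with_checkpoints(
    train_one_epoch=lambda: None,
    validate=lambda: next(errors, 0.65),
    save_checkpoint=lambda epoch: None,
    patience=3,
)
```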

“We argue for adopting a proactive rather than reactive stance to AI safety,” Pedersen concluded. “It will establish an ethical framework for using AI in clinical care and other fields, yielding interpretable, secure and unbiased AI. Consequently, our confidence will grow that this powerful technology benefits society while safeguarding it from harm.”

Were the Luddites onto something?

The Luddite movement, which emerged during the Industrial Revolution in the early 1800s, is widely misrepresented as anti-technology. It was actually more focused on persuading bosses to invest in technologies that would help skilled workers do high-quality work — and pay them fairly.

At the time, many bosses were keener to replace skilled workers with machines that could churn out average-quality products and be operated by less-skilled and lower-paid labour. Sound familiar?

Sutton says: “The narrative about AI shouldn’t be, ‘How can we use this technology to streamline our operations and reduce staff?’ It should be focused on, ‘How are we going to build skills within organisations with smarter technology to build the future we need?’”


Recent action to curb AI risks

  • October 4, 2022 – The White House published the Blueprint for an AI Bill of Rights, which includes five principles for the design and use of automated systems: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
  • March 22, 2023 – An open letter by AI experts, published by the Future of Life Institute and calling for a pause on giant AI experiments, stated: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, warning that “AI systems with human-competitive intelligence pose profound risks to society and humanity”.
  • March 29, 2023 – Eliezer Yudkowsky, lead researcher at the Machine Intelligence Research Institute, published a think piece in Time magazine, “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down”, arguing that “without precision and preparation, the most likely outcome is AI that does not do what we want and does not care for us nor for sentient life in general”.
  • March 29, 2023 – The UK government set out its approach to AI regulation: “As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety”.
  • April 27, 2023 – Washington State enacted the My Health My Data Act, one of the broadest-reaching privacy laws in the US, which bans organisations from using geofences (automated location tracking) around facilities providing in-person health care services “to identify and track people seeking health care services; collect consumer health data from consumers; or send notifications, messages, or advertisements to consumers related to their consumer health data or health care services”.
  • June 14, 2023 – The European Parliament adopted its negotiating position on the Artificial Intelligence Act: “AI can create many benefits, such as better healthcare … parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes”.
