Rapid AI adoption wrapped up in privacy and security risks, study warns

By Melissa Coade

February 28, 2024

The race to adopt AI solutions in the corporate world is fraught with moral and technical issues. (Andrey Popov/Adobe)

Cyber security experts have warned that the race to adopt artificial intelligence (AI) solutions in the corporate world is “fraught with moral and technical issues”.

A paper by researchers from the University of the Sunshine Coast (USC) has described the use of tools such as ChatGPT and Google’s Bard and Gemini as a business blind spot.

Generative AI produces content that appears to be created by humans by learning from, and transforming, large amounts of real-world data.

Paper co-author Dr Declan Humphreys said generative AI tools could leave companies exposed to deliberate or accidental harms, such as mass data breaches exposing third-party information, or business failures caused by manipulated or “poisoned” AI models.

“The research shows it’s not just tech firms rushing to integrate AI into their everyday work — there are call centres, supply chain operators, investment funds, companies in sales, new product development and human resource management,” Humphreys said.

“While there is a lot of talk around the threat of AI for jobs, or the risk of bias, few companies are considering the cyber security risks.”

In response to these concerns, Humphreys and fellow computer science and artificial intelligence experts at USC developed a checklist giving businesses five ways to implement these kinds of solutions ethically.

For organisations looking to implement AI systems, the researchers said privacy and security should be top priorities in addition to:

  1. Secure and ethical AI model design
  2. Trusted and fair data-collection process
  3. Secure data storage
  4. Ethical AI model retraining and maintenance
  5. Upskilling, training and managing staff

The researchers stressed that companies that created their own artificial intelligence models or used third-party providers were equally susceptible to hacking.

“Hacking could involve accessing user data, which is put into the models, or even changing how the model responds to questions or the answers it gives,” Dr Humphreys said.

“This could mean data leaks, or otherwise negatively affect business decisions.”

Humphreys noted that organisations moving to adopt artificial intelligence solutions should think carefully about how they adapt their governance frameworks, and that government regulatory frameworks to protect workers, sensitive information and the public would also need to rise to meet the challenge.

The fact that legislation had not kept pace with generative AI and its data protection issues only exacerbated the problem, he added.

“The rapid adoption of generative AI seems to be moving faster than the industry’s understanding of the technology and its inherent ethical and cyber security risks,” Humphreys said.

“A major risk is its adoption by workers without guidance or understanding of how various generative AI tools are produced or managed, or of the risks they pose.”

‘AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business’ was published in the Springer Nature journal AI and Ethics on Monday.

The USC research, also co-authored by Dr Dennis Desmond, Dr Abigail Koay and Dr Erica Mealy, was supported by Open Access funding enabled and organised by CAUL and its Member Institutions.

“This study recommends how organisations can ethically implement AI solutions by taking into consideration the cyber security risks,” Humphreys said.

