
How we’re losing the arms race against deepfake technology

If a lie can get halfway around the world before the truth puts its boots on, consider this scenario.

Late one evening, a video carrying the POTUS seal is released from the Oval Office. The US president is behind the Resolute Desk, shooting a steely gaze down the barrel of the camera: “My fellow Americans”. The president explains that provocations in the South China Sea, troop build-ups on the Indian border and other perceived acts of aggression mean China can no longer be trusted as a nation-state. The president says that after consulting its great allies, including Australia and the UK, the US has no choice but to make a powerful pre-emptive military strike on China. The president concludes: “God bless our troops”.

The video isn’t real, of course. It’s a ‘deepfake’ that’s quickly and cheaply made by a rogue state or private organisation. To add to the sense of “authenticity”, the video is released on a hacked official White House social media channel.

Now consider what happens next. In the uneasy minutes that follow, how would Chinese authorities react? Are we sure verification would precede retaliation?

Deepfakes, or synthetic media, are still and moving images manipulated by sophisticated simulation software and artificial intelligence (AI). A machine learning system called a deep neural network studies the facial movements of one person and synthesises images of someone else – perhaps an actor filmed in front of a green screen – making the same movements.
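For readers curious about the mechanics, the sketch below illustrates the classic autoencoder idea behind early face-swap deepfakes: a shared encoder learns pose and expression, one decoder per identity reconstructs that person’s face, and swapping decoders renders one person’s face making another’s movements. It is a simplified illustration only – the layer sizes, names and data here are made up for the example, and no particular tool is being described.

```python
# Minimal PyTorch sketch of the classic autoencoder face-swap idea.
# A shared encoder learns a compact representation of pose and expression;
# one decoder per identity learns to reconstruct that person's face.
# Swapping decoders at inference time renders person B with person A's
# expressions. All names and sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (not shown) teaches each decoder to reconstruct its own person:
#   decoder_a(encoder(face_a)) ~ face_a,  decoder_b(encoder(face_b)) ~ face_b
# The "swap": encode person A's expression, decode it as person B.
face_a = torch.rand(1, 3, 64, 64)       # stand-in for a cropped video frame
fake_b = decoder_b(encoder(face_a))     # person B, mimicking A's expression
print(fake_b.shape)                     # torch.Size([1, 3, 64, 64])
```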

Synthesised videos could be used, for instance, to place a religious leader in a compromising position or put words in the mouth of a US president. And as synthetic media technology improves rapidly, the results are becoming more realistic – and frightening. In March, the FBI’s Cyber Division released a private industry notification warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months”.

A couple of famous examples – featuring manipulated images of Tom Cruise and Barack Obama – show the potential of deepfakes to create an alternative narrative. In 2020, Indian politician Manoj Tiwari used deepfake technology to fabricate a video of himself speaking in a Hindi dialect so he could address more voters directly.

“The problem is that it’s very hard to counter this,” says leading AI expert Toby Walsh. “You will get to a point where unless you saw it literally with your own eyes, you can’t be sure it wasn’t faked. It’s so convincing, you won’t be able to tell it apart from the real thing.”

Walsh is a laureate fellow and scientia professor of AI at the Department of Computer Science and Engineering at UNSW and is exploring the concept of ‘trustworthy AI’. He believes the emergence of deepfake technology is a danger for every society and fears that it’s likely to have a significant impact on politics.

“An election could easily be turned by some rival releasing a [deepfake] video at the last moment,” he says. “There won’t be time to demonstrate that it wasn’t true of someone in a compromising situation or saying something that’s compromising. It’s enough to swing the outcome of the election.”

Walsh says the existence of synthetic image technology might even allow politicians to claim something is fake when it’s real. “It used to be if you were caught on camera saying something unpleasant or unacceptable, you’d have to own it,” he says. “But now you don’t. For example, [former US president Donald] Trump has denied something that as far as we know was something he said – something disreputable about women. But he can just dismiss it as a ‘deepfake’ and get away with it.”

The deepfake ‘arms race’

As synthetic media technology continues to improve, detection becomes even more difficult. Siwei Lyu, Ming-Ching Chang and Yuezun Li from the State University of New York found an early way to distinguish real videos from deepfakes by examining eye-blinking patterns, but understood it wouldn’t be a permanent solution as technology progressed. Meanwhile, news organisations such as The Washington Post and Duke University’s Reporters’ Lab are developing techniques to help fact-checkers tag manipulated videos to alert social media platforms or search engines.
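As a rough illustration of how the blink-based check worked, the sketch below counts blinks from per-frame eye “aspect ratios” and flags videos in which the subject blinks far less than a real person would – early synthesis models were rarely trained on frames with closed eyes, so their output barely blinked. This is a simplified stand-in for the researchers’ published pipeline, and it assumes eye landmarks have already been extracted by a face tracker.

```python
# Hedged, simplified sketch of a blink-rate heuristic for spotting deepfakes.
# NOT the published detector: it assumes per-frame eye landmarks
# (six (x, y) points per eye) have already been extracted upstream.
import numpy as np

def eye_aspect_ratio(eye):
    """Ratio of eye height to width; drops sharply when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series, fps, closed_thresh=0.2):
    """Count open-to-closed transitions in a series of per-frame ratios."""
    closed = np.asarray(ear_series) < closed_thresh
    transitions = np.sum(~closed[:-1] & closed[1:])
    minutes = len(ear_series) / fps / 60.0
    return transitions / max(minutes, 1e-9)

def looks_synthetic(ear_series, fps, min_expected_rate=5.0):
    """Humans blink roughly 15-20 times a minute; flag far lower rates."""
    return blinks_per_minute(ear_series, fps) < min_expected_rate

# Usage with a fabricated series of per-frame eye-aspect ratios:
fake_ears = [0.3] * 1800                     # 60 s at 30 fps, no blink at all
print(looks_synthetic(fake_ears, fps=30))    # True -> suspicious
```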

Facebook (now Meta) says it’s found a method to detect and attribute deepfakes by relying on “reverse engineering from a single AI-generated image to the generative model used to produce it”. The social media giant claims the technique “will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with”.
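Meta has not published code for the technique, but the underlying idea can be loosely illustrated: generative models leave faint, consistent “fingerprints” in their output, and those fingerprints can be compared against known generators. The toy sketch below uses a simple high-frequency residual as a stand-in fingerprint and matches it by correlation; it is an assumption-laden illustration, not the company’s method, and the data is random.

```python
# Toy illustration (not Meta's actual method) of fingerprint-based attribution:
# use the high-frequency residual of an image as a crude "fingerprint" and
# compare it against stored per-generator reference fingerprints.
import numpy as np
from scipy.ndimage import gaussian_filter

def fingerprint(image):
    """High-frequency residual: the image minus a blurred copy of itself."""
    image = np.asarray(image, dtype=float)
    return image - gaussian_filter(image, sigma=2.0)

def attribute(image, reference_fingerprints):
    """Return the best-matching generator name and all correlation scores."""
    fp = fingerprint(image).ravel()
    scores = {
        name: float(np.corrcoef(fp, ref.ravel())[0, 1])
        for name, ref in reference_fingerprints.items()
    }
    return max(scores, key=scores.get), scores

# Toy usage with random stand-in data (no real generators involved):
rng = np.random.default_rng(0)
refs = {"generator_A": rng.normal(size=(64, 64)),
        "generator_B": rng.normal(size=(64, 64))}
suspect = rng.normal(size=(64, 64))
best, scores = attribute(suspect, refs)
print(best, scores)
```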

Walsh believes it’s unlikely the private or public sector can develop tools that will foil anyone looking to create a deception. “It’s a continual arms race,” he says. “They’ll always be able to synthesise deepfakes better than the detectors.

“There’s no way you can insulate yourself against it unless you can get some certificate of provenance – proving a video came from, say, the ABC or BBC. Even then, you have to be careful that people can’t make fake watermarks.”
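In its simplest form, a certificate of provenance could be a cryptographic signature published alongside a video: the broadcaster signs a hash of the file with a private key, and viewers verify it against the broadcaster’s public key. The sketch below shows only that idea; it assumes the public key can be distributed trustworthily and does not address the fake-watermark caveat Walsh raises.

```python
# Minimal sketch of a provenance signature for a video file: the publisher
# signs the file's hash; anyone with the publisher's public key can verify it.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def file_digest(path):
    """SHA-256 hash of a file's bytes, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Publisher side: sign the digest once, ship the signature with the video.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()
# digest = file_digest("broadcast.mp4")   # hypothetical file name
digest = hashlib.sha256(b"stand-in for video bytes").digest()
signature = publisher_key.sign(digest)

# Viewer side: recompute the digest and check the signature.
def is_authentic(digest, signature, public_key):
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

print(is_authentic(digest, signature, public_key))                              # True
print(is_authentic(hashlib.sha256(b"edited video").digest(), signature, public_key))  # False
```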

Walsh believes there are only limited ways the global community can begin to deal with deepfakes. The first is through better education: “You have to teach people to be sceptical of what they see and hear, and [only] go to reputable sources that will have done the groundwork for you so you can trust [what you see].”

A second is through self-regulation – an international standard requiring deepfake originators to mark their work as ‘synthetic’ or ‘fabricated’. A third is to put the responsibility back on platforms – particularly Facebook, YouTube, TikTok and Twitter – to ensure the material they publish is “real”.

A fourth might be for governments to ban the technology entirely. “We might need to have that sort of conversation,” says Walsh. “There was a time when you couldn’t export strong cryptographic tools from the US because they were considered too much of a threat to national security to get into the wrong hands.”

Like facial recognition technology that also relies on AI, synthetic image manipulation offers “so many downsides and so few upsides”, says Walsh. “This technology has worrying implications, and I don’t think we’re prepared for the challenges it’s going to pose.”
