
Why the right messaging can help bureaucracies counter disinformation

Australian regulators and the public sector are desperate to stem the spread of misinformation and disinformation on social media platforms.

However, the challenge for the public service is to put more effort into building trust in reliable sources of information rather than fighting misinformation and disinformation with impersonal facts.

Misinformation refers to false, misleading or deceptive information that causes harm. In an online context, this can include anything from inaccurate or misleading social media posts or website content to doctored images and videos, made-up news articles or scam advertisements.

Not all misinformation is deliberately spread; sometimes users share misinformation without realising it. People just get misinformed.

However, when misinformation is spread deliberately it becomes disinformation. The difference is that disinformation is strategic. Disinformation may be intended to cause confusion, undermine trust in governments or institutions, or lure people into financial scams.

One example of disinformation in Australia was fake news about COVID-19, which influenced people’s attitudes and actions, not only their health decisions (such as whether to get a vaccine) but also their political choices.

Another example came during the 2019-20 bushfires, when two pieces of disinformation claimed that an “arson emergency”, rather than climate change, was behind the fires and that “greenies” were preventing firefighters from reducing fuel loads in the Australian bush.

The federal government has been trying to address the issue.

In 2019, the Australian Competition and Consumer Commission released a 623-page report that examined disinformation, or “false or inaccurate information that is deliberately created and spread to harm a person, social group, organisation or country”. The ACCC’s Digital Platforms Inquiry report found that digital platforms had “considerable influence” in shaping the news.

“However, the atomisation of media content and the risk of misinformation and disinformation being spread on digital platforms make it difficult for consumers to evaluate the veracity, trustworthiness and quality of the news content they receive online,” the ACCC said. “This may have the effect of undermining democratic processes, as the ability of consumers to recognise high-quality news is essential for a well-functioning democracy.”

“To the degree that online consumption makes it harder for public interest journalism to reach audiences, but easier for disinformation and malinformation to do so, this is clearly a significant public policy concern.”

The ACCC analysed Australian and overseas studies. It maintained that addressing these issues is too important to be left to the sole discretion of digital platforms. It recommended that a regulator, such as the Australian Communications and Media Authority (ACMA), monitor and evaluate the effectiveness of the voluntary initiatives digital platforms are already implementing.

In February 2021, the Australian Code of Practice on Disinformation and Misinformation – the most expansive code to reduce harmful false digital content – was released. The code was last revised in December 2022 and currently has eight signatories: Apple, Adobe, Google, Microsoft, Meta, Redbubble, TikTok and Twitter.

The voluntary code takes an outcomes-based regulatory approach that provides signatories with the flexibility to develop their own measures to counter disinformation and misinformation in a way that best reflects their service and business models. This approach also enables platforms to implement measures that are proportionate to the risk of harm. Platforms have committed to a range of measures under the code, including media and digital literacy initiatives to help users better identify and avoid mis- and disinformation.

In March 2022, ACMA released a report on the inadequacy of digital platforms’ disinformation and news quality controls. It found that 82% of Australians reported having encountered misinformation about COVID-19 over the previous 18 months, and that respondents said false information was most likely to be encountered on Facebook and Twitter.

The ACMA report traced the typical spread of misinformation to “highly emotive and engaging posts within small online conspiracy groups”. These posts, or the information in them, were then amplified through influencers and public figures (including politicians and celebrities), as well as through media coverage, the report said.

In January, the federal government announced that ACMA would be given new powers to hold digital platforms to account and improve efforts to combat harmful misinformation and disinformation in Australia.

ACMA says these powers will allow it to compel information from platforms, register industry codes and make mandatory standards should industry measures fail or prove inadequate. Importantly, the powers will be designed so as not to impinge on users’ freedom of expression, with content moderation decisions remaining the responsibility of digital platforms.

Consultation on the draft bill is expected within the first half of the year and the ACMA encourages stakeholders and the public to offer their perspectives on these new powers.

Unforeseen harms

“Online services have brought huge benefits to our daily lives but they have also opened the door to some unforeseen harms, including the spread of misinformation and harmful content,” said eSafety commissioner Julie Inman Grant.

“The algorithms which online platforms use to recommend content, for example, can end up promoting and effectively normalising unhealthy eating habits, hate, racism and violence.”

She said these could have enormous consequences for society.

“They may lead users down rabbit holes of polarising or seriously harmful content, stifling balanced debate and public discourse. These attitudes and beliefs can spill over into harms in the real world, sometimes with very tragic consequences.”

The eSafety commissioner’s team has also focused on the negative feedback loop: the more potentially harmful content you engage with online, the more of this material the algorithm may serve up, contributing to an increasingly isolated and damaging online experience. Sustained exposure to inappropriate or harmful content poses a particularly heightened risk to children who may not have developed the critical reasoning skills to understand and filter what they are seeing.
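The mechanism behind that feedback loop can be illustrated with a deliberately simplified simulation. The sketch below is a hypothetical toy model in Python, not any platform’s actual recommender: the content categories, the “harmful” label and the engagement rates are all assumptions made for the illustration. It shows how recommendations weighted by past engagement can progressively narrow what a user is served.

# Hypothetical toy model of an engagement-driven recommender feedback loop.
# Illustrative only: this is not any platform's actual algorithm, and the
# category names and engagement rates below are assumptions for the sketch.
import random
from collections import Counter

CATEGORIES = ["news", "sport", "hobbies", "harmful"]

def recommend(weights):
    # Pick a category with probability proportional to its accumulated weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for category, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return category
    return category  # fallback for floating-point edge cases

def simulate(steps=1000, boost=1.5):
    weights = {c: 1.0 for c in CATEGORIES}  # start with no preference
    served = Counter()
    for _ in range(steps):
        category = recommend(weights)
        served[category] += 1
        # Assume emotive "harmful" content is engaged with more often (90% vs 50%).
        engaged = random.random() < (0.9 if category == "harmful" else 0.5)
        if engaged:
            weights[category] *= boost  # engagement makes similar content more likely
    return served

print(simulate())  # the "harmful" category typically dominates after enough steps

Even in this crude model, a small asymmetry in engagement is enough for one category of content to crowd out the others. Real recommender systems are far more sophisticated, but the narrowing dynamic the commissioner describes is the same in kind.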

eSafety’s Mind the Gap research found almost two-thirds of 14- to 17-year-olds had been exposed to negative content online, including hate messages, discussion of drug use, gory or violent images and suicide material.

Recent media coverage has highlighted how social media recommender systems can affect the mental health and wellbeing of children and young people, including the tragic case of 14-year-old London schoolgirl Molly Russell, who died by suicide in 2017 after viewing online content about suicide and self-harm on platforms like Instagram and Pinterest.

While this may be an unintended consequence of trying to keep people engaged online, technology companies have a responsibility — and the capability — to address this issue and protect users from harm.

“Technology companies have a role to play in addressing such harms, together with governments, educators and the broader community,” Inman Grant said.

Dr Darrin Durant, a senior lecturer in science and technology studies at the University of Melbourne, said that regardless of the difference between misinformation and disinformation, the result is always the same when public servants or scientists try to provide sceptics with the right information: there is a lack of trust in that information.

Durant said it is even more difficult to persuade those who believe disinformation, because disinformation is strategic.

“Without the trust element, the provision of information is probably going to go nowhere,” he said.

“Therefore, the question for me when I think of the misinformation/disinformation/climate change is how to move beyond strictly providing information.”

He said there needs to be a conscious effort by bureaucrats to think through what is happening to the audience and to provide information that is trustworthy.

“Even at its biggest scale, like a big government department putting out a large public relations document of some kind, and this is a state level society-wide blurb of information, as long as those who are putting that particular document together with whatever information are informed by the fact that people are making a trust judgement,” he said.

“We know it’s a huge challenge for a bureaucracy and they will never get close to addressing it if they don’t even try.”

He said bureaucrats need to create trust by localising the information instead of using abstract terms. They need to cite specific scientific studies or findings from research institutions.

“We know that scientific knowledge travels and it travels into a broader audience. So if you are talking about floods in Lismore, rather than say ‘climate science has shown’, it has to be something like ‘the CSIRO has shown’,” he said.

The same would apply to quoting specific university studies.

“When the person is receiving that information, they are trying to make a trust judgement, and the art of making that trust judgement is: are there people behind the research and do they have the hallmarks of trustworthiness?” he said.

This, he said, is better than citing abstract statistics that people struggle to evaluate. That will not fix the trust deficit. And that, he said, is the challenge for public servants.

“That’s the challenge because they are accustomed to trying to make abstract statistics and abstract categories that don’t have any people behind them, or don’t have any research institutes behind them. They don’t have any humanness behind them. It’s a challenge for the bureaucracy to move from being impersonal to being personal.”
