Robodebt, algorithmic accountability and the law of averages

January 19, 2023

An algorithm ‘cop on the beat’ could prove an unexpected but valuable legacy. (Ulia Koltyrina/Adobe)

Douglas Adams’s Hitchhiker’s Guide to the Galaxy begins with the destruction of planet Earth. The demolition was the work of an interstellar ‘constructor fleet’ commanded by an alien race known as the Vogons. Despised as “one of the most unpleasant races in the galaxy but not actually evil”, the Vogons were notorious for their bad temper, bureaucracy, officiousness and lack of empathy.

They were also the galaxy’s bureaucrats, a managerial class responsible for planning and works. Earth had to be destroyed to make way for a hyperspace expressway. Any Earthling aggrieved by the planned destruction could appeal the approval at a local planning department in the nearby star cluster of Alpha Centauri, more than four light-years from Earth.

The Guide notes that humans were allowed 50 years in which to lodge a complaint. But no complaint was ever made.

The Vogons would have appreciated, and may even have been a little envious of, the Australian government’s automated scheme to recover welfare debt, colloquially known as robodebt.

Operating in various configurations from 2015 to 2019, robodebt was the Turnbull and Morrison governments’ program to recover welfare overpayments and to defeat welfare fraud. Public revelations of the human consequences of wrongly calculated debts led Greens senator Rachel Siewert in 2020 to call for what is now the Royal Commission into the Robodebt Scheme. The commission’s final report is scheduled to be delivered to the governor-general by 18 April 2023.

As social services minister in 2015, Scott Morrison had been responsible for steering robodebt through the parliamentary budget process. It provided $204.9 million over five years to the Department of Human Services (DHS) to improve its capacity to detect and deter welfare fraud and non-compliance.

Together with improvements in the capacity of other agencies such as the Office of the Director of Public Prosecutions, the government expected this investment to produce net savings of $1.5 billion over four years.

The remainder of 2015 was taken up with a robodebt pilot program targeting debts accrued by selected welfare recipients between 2011 and 2013.

In September 2016, the system moved to full online implementation, with automated decision-making based on an algorithm that identified prospective cases of overpayment using averaged pay-as-you-go (PAYG) income data sourced from the Australian Tax Office (ATO).

This data was compared with the income beneficiaries had declared for each benefit period; where payments exceeded the entitlement assessed from averaged income, the discrepancy was raised as a debt.
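The flaw in this averaging step is easy to demonstrate. The sketch below is a minimal illustration in Python, with entirely hypothetical payment rates and income figures (the actual OCI implementation has never been published): a recipient who was paid correctly in every fortnight can still be issued a debt once annual income is smeared evenly across the year.

```python
# A minimal sketch of robodebt-style income averaging. All figures and
# rates are hypothetical; the actual OCI code was never made public.

FORTNIGHTS_PER_YEAR = 26

# Hypothetical recipient: earned $26,000 across 13 fortnights of work,
# then correctly declared $0 income while on benefits for the other 13.
declared = [2000.0] * 13 + [0.0] * 13            # what actually happened
averaged = [26000.0 / FORTNIGHTS_PER_YEAR] * 26  # what averaging assumes

def entitlement(income, max_rate=550.0, free_area=250.0, taper=0.5):
    """Toy income test: the benefit tapers 50c per dollar over a free area."""
    return max(0.0, max_rate - taper * max(0.0, income - free_area))

# Payments actually (and correctly) made, based on declared income.
paid = [entitlement(i) for i in declared]

# 'Debt' identified by re-assessing each fortnight against averaged income.
debt = sum(max(0.0, p - entitlement(a)) for p, a in zip(paid, averaged))
print(f"Debt raised by averaging: ${debt:,.2f}")  # $4,875.00
```

Every payment in this toy example was correct when it was made; the ‘debt’ is an artefact of the averaging alone.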

In this iteration, the system was known as the Online Compliance Intervention (OCI). In December 2016, then-minister for social services Christian Porter announced the system was capable of issuing debt notices at the rate of 20,000 per week.

A law of averages?

In the absence of other data, robodebt derived benefit-period income by averaging annual ATO income data across the fortnights of the year.

As the royal commission’s work has shown, it was an open question in early 2015, before robodebt came into being, whether an algorithm that identified debt liability on this basis was consistent with legal obligations under the commonwealth’s Social Security Act.

But it would take four years and many damaged lives before the program was halted. Checks and balances in the form of claims to the Administrative Appeals Tribunal and an investigation by the Commonwealth Ombudsman could not stop it.

In the end, the death blow to robodebt was delivered by the Federal Court of Australia in 2019 in the case of Amato v the Commonwealth. The court found the averaging process used to calculate debts from ATO income data was indeed unlawful.

A bombshell revelation from the royal commission’s work is that the Department of Social Services (DSS) was in possession of legal advice as early as 2014 questioning the lawfulness, under the Social Security Act, of relying on averaged PAYG income data for debt calculation.

The royal commission has shown that DSS’s handling of this advice involved concealment, obfuscation and an as-yet-unexplained communications failure.

By the time of its discontinuation, the commonwealth government claimed to have asserted debts of $1.7 billion against 453,000 Australians. Many of these debts were incorrectly calculated, and the notices caused recipients acute distress. Some recipients are reported to have taken their own lives, wholly or in part as a consequence of receiving debt notices.

Colleen Taylor, a former Centrelink compliance officer called as a witness by the royal commission, gave evidence that because many debts were wrongly raised against welfare recipients, the system should be regarded as stealing from them.

The questions of how and why this Vogon-inspired mess happened and why it was allowed to continue for so long have exercised the royal commission in the early phase of its investigation.

Some of the explanation for why it happened is contextual. The Abbott government had been elected in 2013 on the back of a zeitgeist in conservative politics around deficits, taxation and welfare spending.

By 2015, Morrison was building electoral capital on cracking down on suspected welfare fraud. Radio shock jocks were feeding community outrage with selected case studies. Enter stage right Scott Morrison with his welfare ‘cop on the beat’. In his testimony to the royal commission in December 2022, Morrison did not recoil from the tough stance taken.

Process failures and systems design mistakes plagued robodebt from its genesis as a concept in late 2014 to its abandonment years later. Many have been revealed in testimony to the royal commission.

The DSS practice of treating legal advice with which it was uncomfortable as ‘draft’, and hence not ‘finalised’, gave senior DSS bureaucrats cover for not communicating unwelcome advice to ministers. Records of important interactions between senior DSS and DHS executives, ministerial staff and ministers appear not to have been kept.

The design of the robodebt system itself was not human-centric, displayed poor usability and shifted the costs of compliance to benefit recipients.

The raising of a debt in favour of the commonwealth was automatic unless recipients could submit payslips or other financial records showing that the averaged estimate of income from ATO PAYG data was wrong. And it was retrospective, extending across the entire period of receiving welfare, potentially years in some cases. The documentation burden imposed on recipients was huge. Who keeps every payslip they have ever received?

Steeped in process automation, anonymity and a burden of proof that was onerous to users, robodebt tilted the scales of justice in the direction of presumed guilt. It also displayed a total lack of transparency in how debt liability was arrived at. Welfare recipients would be advised in a form letter of income discrepancies. There was no helpline and no explanation of how the discrepancy had been calculated. This embedded unfairness was amplified by instructions given to compliance officers.

Taylor told the royal commission that compliance officers were instructed not to waste time investigating evidence held on file, such as a scanned copy of income statements previously supplied by recipients.

Also in evidence, Taylor described her efforts in 2017 to bring defects in robodebt to the attention of senior management, including the head of DHS, Kathryn Campbell AO. These efforts were unsuccessful, with some DHS documents suggesting that Taylor was ‘overly sympathetic’ to welfare recipients.

In June 2020, with robodebt fatally wounded by the Federal Court and in the throes of abandonment by the Morrison government, the Greens and then Labor called for a royal commission.

Whatever the royal commission concludes about the behaviour of ministers and bureaucrats behind the robodebt saga, its legacy will be measured by the effectiveness and durability of the reforms it recommends. This is what will remain after the curtain falls on the livestream theatre of KCs, squirming ministers and bureaucrats. Royal commissions have a chequered history when it comes to durable, meaningful reform.

With robodebt, the failings of ministers and senior bureaucrats, blame shifting, and the corruption of the public service by the executive power are all writ large in the evidence gathered by the commission. But these areas are notoriously difficult when it comes to durable reform. The commission may need to look elsewhere if it wants to leave a lasting legacy.

Techno-solutionism and algorithms

Robodebt is an Australian case study in techno-solutionism based on algorithms, data sets and automated decision-making.

While it is being treated by the media as unique, algorithmic abuse of vulnerable people is a global phenomenon. Across a variety of application contexts, poorly designed algorithms and automated decision-making have been shown to expose individuals to the risk of violation of their rights.

Facial-recognition systems have attracted the most attention. A study of the London Metropolitan Police’s suspect-recognition system conducted by the University of Essex in 2019 described instances of false positives leading to the wrongful identification of innocent individuals as suspects in crowds.

In June 2020, the New York City chapter of the Association for Computing Machinery (ACM) urged a suspension of private and government use of facial-recognition technology because of “clear bias based on ethnic, racial, gender and other human characteristics.” Calo and Citron (2020) recount another case study, from Arkansas’ Department of Human Services, where:

“As a result of the automated system’s dysfunction, severely disabled Medicaid recipients were left alone without access to food, toilet, and medicine for hours on end. Nearly half of Arkansas Medicaid recipients were negatively affected. Obtaining relief from the software-based outcome was all but impossible.”

Social media platforms are also taking heat over their algorithms. In the washup from the January 6, 2021 riots in Washington, Facebook’s news feed algorithm attracted particular attention for fanning hyper-partisanship, assisting the propagation of conspiracy theories and “incentivizing politicians to take more divisive stands”.

A new front in the quest to make companies and governments more accountable for their use of algorithms has opened up with the rise of artificial intelligence (AI) and machine learning. These developments are transforming the design of automated decision-making systems.

Machine learning is a field of inquiry devoted to understanding and building methods that ‘learn’, that is, methods that leverage data to improve performance on some set of tasks. Machine-learning algorithms build a model based on sample data, known as training data, to make predictions or decisions without being explicitly programmed to do so.
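The idea can be shown in a few lines. The sketch below is purely illustrative, using a toy nearest-neighbour classifier in Python; it bears no relation to any system DHS considered or built.

```python
# A minimal sketch of machine learning: no hand-written decision rules,
# only generalisation from labelled examples. (Illustrative only.)

# Toy training data: (declared_income, averaged_payg_income) pairs,
# labelled True where a genuine overpayment was later confirmed.
training = [
    ((2000.0, 2050.0), False),
    ((0.0, 1900.0), True),
    ((1500.0, 1480.0), False),
    ((100.0, 1700.0), True),
]

def predict(case):
    """1-nearest-neighbour: copy the label of the closest known example."""
    def sq_dist(example):
        (x1, x2), _ = example
        return (x1 - case[0]) ** 2 + (x2 - case[1]) ** 2
    _, label = min(training, key=sq_dist)
    return label

print(predict((50.0, 1800.0)))    # True: resembles confirmed overpayments
print(predict((1900.0, 1950.0)))  # False: resembles correctly paid cases
```

No rule about income discrepancies is ever written down; the classifier simply infers one from the examples it has seen.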

Machine-learning systems have become ubiquitous in the past decade, most commonly embedded in the architecture of social media applications. Increasingly, machine learning is at the heart of automated decision-making that affects almost every facet of daily life, including financial, medical and legal decisions.

Might robodebt have been re-engineered as a machine-learning system? As robodebt took on water in 2017 from the increasing visibility of wrongly calculated debts and customer-usability issues, evidence given to the royal commission suggests that moving the system to a machine-learning architecture was under active consideration by DHS (exhibit 1688 in the royal commission).

With a growing pile of case studies and calls for action, legislators around the world have been working to address the problems of algorithmic accountability and automated decision-making.

In the US in August 2022, senator Ron Wyden introduced an Algorithmic Accountability Act. The case for the bill is described in an explanatory memorandum in the following terms:

When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour. It is long past time the US Congress acts to hold companies and software developers accountable for their discrimination by automation.

In Europe, recent initiatives have targeted AI and robotics. In 2019, the European Parliament by an overwhelming vote adopted a comprehensive European industrial policy on AI and robotics. This included recommendations dealing with ethics and governance.

The EU’s Digital Services Act (2022), which focuses on services that connect consumers with goods, services and content, includes provisions on algorithmic transparency. An example of an action that specifically addresses government use of algorithms is the French Law for a Digital Republic, enacted in 2016. This law extended the principle of information to algorithmic processing:

“Any person who is the subject of an individual administrative decision taken on the basis of an algorithm must be informed of the fact and may demand access to the algorithm’s main operational rules (its contribution, data used, etc.).”

Technology vendors themselves have not been entirely aloof from growing public disquiet over the new frontier of automated decision-making with AI and machine learning. Amazon, Microsoft and Google each offer design guidance, software services and tools aimed at promoting accountability and transparency. But without a regulatory framework, abuse of vulnerable people is set to continue.

In its Letters Patent, the royal commission is given the power to make any recommendations it considers appropriate. Addressing government use of algorithms, AI and automated decision-making systems affecting vulnerable people is clearly within its reporting mandate.

However, to arrive at an understanding of the need for such recommendations, the royal commission will need to look beyond the predictable discourse on policy, organisational and human failure that has characterised its work so far.

This work is important, but the techno-solutionism behind robodebt also needs to be investigated.

Recommendations for reform are needed that combat bias and improve transparency and usability in the technologies that are key enablers of automated decision-making. If the royal commission succeeds at this, an important step will have been taken towards preventing a recurrence of robodebt.

An algorithm ‘cop on the beat’ could prove an unexpected but valuable legacy.


