Predictive algorithms can give better policy answers, if you ask the right questions

By Stephen Easton

March 29, 2017

Can predictive algorithms change the world and usher in a new age of enlightenment, where human biases are eliminated from decision-making that depends on weighing up risks and likelihoods?

That’s more or less one dream for machine learning and artificial intelligence, fields that have matured only recently in the era of big data but have already progressed to the point where, in some cases, computers make better predictions than people.

Combining insights into cognitive biases from behavioural economics with cutting-edge computer science offers the potential to improve social and economic outcomes from policy, according to Michael Hiscox, who leads the Behavioural Economics Team of the Australian Government (BETA).

Cognitive biases are like shortcuts for the brain that simplify hard decisions but often lead to inaccurate guesses about the future, said Hiscox. The Harvard University professor hosted three other leading thinkers from the storied institution at a public administration seminar in Canberra yesterday.

Most decisions we make involve some kind of prediction, but some involve only a small number of predictions about future likelihoods and very little human judgement. The team at BETA think there’s loads of administrative data sitting in government that could underpin a new generation of predictive decision-support tools, Hiscox said.

According to one audience poll, Canberra public servants are fairly optimistic that their departments are open to new and innovative ideas, while another (pictured) showed the audience felt health was the area where predictive tools could help most. But the most illuminating example concerned bail hearings in the United States.

Bail or jail?

The keynote speaker, Harvard economics professor Sendhil Mullainathan, co-authored a paper that suggests an algorithm could help reduce jail populations, reduce crime, and improve racial equity in the justice system, all by significant margins. It suggests bail-or-jail decisions are ripe for this kind of decision support because they are made on very narrow grounds:

“Judges are by law supposed to base their bail decision solely on a prediction: what will the defendant do if released? Will he flee? Or commit another crime? Other factors, such as whether the defendant is guilty, do not enter this decision. The reliance of the bail decision on a prediction makes this an ideal application that plays to the strengths of machine learning.”

“In the US, there are about 12 million arrests every year,” the well-known behavioural economist and author told the ACT chapter of the Institute of Public Administration Australia.

“That’s actually a jaw-dropping number. Kind of abominable, really, if you think about it.”


Jail refers specifically to “a purgatory of sorts” in the US system where those awaiting trial can be remanded, he explained. About three-quarters of a million people are currently held in US jails. The average stay is a few months but much longer stretches of a year or more are common.

Mullainathan quoted some eye-opening results from comparing the performance of a predictive algorithm with the US criminal justice system, which can also be found in the publicly available working paper:

“Many of the defendants flagged by the algorithm as high risk are treated by the judge as if they were low risk. For example, while the algorithm predicts that the riskiest 1% of defendants will have a 62.6% crime risk, judges release 48.5% of them.

“Of course the algorithm’s predictions could be wrong: judges, for example, could be astutely releasing the subset of this group they know to be low risk.

“But the data show that those defendants the algorithm predicted to be risky do in fact commit many crimes: they fail to appear for their court appearances at a 56.3% rate, go on to commit other new crimes at a 62.7% rate, and even commit the most serious crimes (murder, rape and robbery) at a 4.8% rate.”

“Algorithms are not a panacea; I don’t mean to make them out to be,” Mullainathan said later in the speech. Rather, they should be seen as “decision aids, not substitutes” and the idea of completely automating important official decisions anytime soon is “madness” in his view.

“But we have tools that can help people struggling with complex decisions … have predictions at hand that they can then modify as they see fit,” he added.

Public servants should not be overly concerned about building new algorithms, either. The technology is widely available, although the claims of some commercial vendors for off-the-shelf products are sometimes rather exaggerated.

“This is not a technical problem,” said Mullainathan. “I think five years ago this was a technical problem… The problem is actually in deciding the question.”

Asking the right questions

Predictive algorithms will not help solve policy problems where effective interventions still need to be found. In the bail-or-jail example, Mullainathan explained, incarceration is nearly 100% effective in preventing criminal defendants absconding or re-offending. There’s also lots of data at hand about who has fled justice or re-offended in the past to analyse.

“They are things where we need to take some fixed input and make a prediction about some outcome [and] on the basis of that prediction, we make some decisions,” said Mullainathan.

“Problems like these are everywhere and we just aren’t looking for them. But once we start, I think we can revolutionise a lot of these things.”
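Mullainathan’s framing — a fixed input, a predicted outcome, and a human decision informed by that prediction — can be sketched in a few lines of code. The sketch below is purely illustrative and is not from the paper: the features (prior arrests, failed appearances) and the historical records are made up, and it fits a minimal logistic-regression risk model in plain Python to show how a predicted probability becomes a decision aid that the decision-maker can override.

```python
import math

# Hypothetical historical records: (prior_arrests, failed_appearances) -> 1 if
# the defendant absconded or re-offended, 0 otherwise. Entirely made-up data.
history = [
    ((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((2, 1), 1),
    ((5, 2), 1), ((4, 3), 1), ((1, 1), 0), ((6, 4), 1),
    ((3, 2), 1), ((0, 0), 0), ((2, 0), 0), ((7, 3), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a tiny logistic-regression risk model with plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for (x1, x2), y in history:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def risk(prior_arrests, failed_appearances):
    """Predicted probability of flight or re-offence: a decision aid,
    not a substitute for the judge's judgement."""
    return sigmoid(w[0] * prior_arrests + w[1] * failed_appearances + b)

# The prediction is surfaced to the decision-maker, who remains free to override it.
for defendant in [(0, 0), (6, 3)]:
    p = risk(*defendant)
    flag = "high risk - review closely" if p > 0.5 else "low risk"
    print(defendant, round(p, 2), flag)
```

The key design point mirrors Mullainathan’s “decision aids, not substitutes”: the model outputs a probability, not a verdict, and the final bail-or-jail call stays with the judge.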

If one can predict “a very meaningful thing that is going to be super important for the decision”, then the technology could greatly improve outcomes, but it can take a long time to work out the exact variable to predict. “This is where all these algorithms get screwed up,” said the professor.

Mullainathan explained that recent advances in machine intelligence were really only made possible by the ability to analyse vast amounts of data and it’s now possible to produce useful information from a multitude of often unexpected sources.

Algorithms can analyse satellite and aerial photography and gauge economic development through the number of twinkling lights at night, or predict crop yields before farmers even know if it’s going to be a good year. The pings back and forth between mobile phone towers can even be used to measure rainfall, and the list of data sources will only keep increasing.

“We now have, from just the raw mechanical application of machine intelligence, a transformation I think that’s going to take place for policy, which is: tons of new kinds of data,” Mullainathan explained.

“Text is data, images are data, audio files are data, everything on Twitter, everything on Facebook, everything that’s put forward from a radio station or on TV; we can just input all that and start processing it. It’s a crazy amount of data.”

The aim should not be matching human performance, he said, but to find tasks where predictive systems are even better. “We’re not the 100% performance ideal; far from it, we screw these things up all the time!”

IPAA (ACT Division) has published a full video of the event, which also featured Harvard public policy and corporate management professor Brigitte Madrian and Harvard Medical School assistant professor Ziad Obermeyer.
