Indigenous policy evaluation, where it exists, over-relies on anecdotal evidence

By Harley Dennett

June 5, 2017

It’s not just the volume of evaluation that is lacking in Indigenous programs; the way it is conducted is also flawed, says a new report from libertarian-leaning think tank the Centre for Independent Studies.

Just 6% of evaluations of Indigenous Affairs programs — Commonwealth, state, territory, and NGO — were of high quality, report author Sara Hudson found. None of the evaluations analysed used the so-called ‘gold standard’ of evidence: randomised controlled trials.

“Overall, the evaluations were characterised by a lack of data and an over-reliance on anecdotal evidence,” Hudson said.

“What’s more, even when programs have been evaluated, government agencies have ignored them when making funding decisions or implementing new programs.”

Hudson cited Margaret Crawford’s recent audit of the New South Wales program evaluation initiative, which found the effort had little impact because central agencies failed to take charge and did not pass the results on to their ministers. Hudson also noted that “other reports showed many organisations continued to receive funding to deliver programs even after evaluations had identified ‘serious deficiencies’ with them.”


It’s not just NSW that has re-prioritised evaluation in recent years. Governments across Australia have been reinvesting in evaluation, especially in Indigenous programs.

The APS got a rap over the knuckles last year when its head, Dr Martin Parkinson, lamented the lack of credible data in his own PM&C Indigenous Affairs portfolio. Shortly after, $40 million over four years was found for ‘robust, independent evaluation’ of Indigenous policy.

“Given that the average cost of an evaluation is $382,000, the extra $10 million a year will not go far,” Hudson said. “In fact, only 26 of the 1000 or so Indigenous programs funded by the federal government will be able to be formally evaluated.”

“Surely the government’s intention behind this increase in funding for evaluation of Indigenous programs is not to feather the nests of evaluators.”
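For readers who want to check the arithmetic, here is a quick back-of-the-envelope calculation, assuming the $40 million is spread evenly across the four years, as Hudson’s figures imply:

```python
# Back-of-the-envelope check of Hudson's figures (illustrative only).
total_funding = 40_000_000      # $40 million over four years
years = 4
avg_evaluation_cost = 382_000   # average cost of one evaluation, per the CIS report

annual_funding = total_funding / years                     # $10 million a year
evaluations_per_year = annual_funding // avg_evaluation_cost

print(f"Annual funding: ${annual_funding:,.0f}")
print(f"Evaluations fundable per year: {evaluations_per_year:.0f} of ~1000 programs")
# -> roughly 26 evaluations a year, matching the report's claim
```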

Also in The MandarinGovt co-design ‘not an equal partnership’: Aboriginal health CEO

Co-accountability more important than more funding

To make those dollars count, Hudson said, the government should encourage self-evaluation and embed evaluation in a program’s design as part of a continuous quality improvement process.

“Adopting a co-accountability approach to evaluation will ensure that both the government agency funding the program, and the program provider delivering the program, are held accountable for results.

“An overarching evaluation framework could assist with the different levels of outcomes expected over the life of the program and the various indicators needed to measure whether the program is meeting its objectives.

“Feedback loops and a process to escalate any concerns will help to ensure government and program providers monitor one another and program learnings are shared.”
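The report does not prescribe what such an overarching framework would contain. Purely as an illustration, the outcome levels, indicators and feedback loops Hudson describes might be organised along these lines; every detail below is hypothetical, not drawn from the CIS report:

```python
# Hypothetical sketch of an overarching evaluation framework: outcome
# levels expected over the life of a program, each with measurable
# indicators, plus a feedback/escalation path shared by funder and provider.
evaluation_framework = {
    "short_term": {
        "expected_outcomes": ["participants enrolled", "services delivered"],
        "indicators": ["enrolment counts", "attendance rates"],
    },
    "medium_term": {
        "expected_outcomes": ["change in behaviour or health status"],
        "indicators": ["pre/post screening results", "participant feedback"],
    },
    "long_term": {
        "expected_outcomes": ["population-level improvement"],
        "indicators": ["linked administrative data", "community-level statistics"],
    },
    "feedback_loops": {
        "reporting_cycle": "quarterly review shared by agency and provider",
        "escalation": "concerns raised jointly when indicators go off-track",
    },
}
```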

Getting engagement: learning tools work better than performance audits

Why self-evaluation? Hudson noted that evidence suggests organisations are more likely to engage with the evaluation process when it is presented as a learning tool to improve program delivery than when presented as a review or audit of their performance.

“This approach is different from traditional ideas of accountability, and involves moving away from simply monitoring and overseeing programs to supporting a learning and developmental approach to evaluation.

“Use of a reflective practice approach to evaluation relies on a two-way exchange, with the experiences of those delivering the program being used to inform its ongoing implementation.”

What good and bad evaluation looks like

Examples of poor evaluation reports included:

  • A health program in which 432 people participated but full screening data was available for only 34 individuals;
  • An evaluation in which only staff were interviewed, so the data gathered were highly subjective and none of the statements were backed by quantitative statistics or feedback from participants;
  • A program to reduce high rates of conductive hearing loss attributable to middle ear disease that could not be assessed due to the lack of population-level data; and
  • A lack of routinely collected data (such as the absence of Aboriginal identification in RTA road crash records) that made it impossible to link improvements to the program.

Particular features of robust evaluations included:

  • A mixed-method design, involving triangulation of qualitative and quantitative data and analysis of economic components of the program, such as cost-effectiveness;
  • Local input into the design and implementation of the program to ensure program objectives matched community needs;
  • Clear and measurable objectives; and
  • Pre- and post-program data to measure impact (a simple illustration of this follows below).
Examples from the CIS report, which drew on government websites, major philanthropic and NGO websites, and programs listed on the Australian Indigenous HealthInfoNet.
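To make that last feature concrete, here is a minimal, hypothetical sketch of how pre- and post-program data can be used to estimate impact. The participant scores are invented, not drawn from any evaluated program:

```python
# Minimal pre/post impact sketch with invented data. A paired t-test
# checks whether the mean change across participants differs from zero.
from statistics import mean, stdev
from math import sqrt

pre = [52, 61, 48, 70, 55, 63, 58, 49]    # hypothetical baseline scores
post = [58, 66, 55, 74, 60, 70, 61, 57]   # same participants after the program

changes = [b - a for a, b in zip(pre, post)]
mean_change = mean(changes)
# t statistic for the paired differences
t_stat = mean_change / (stdev(changes) / sqrt(len(changes)))

print(f"Mean change: {mean_change:.1f} points "
      f"(paired t = {t_stat:.2f}, n = {len(changes)})")
```

The point of the sketch is simply that measuring each participant before and after the program, rather than relying on staff impressions, yields a quantifiable effect that an evaluator can test.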
