According to documents released in late July, New Zealand has halted a proposed study of a new child abuse screening tool, which relies on big data to discern risk of maltreatment.
The tool, called predictive modeling, is being developed for use in the island nation’s Child, Youth and Family (CYF) call center, which responds to 147,000 calls reporting suspected child abuse a year. It would look at data tracking a family’s contact with various public systems – ranging from health to child welfare – to help determine which children are at the highest risk of being maltreated.
The proposed study would have assigned a risk score to the roughly 60,000 children born in New Zealand in 2015. Researchers would then test how many of the high-risk children were, in fact, maltreated. The study would be for informational purposes alone; social workers would not use the risk scores to change how they respond to allegations of abuse. High-ranking politicians and the New Zealand press have taken the issue head-on, voicing concerns that such a study would create an ethical dilemma: the government would know which children were high-risk but would not intervene until a call of suspected child abuse was made.
“These are children, not lab rats,” Social Development Minister Anne Tolley wrote in the margins of a predictive modeling briefing she received late last year.
Tolley, an elected member of Parliament, is the top politician responsible for child protective services and welfare. She has been criticized by opposition spokesperson Jacinda Ardern for developing predictive analytics to screen for child abuse.
News of the curtailment of the study jolted through New Zealand, with coverage in the country’s four major newspapers and a heated exchange in Parliament between a high-ranking opposition party politician and the governing National Party.
A trove of newly published Ministry of Social Development (MSD) documents reveal that the proposed study design was squarely rejected by Tolley in November 2014. MSD is the largest public agency in New Zealand, responsible for welfare and pensions as well as child protective services.
Responding to a cross-agency predictive modeling working group briefing on next steps for the tool, Tolley strongly opposed the observational study.
The predictive modeling tool is not currently in use. It has been developed and tested on existing data by the MSD and researchers from the University of Auckland. The research has been backward-looking, meaning the risk scores are assigned after the maltreatment has occurred. The first paper describing how big data can be used in child abuse prevention was published in 2012, with a follow-up study published in 2015.
The tool assigns risk scores to children based on 132 data points the government has on file. These data points include the caregiver’s age, how many times the family has changed address and whether the parent is single.
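In broad terms, a tool of this kind maps a family’s administrative variables to a single probability of future maltreatment. The minimal sketch below illustrates the general idea with a logistic scoring function; the feature names, weights, and baseline are entirely hypothetical, since the actual model’s 132 variables and coefficients are not described in the released documents.

```python
import math

# Hypothetical weights for three of the kinds of variables the article
# describes (caregiver age, address changes, single parenthood). These are
# illustrative numbers, not the MSD model's actual coefficients.
HYPOTHETICAL_WEIGHTS = {
    "caregiver_age_under_20": 0.9,
    "address_changes_past_year": 0.4,  # applied per change of address
    "single_parent": 0.6,
}
INTERCEPT = -3.0  # hypothetical baseline log-odds


def risk_score(features):
    """Map a family's feature values to a 0-1 risk score (logistic model)."""
    log_odds = INTERCEPT + sum(
        HYPOTHETICAL_WEIGHTS[name] * value for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-log_odds))


# Example family: young caregiver, three address changes, single parent.
family = {
    "caregiver_age_under_20": 1,
    "address_changes_past_year": 3,
    "single_parent": 1,
}
print(round(risk_score(family), 3))  # prints 0.426
```

In a deployed system, children scoring above some threshold would be flagged as high risk; the ethical dispute described in this article is precisely about what, if anything, the state should do with such a flag.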
The briefing report with Tolley’s annotations was first revealed by Fairfax Media last week, which led to a heated exchange in Parliament. Opposition party spokesperson Jacinda Ardern questioned who gave officials in MSD permission to plan the observational study.
“Who could have told officials to go ahead?” Ardern asked. “[Tolley’s] notes show she asked officials to ‘stop phase 2 planning immediately and talk to me!’”
UNICEF New Zealand National Advocacy Manager Deborah Morris-Travers said that assigning risk scores to newborns and waiting to see what happens would be a “gross breach of human rights.”
The observational study would not have changed current triage practice. Currently, calls received by CYF are assigned based on social worker judgment alone. Nevertheless, the idea of assigning a risk score and not acting on this new information made UNICEF New Zealand uneasy.
“We were really concerned at the proposal, that the model might be applied to newborns and then no intervention occurring, even if there was a sense that the baby was at risk,” Morris-Travers said.
Dr. Rhema Vaithianathan is the researcher who led the initial development of the predictive modeling tool. Vaithianathan did not think the tool needed further validation and did not understand why MSD proposed an observational study.
“I would not have done that,” Vaithianathan said in a phone interview.
Vaithianathan said that she could sympathize with Tolley’s dilemma, but was also frustrated with slow progress.
“If you use this at all, are you culpable?” she asked. “However, the other question to ask is: are we culpable if we know risk scores are available to use and we choose not to use them?”
The political fallout and media headlines concerning the predictive modeling tool show the sensitivity around new analytical techniques for screening child abuse.
Concerns include data privacy, stigma, and how to manage disproportionately higher risk scores that beneficiaries and solo mothers may receive. Even though race is not a category considered by the tool, Māori children are likely to be disproportionately categorized as high risk because Māori children disproportionately come from families receiving government welfare services.
Exploring these issues, MSD has presented a report to a group of information security specialists, prepared a review of Māori ethical issues and issued a privacy impact assessment. These papers are available at the MSD website. MSD has also released an ethical review of the tool conducted by University of Auckland Associate Professor Tim Dare.
Responding to the Minister’s rejection of the study, Dare denounced her “inflammatory” opposition in the newspaper The Dominion Post.
What concerned Dare was not that the study did not go ahead, but how quickly a few handwritten notes by a politician could derail research to better inform social policy.
“Science collided with politics, and politics won,” Dare wrote.
Darian Woods is a Master of Public Policy candidate 2016 at the Goldman School of Public Policy at University of California, Berkeley. He is an editor at the School’s PolicyMatters Journal.