In his November 15 column, Chronicle Senior Editor John Kelly contested Richard Wexler’s argument that the failure of number crunchers to predict the results of the recent election casts doubt on the potential of new predictive analytics tools to identify which children will be maltreated in the future.
I agree with Kelly that Wexler’s thesis is false. A predictive analytics tool is very different from the polling data used to forecast election results. And most importantly, as Kelly points out, the issue is not the tool, but how it is used.
To design AURA, a tool developed for the State of California as a test of the predictive analytics approach, researchers identified deaths, near-deaths, and severe injuries (“AURA events”) among children who had been referred to CPS within the previous six months. In other words, they looked retrospectively at tragedies that had already happened to see whether those events could have been predicted from the circumstances of the families involved.
Researchers then correlated AURA events with a variety of child and family factors. These included child age, family composition, previous CPS reports, and the existence of earlier AURA events. Based on these correlations, they developed an algorithm, or formula, that provided a risk score for each child.
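The kind of risk-scoring formula described above can be sketched in a few lines. The factors, weights, and function name below are purely illustrative assumptions for the sake of the sketch; they are not AURA’s actual model, which was derived statistically from the correlations the researchers found.

```python
# Hypothetical sketch of a risk-scoring formula of the kind described.
# All factor choices and weights here are illustrative assumptions,
# not AURA's actual algorithm.
def risk_score(child_age, prior_reports, prior_aura_events, single_caregiver):
    """Combine family factors into one number; higher means higher risk."""
    score = 0.0
    score += 3.0 if child_age < 3 else 0.0   # very young children assumed higher risk
    score += 1.5 * prior_reports             # each previous CPS report adds weight
    score += 4.0 * prior_aura_events         # earlier severe events weigh heavily
    score += 2.0 if single_caregiver else 0.0  # a family-composition factor
    return score

# A two-year-old with three prior CPS reports and a single caregiver:
print(risk_score(child_age=2, prior_reports=3,
                 prior_aura_events=0, single_caregiver=True))
```

In a real tool, the weights would be fitted to historical data rather than chosen by hand; the point of the sketch is only that the output is a single score that can rank referrals.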
AURA turned out to be a powerful predictive tool. The ten percent of referrals with the highest risk scores accounted for 171 tragic events, or 76 percent of the total. If the families involved had received some sort of intervention (removal or case management), a significant number of tragedies could have been averted. Still, 24 percent of the AURA events would not have been predicted; those missed cases are what is meant by false negatives.
There would also have been over three thousand false positives: children who were predicted to be victims of an AURA event that never occurred, although some of these children might still be abused or neglected and need intervention. That’s why, as Kelly points out, we would not remove a child from parents because of predicted abuse. But we might want to do something less drastic, like offering case management and supportive services, which might help the parents improve their care and also keep the system’s eyes on the child.
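The figures above imply some simple confusion-matrix arithmetic. A minimal sketch, assuming the column’s rounded numbers (171 captured events stated to be 76 percent of the total, and a false-positive count taken as roughly 3,000):

```python
# Back-of-the-envelope arithmetic from the column's figures; the
# false_positives value is an assumption standing in for "over three
# thousand", not an exact count.
captured = 171            # AURA events among the highest-risk 10 percent
capture_rate = 0.76       # stated share of all AURA events in that decile

total_events = round(captured / capture_rate)   # implied total severe events
false_negatives = total_events - captured       # events the tool would miss

false_positives = 3000                          # flagged children with no event
flagged = captured + false_positives            # size of the high-risk group
precision = captured / flagged                  # share of flags that were "hits"

print(total_events, false_negatives, round(precision, 3))
```

The arithmetic makes the trade-off concrete: the same threshold that captures three-quarters of the tragedies also flags a group in which only a few percent of children would actually have suffered an AURA event, which is exactly why the appropriate response to a flag is services and monitoring, not removal.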
Even the Structured Decision Making (SDM) actuarial assessments being used in most states today, while not as advanced as AURA, already provide accurate risk assessments in many cases. In my last column, I wrote about Yonatan Aguilar, who was left at home without case management or services after he was assessed to be at high risk of maltreatment. He was subsequently confined to a closet, where he lived out the remainder of his short life in hunger and abuse.
Richard Wexler seems to worry only that children will be removed because they are deemed at high risk of maltreatment. In the current child welfare climate, which favors family preservation above all else, I’m more worried about more Yonatan Aguilars.
Both types of bad outcomes can be prevented with good policy and practice. Nobody is recommending removing a child who is not in imminent danger of serious harm. But to refuse to use new methods that could save children’s lives on the grounds that they could be misused seems to me an unethical choice. And the relationship of this controversy to the election results seems, as Kelly puts it, tangential at best.