At the intersection of two major Los Angeles freeways sits the county’s child abuse nerve center.
Scores of workers sit at partitioned desks on the sprawling fifth floor of this tall, drab building. They work the phones, fighting through a crush of nearly 600 calls a day, more than 200,000 a year.
Jennie Feria supervises the Child Protection Hotline’s 150 social workers, who staff the phones 24 hours a day, seven days a week.
She leans over a bulky phone riddled with buttons for a dozen or more lines. For the moment, it is silent.
“What people don’t realize is that it can take one and a half hours to generate a referral,” she says. “The referral looks very simple. However, it’s complicated to generate because the computer system is outdated.”
Hotline workers have to pull criminal and other records from a handful of databases, all while using a data management system that went into service in 1997.
Before taking over the hotline, Feria worked with the director of the county’s $2.2 billion Department of Children and Family Services on a project exploring predictive risk modeling to help gauge child abuse risk. The project, conducted by analytics giant SAS, used child welfare data to predict which children were most likely to suffer severe abuse or death.
While reports issued by the developer suggested that predictive risk modeling could identify potential victims with some accuracy, neither it nor any other predictive analytics tool has yet been deployed in Los Angeles.
When asked if using the technology at L.A.’s hotline would be beneficial, Feria says: “It would be tremendous.”
Beyond saving time for hotline workers across the country, who fielded as many as 4 million referrals of child maltreatment in 2015, predictive analytics promises to help this harried workforce better understand which children are in danger of abuse. This, proponents of the practice argue, would allow child protection systems to better target those cases that should be investigated and those that should not.
Potential profit has drawn data analytics giants like SAS, SAP, Oracle and IBM into the mix, all vying for child welfare’s tranche of public spending. The momentum with which predictive analytics has swept the field seems unstoppable, despite the protestations of critics who argue that it will justify ripping families apart because the data says so.
These are not new issues. The notion of mining vast data sets to help determine which children are most likely to be abused has been around for the better part of a decade. And predictive analytics has a longer history in the fields of medicine, fraud detection and insurance. In law enforcement, predictive analytics has brought with it concerns that the algorithms rely on biased data and spit out biased risk scores that amount to little more than racial profiling supported by math.
But in the field of child welfare, where the state wields the power to sever mother from child, the stakes are especially high.
A misguided predictive analytics tool could consign certain parents to increased attention from the child protection system, which could result in their children being wrongly removed.
Conversely, willfully forgoing technology and information that could protect a child, and potentially absolve a parent of the undue perception that he or she is a child abuse risk, brings its own set of moral challenges.
There are two similar but distinct currents within the growing river of thought, technology and capital devoted to applying predictive analytics in child welfare. One is largely dominated by academics and public agency administrators, while the other is fueled by the big software and analytics firms.
The question is how these two groups will harness predictive analytics, and how they will check its power.
Not far from L.A.’s Child Protection Hotline, the University of Southern California’s green lawns and red bricks stand out against a neighborhood of wide streets and squat apartment buildings. In the School of Social Work, one can find the 4-year-old Children’s Data Network.
The network’s co-director, Emily Putnam-Hornstein, broke the mold on child welfare research in 2011 when she and colleagues at the University of California, Berkeley linked birth records, child welfare administrative data and death records for every baby born in California from 1999 to 2002. Of the roughly 2 million babies born in that time frame, almost 300,000 were reported as abused or neglected by their fifth birthday.
Information available on a child’s birth certificate could be disturbingly telling. For example, 5.2 percent of all babies were substantiated victims of child abuse by their fifth birthday, but those born without a father named on the birth certificate were substantiated victims at three times that rate.
When Allegheny County, Pennsylvania, which encompasses Pittsburgh, looked to launch predictive analytics at its hotline in 2014, it contracted with a team led by Rhema Vaithianathan of the University of Auckland, New Zealand, that included Putnam-Hornstein. The project culminated in 2016 with the launch of the Allegheny Family Screening Tool, which pulls data from the county’s pre-existing “Data Warehouse.” That data is crunched into a risk score that child abuse hotline workers use to decide which cases to investigate.
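Neither the tool’s actual features nor its weights appear in this article, so the mechanics can only be gestured at. Below is a minimal, hypothetical sketch in Python of the general approach, reducing a few linked-record features to a single bounded screening score with a logistic function; every feature name and weight here is invented for illustration.

```python
import math

# Invented features and weights, purely for illustration; the real
# Allegheny Family Screening Tool's inputs are not reproduced here.
WEIGHTS = {
    "prior_referrals": 0.6,   # count of prior hotline referrals
    "prior_placements": 0.9,  # count of prior foster care placements
    "public_benefits": 0.2,   # 1 if the household receives benefits
}
BIAS = -3.0

def risk_score(case: dict) -> int:
    """Reduce a case's linked-record features to a score on a 1-20 scale."""
    z = BIAS + sum(w * case.get(k, 0) for k, w in WEIGHTS.items())
    p = 1.0 / (1.0 + math.exp(-z))           # logistic squash into (0, 1)
    return max(1, min(20, 1 + int(p * 20)))  # bucket into 20 risk bands

# A hypothetical referral: two prior referrals, one prior placement.
score = risk_score({"prior_referrals": 2, "prior_placements": 1, "public_benefits": 1})
print(score)
```

A production tool would learn its weights from historical outcome data rather than set them by hand; the logistic form is simply one common modeling choice.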
Part of the design of the project was the commissioning of an ethical review. After weighing various ethical challenges ranging from stigmatization to resource allocation, the review’s authors — Tim Dare, an ethicist from the University of Auckland, and Eileen Gambrill, a U.C. Berkeley professor who has written extensively on ethics in social work — decided that using the tool was “ethically appropriate.”
“Indeed, we believe that there are significant ethical issues in not using the most accurate predictive instrument,” they wrote.
With Allegheny’s predictive analytics tool up and running, the Children’s Data Network set its sights on California, the state with the largest foster care population in the country. In 2016, the data network won a grant from the California Department of Social Services (CDSS) and the Laura and John Arnold Foundation to develop a predictive analytics “proof of concept.”
The process chosen by the data network and CDSS relies heavily on engaging a wide range of stakeholders, including those who are most vehemently opposed to applying predictive analytics.
“We feel really strongly that child protection, particularly, is a very sensitive subject,” says Greg Rose, the CDSS deputy director overseeing the Children and Family Services Division. “It’s intrusive in people’s lives. We want to make sure that if we do anything with predictive risk or predictive analytics, that the community at large has a say in what we’re doing and understands what we’re doing because we want to avoid some of the pitfalls we’ve seen in other places.”
In Allegheny County, the tool is only used by the intake worker to help decide whether or not to send out an investigator. Investigators are not armed with risk scores before they knock on doors. This helps avoid “confirmation bias,” wherein the worker is influenced by the score to see something that isn’t there or to miss something that is.
In both cases, the academics and public administrators are thoughtfully applying the brakes. Is the same true in big business?
The firm farthest down the path of infusing predictive analytics into the DNA of child welfare is SAS. Founded in 1976, SAS is now the planet’s largest privately held software company, posting more than $3 billion in annual revenue and employing a global workforce of 14,000.
In 2015 the firm took a decisive step in its bid to sew up the child abuse prediction market by hiring Will Jones away from Eckerd Kids, a nonprofit providing child welfare services in several states. During Jones’ tenure at Eckerd’s Florida office, the agency developed a predictive analytics tool called Rapid Safety Feedback that began in Hillsborough County and is now entrenched in a number of other jurisdictions throughout the country.
“We’re at the very beginning of utilization of predictive analytics,” Jones said shortly after making the jump to SAS. “I think 15 years down the line, I’d like to see every state provider and every private provider using a tool.”
In 2016, Jones — now employed by SAS as its “child well-being industry consultant” — delivered a lengthy technical report to Florida’s Department of Children and Families. The report, which chronicles one of the most robust data-linkage projects ever undertaken around child maltreatment, claimed that the firm had developed the strongest child abuse prediction algorithm to date by focusing on the many adults in a child’s life who could be a threat.
SAS linked child abuse records with birth and death records, alongside information on public assistance and mothers who had been involved in the state’s home visiting program. The firm’s data scientists looked at every child born in 2004 and 2005, and followed them until 2014, providing as much as 10 years of data.
The firm found that of the 291,499 adults in its study group who had at least one report of child maltreatment, 42 percent would be reported again within 10 years. Roughly 10 percent fell into what the SAS researchers call the “chronic maltreatment group,” those who had five or more maltreatment reports in the study period.
By mining these “perpetrator” networks, SAS says it was able to predict which adults were destined to become what it calls “chronic” perpetrators.
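The predictive model itself is SAS’s own, but the descriptive tiers the report cites, a single report, repeat reports, and a “chronic” group of five or more, are simple to express. This sketch uses invented adults and counts:

```python
from collections import Counter

def classify(report_counts):
    """Bucket adults by their maltreatment-report count over a follow-up
    window, mirroring the tiers in the SAS Florida report: one report,
    repeat reports, and the 'chronic' group of five or more."""
    buckets = Counter()
    for adult, n in report_counts.items():
        if n >= 5:
            buckets["chronic"] += 1
        elif n >= 2:
            buckets["re-reported"] += 1
        elif n == 1:
            buckets["one report"] += 1
    return buckets

# Invented report counts for four hypothetical adults.
print(classify({"adult_a": 1, "adult_b": 2, "adult_c": 6, "adult_d": 1}))
```

Predicting which adults will end up in the chronic tier, as SAS claims to do, would require a model trained over the linked records; this only reproduces the after-the-fact grouping.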
All of the information used by SAS is already available to Florida, Jones says. “We are not necessarily tapping into data sources they don’t have access to in some shape or form. We are simply making it more automated.”
Tim Dare, the primary author of the ethics review conducted for Allegheny County, doesn’t find Jones’ argument entirely satisfactory.
“The tools being developed allow those interrogating the data to spot correlations between data points, and to examine larger sets of data, in ways which were previously impossible,” Dare says. “The tools allow us to know things we could not [have] previously known, even if the data was in some sense available to us. Given that, it’s reasonable to think the tools create new ethical questions, even if they are looking only at pre-existing information.”
The Red Light
During a panel at a foster care “hackathon” hosted by the White House in 2016, the former commissioner of New York’s child welfare agency shared her concerns about predictive analytics.
“It scares the hell out of me,” Gladys Carrión said, inciting half-nervous chuckles in the room.
“What specifically scares you?” asked DJ Patil, former chief data scientist for the White House Office of Science and Technology Policy.
“I think about how we are impacting and infringing on people’s civil liberties,” Carrión said. She added that she runs a system “that exclusively serves black and brown children and families … I am concerned about widening the net under the guise that we are going to help them. How can we use these tools to keep children and families in communities together?”
“I am concerned about stigmatizing families,” she went on. “I don’t think this is the Holy Grail, but I do think it is an effective tool. So how do I mitigate that?”
The field has moved toward answering some of these ethical questions.
Back at L.A.’s Child Protection Hotline, one of Feria’s workers, Krystal Boulden, finishes a call. A foster youth had returned to his group home after going “AWOL.”
For a moment, Boulden can pause and set down her headset.
“There are times when there are so many calls coming in that we don’t have enough screeners to take the calls,” she says. “On our phones we will see a flashing red light that tells us there are calls in the queue waiting to be answered. When I see the red light flashing, it’s time for us to get serious.”
She turns to her desk, and soon enough the phone is ringing, red light flashing.