
For decades, social workers investigating Los Angeles County parents accused of child abuse and neglect have relied on training, in-person interviews, consultations with supervisors and a straightforward, 16-item risk assessment to decide how cases should proceed.
But in recent months, county workers who decide whether or not kids should be removed from their homes have begun using a new, more high-powered tool. Like a dozen child welfare agencies across the country, the Department of Children and Family Services is now counting on an advanced algorithm to help identify children who may be in the greatest jeopardy, and which homes require heightened scrutiny.
In Los Angeles County, after social workers conclude that an investigation is warranted, the tool scans 313 data points to reach its conclusions. The most critical cases receive a “complex risk” flag — a determination signifying that without an intervention, children in those homes are likely to end up in foster care over the next two years.
Proponents say the algorithm will reduce human errors in judgment and better direct county-funded services to families most in need of government intervention to keep children safe.

But legal scholars, privacy advocates and civil rights activists have expressed grave concerns about relying on algorithms and artificial intelligence to monitor families. They say the stepped-up surveillance will draw more children into the troubled foster care system and exacerbate its stark racial and socioeconomic disparities.
L.A. County’s Department of Children and Family Services is the most recent child welfare agency to turn to such advanced analytics, already in use in states from Oregon to New York.
“Given the department’s track record, this is going to result in the systematic dismantling of more Black families,” said Chris Martin, a former parents’ attorney and organizer with Black Lives Matter – Los Angeles. “This is going to hurt Black families a lot more than it is going to help them.”
Emily Putnam-Hornstein, a professor at the University of North Carolina at Chapel Hill’s School of Social Work who led development of L.A. County’s algorithm, maintains that failing to deploy tools like these is “a disservice to children and families in our community, who deserve a fair and equitable response from a system.”
In promoting her predictive analytics methods for child welfare systems nationwide, Putnam-Hornstein argues that better use of data on the “front end” of child welfare systems — hotlines flooded with roughly 4 million child maltreatment reports each year — is urgently needed. She calls for “a skilled workforce equipped with information,” including the technology and training to focus on the most serious cases.

“The potential to miss kids who need services, or to over-intervene with a family who may not need an investigation, comes with real consequences,” she said at a social work conference last year. On the other end of the spectrum, she added: “For a small number of very serious kinds of situations, if we make a practice error, we miss gathering information that’s potentially tragic.”
In Los Angeles County — home to the largest local child welfare system in the nation — the Risk Stratified Supervision Protocol that Putnam-Hornstein developed concluded a three-month test run at three regional offices late last year.
More than 38,000 kids moved through the foster care system here last year, 82% of them Black or Latino. In the coming weeks, the county will decide whether the algorithm will be rolled out for use at all 20 child welfare offices.
So far, according to preliminary statistics the Department of Children and Family Services shared with The Imprint, 49% of the children in Lancaster office investigations deemed complex-risk were Black, in an office where 42% of all investigations involve Black children. Both figures stand in sharp contrast with the region’s overall child population, just 16.4% of which is Black.
California quietly shuts down a predictive analytics pilot
This is not the first time that California child welfare officials have tested out a more data-driven approach.
In late 2016, the state launched a study of another predictive analytics tool built by Putnam-Hornstein’s team for use in the child welfare system — testing whether an advanced algorithm would help social workers better triage cases at the early screening stage. The proposed new method involved risk calculations produced at record speed when reports of abuse and neglect came through county hotlines. Its aim was to determine which households were most in need of an urgent in-home social worker investigation.
For about three years, county and state officials, child welfare industry leaders, data scientists, and advocates for kids and parents monitored the Predictive Risk Modeling (PRM) project — evaluating it for potential use in systems involving roughly 83,000 children statewide each year. The professionals gathered in public meetings and received regular updates as the state reviewed the project. Once the review was complete, interested local governments planned to test out the new predictive analytics tool in their communities.
But California’s Department of Social Services quietly backed away from pursuing the new method of assessing risk, developed by Putnam-Hornstein and other researchers at the University of Southern California, along with Rhema Vaithianathan, a professor of social data analytics at the Auckland University of Technology in New Zealand.
In November 2019, state officials notified a small group of county child welfare leaders and other stakeholders that the state was shutting down its predictive analytics project. But they did not attach a document explaining why, one obtained last year by The Imprint.

In it, state social service analysts concluded that the tool — if used to rank households and identify those most in need of government intervention at the hotline stage of the child welfare system — would not keep kids safer, and could lead to racial profiling.
“Using PRM to guide hotline screening decisions may result in overlooked safety issues for many families,” the department’s research services branch stated in its report.
About 90% of cases ranked as “low risk” by the PRM model had safety threats, the report found, suggesting that families could be harmed by focusing on “long-term risk instead of immediate safety concerns.”
Ten months ago, The Imprint requested any and all documents related to the state’s predictive analytics project; records have been sent in batches since last June. But the state has yet to produce the report that was provided confidentially to a reporter.
Nonetheless, a spokesperson for the state Department of Social Services confirmed the report’s authenticity. Scott Murray stated in an email late last year that “the state’s research found that the PRM model being explored at that time did not align with the purpose of hotline screening and that the tool may miss immediate safety concerns in families.” Murray also noted that social workers must rely on immediate safety concerns — not the future or long-term risk of removal — when they decide whether to investigate an allegation of abuse or neglect.
Officials also weighed an ethical review of the predictive analytics project in 2019, an assessment that led California data analysts to other grave conclusions, in a state with the most disproportionate share of Black foster children in the nation, according to the National Center on Juvenile Justice.
“The most worrisome critique of PRM is that it could continue racially biased decision-making in child welfare practice,” the document stated.
The state concluded that because the model relies on previous actions by the child welfare system to assess a parent’s current risk to a child — “and previous actions may have been influenced by racial bias” — using such data could perpetuate profiling. The impacts would fall most heavily on Black and Native American families, populations “who have historically been profiled as higher risk.”
What’s more, “PRM’s reliance on historical data likely introduces bias that cannot be fully eliminated, even if some variables are removed from the algorithm.”

Kathy Icenhower, CEO of the South Los Angeles-based Shields for Families and a member of the committee that reviewed the tool’s development, said the state’s conclusions revealed the algorithm to be “a horrible violation of families.”
“If they could have used that tool to link families to services in the community — not just open up a case on them — I think it could have been useful,” she said. “But I don’t know that there’s any way to safeguard its use to make sure that that happens. That makes it a risky proposition.”
Putnam-Hornstein maintains that the real-time model, built to predict future events, was never designed to replace the current hotline tool, but she believes it would improve risk assessments and screening decisions for child welfare agencies.
Responding to racial equity critiques of her work, Putnam-Hornstein noted features she has incorporated in the algorithm created for L.A. County to help “mitigate bias,” including testing its impact and accuracy across racial and ethnic “sub groups,” and producing regular reports to track outcomes. But she also said she is “always eager to receive feedback and other ideas” to guard against it.
The contested use of algorithms in child welfare systems
The use of automated decision-making to predict human behavior is omnipresent in daily life, from algorithms that analyze shopping patterns to technology that spits out probationers’ odds of re-arrest.
For overburdened child welfare systems, the method offers a tantalizing option for prioritizing which cases should be considered the most urgent.
Agencies making those calls face a complex calculation. Social workers must avoid removing kids from homes where parents may simply be in need of help with addiction, housing or mental health treatment. But they are also held responsible when a child who could have been removed from home is severely harmed or — in the rare, but often most high-profile cases — killed by a caregiver.

Beverly “BJ” Walker, a consultant and former child welfare director in Georgia and Illinois, said agency leaders often lack tools that can help them make difficult decisions about how to intervene in the lives of families in crisis.
“What you wind up with is decisions that often don’t have enough science, enough grounding in anything except gut,” she said. “A lot of this is based on what you see or perceive with the naked eye — and predictive models are one way to improve on blindness.”
According to a recent report by the national office of the American Civil Liberties Union, child welfare agencies in at least 26 states and Washington, D.C., have considered using big-data tools, and at least 11 have deployed them. Agencies use the tools in a variety of ways: determining which children should be placed into foster care, zeroing in on geographical areas where child maltreatment is most likely to occur, and evaluating the likelihood of a successful family reunification. New York City’s child welfare agency uses predictive analytics for “quality-assurance reviews” on thousands of active investigations determined to involve the greatest likelihood of future harm to a child.
But civil rights attorneys and some scientists, among other critics, have raised alarm about the practice.
Aaron Horowitz is a data scientist who helped author the ACLU’s September report, “Family Surveillance by Algorithm: The Rapidly Spreading Tools Few Have Heard Of.”
At a November Fordham University conference, Horowitz said child welfare systems are only beginning to explore the use of algorithms. But after nearly a decade of use in the criminal justice field, he added, predictive analytics has not met expectations that it would tackle racial disparities as promised. That’s because the complexity of predictive analytics tools masks subjective decisions about what data is used and what is being predicted.
“There are a lot of decisions that are made along the way of building an algorithm that are values choices that are not just scientific or evidence-based choices,” he said.
There are also concerns about those who end up subjected to experimental risk assessment tools.
“Families embroiled in these systems are being used as guinea pigs,” said University of Baltimore law professor Michele Gilman, a former federal civil rights attorney who researches the intersection between digital technology and low-income communities. “They should be tested long before they’re deployed on vulnerable populations.”
Advocates for low-income communities who participated in the California predictive analytics advisory committee questioned the underlying data that drives child maltreatment risk scores — calling the data unfairly skewed. Children who come to the attention of the authorities are overwhelmingly poor, and the allegations against their parents mostly center on neglect, not physical or sexual abuse. In many parts of the country, as many as half of all Native American and Black children become the subject of a government investigation before age 18.
“We can safely say that there’s been a whole lot of bias in our system,” said Shields for Families CEO Icenhower. “If it’s the same data that we’re then using as a part of what we’re doing to predict the future, then are we really predicting anything other than continued bias?”
Big-data models raise reliability concerns
Skeptics also question the accuracy of methods that attempt to predict child maltreatment and other outcomes for families in crisis.
Last year, the Proceedings of the National Academy of Sciences published the results of the “Fragile Families Challenge,” a scientific contest comparing efforts to predict life trajectories for vulnerable children and families. One hundred and sixty research teams analyzing data from a longitudinal study involving thousands of families set out to predict several outcomes — including a child’s grade point average, household poverty and whether a family would be evicted.
Though the teams used a variety of machine-learning techniques, the researchers concluded that nearly all of the algorithms had performed poorly.
“For policymakers considering using predictive models in settings such as criminal justice and child-protective services, these results raise a number of concerns,” the study’s authors warned. “Before using complex predictive models, we recommend that policymakers determine whether the achievable level of predictive accuracy is appropriate for the setting where the predictions will be used, whether complex models are more accurate than simple models or domain experts.”
University of North Carolina professor Putnam-Hornstein, the architect of the predictive risk models designed for both the state of California and Los Angeles County, is a well-known researcher in the child welfare field, with appointments at the University of Southern California and University of California, Berkeley. She and professor Vaithianathan have also created predictive analytics tools for child welfare systems in Pennsylvania and Colorado.
Before working on the California project, Putnam-Hornstein and Vaithianathan helped develop the Allegheny Family Screening Tool, in use since 2016 at the hotline stage in the county that includes Pittsburgh. Unlike the tool tested in California — which relies only on child welfare data — the tool built for the Allegheny County Department of Human Services in Pennsylvania draws on numerous databases to weigh a parent’s risk of abusing a child, including criminal, medical, mental health, welfare and education records.
Last year, Allegheny County’s Department of Human Services began using predictive analytics technology to screen potentially all newborns in the county. The Hello Baby program — also designed by Vaithianathan with help from Putnam-Hornstein — is not affiliated with child protective services; with parents’ consent, it screens newborns for the risk of future foster care placement before age 3.
An independent review for the county human services department by two professors — one from University College London and another from Chapin Hall at the University of Chicago, a top child welfare research institution — examined the planned design for Hello Baby and explored its ethical considerations. The 2020 review resulted in 20 recommendations, the first of which encouraged local officials “to vigorously look for and invest in other ways to reach out to families in need of support services that do not rely on algorithmic systems.” The review’s second recommendation, now current practice, was for the county to “pledge that this predictive system be used to provide only voluntary supportive services, rather than to start investigations or to directly inform coercive powers.”
No universal screening project was ever proposed in California. But state officials here expressed concerns about any future use of “universal-level risk stratification,” the model now used in Allegheny County. The July 2019 social services document called the method “unethical” and said the department “has no intention to use it now or in the future.” Identifying and “proactively targeting” families with no involvement in the child welfare system, it stated, “is a violation of families’ privacy and their rights to parent as they see fit.”
Relying on that method, the state concluded, “would be an overreach in the roles and responsibilities of a government agency.”
But the prospect of algorithmic screening for potentially dangerous parents in the child welfare system has been well-received in Los Angeles County — a region reeling from highly publicized deaths of children the local agency was accused of overlooking, including one chronicled in a recent multi-part Netflix documentary.
The county first tested out predictive analytics nearly a decade ago. Following the deaths of several children at the hands of abusive parents, former Los Angeles County Department of Children and Family Services Director Philip Browning hired the software firm SAS to better identify risky households and prevent such tragedies. In a previous leadership position, Browning had contracted with the tech giant to create an algorithm that would root out welfare fraud at the county’s Department of Public Social Services.
In 2014, the SAS algorithm, dubbed the Approach to Understanding Risk Assessment, or AURA, mined 4,000 child welfare cases from a three-year period. Relying on child welfare system history as well as law enforcement, mental health and substance abuse treatment records, AURA correctly identified the 171 cases where children died or were seriously injured. But it included in its highest-risk-for-harm category more than 3,800 other children — or 95% of the 4,000 cases — who had not experienced such an event.
“We must be very careful not to, in any way, indicate that AURA is a predictive tool,” a Los Angeles County Department of Children and Family Services spokesperson said in 2015. The county shuttered the project shortly thereafter.
Other attempts at predicting child fatalities and serious incidents have also come up short. In Illinois, Eckerd’s Rapid Safety Feedback program was discontinued in 2017, after thousands of children were falsely assigned to a list tagging them with a 90% or greater probability of death or injury, while actual incidents of serious injury escaped the algorithm’s attention, according to the state’s Department of Children and Family Services, then headed by Walker. In 2017, the Chicago Tribune’s investigative team reported that “caseworkers were alarmed and overwhelmed by alerts as thousands of children were rated as needing urgent protection.” At the same time, the newspaper found, children in the system had died “with little warning from the predictive analytics software.”
L.A. tries predictive analytics, again
Fast-forward to 2021 in Los Angeles County. After more than a year of development, the new algorithmic tool had a “soft launch” on Aug. 2 in three offices. Local officials said the information helps social workers more quickly complete thorough child maltreatment investigations within the mandated 30-day timeframe.
In recent months, social workers in the Lancaster, Belvedere and Santa Fe Springs offices have received nightly computer-generated reports that place a “complex risk” flag next to the top 10% of investigations of parents whose children are most likely to return to the child welfare system. Information in the ranking system includes everything from whether a family has been investigated before and what issues brought them to the attention of authorities, to whether a child tested positive for drugs at birth or experienced domestic violence — even whether a parent was once in foster care as a child.

Bobby Cagle, the recently departed director of the county’s Department of Children and Family Services, said predictive analytics gives caseworkers and supervisors “another set of eyes, metaphorically.”
Unlike the model presented to the state, L.A. County’s risk tool is being used at a different point in the system — not at the decision-making stage when a call comes in to the child protection hotline, but later in a child’s case, after an investigation has begun. Local officials say supervisors can use the tool to match the highest-risk cases with the most experienced social workers, or to better tailor services for parents.
Data from the first 90 days of the Risk Stratified Supervision Protocol will be analyzed by an outside evaluation team, and findings will be publicly released early this year. The county also plans to deploy a “racial equity feedback loop,” using the algorithm to provide regular analyses of Black families and whether they would be better served by community-based organizations rather than the Department of Children and Family Services.
Like child welfare directors before him, Cagle said he believes deep dives into data could prevent cases like those the agency knew about, ones “where we could have done something better.”
“My hope is that by giving social workers additional information up front,” he said, “we can cut down on these tragic situations.”
Cagle’s replacement, Acting Director Ginger Pryor, reiterated her support for the tool.
“It is imperative for us to have the most effective technology and tools available to ensure we make decisions that result in the best possible outcomes for children and families,” Pryor said in a statement this week.
Many social workers and child welfare advocates interviewed by The Imprint say they didn’t learn about the project until after it had been launched last summer. At a public meeting in September, social workers said the tool has helped them decide who is most in need of their services and attention. But with the project poised for wider expansion, some expressed concern that it is too soon to understand how use of the algorithm may impact struggling families.

Along with other local lawyers for families in the foster care system, activist and attorney Martin said “complex risk” alerts based on historical child welfare data could be used in court as additional evidence to separate Black children from their parents.
The concerns are shared by national scholars. In an October lecture, University of Pennsylvania sociology and law professor Dorothy Roberts said predictive analytics will not reduce, but only further embed bias deeper into the child welfare system, making it all the more difficult to root out.
“Not only are structural inequities coded into the data and algorithms,” she said. “A future predicted by today’s algorithms is predetermined to correspond to past inequalities.”
This story has been corrected to note that the 49% of Black children who were the subject of investigations deemed complex-risk were limited to those served by the Lancaster office. Some context has also been provided.