A column in The Imprint last month begins by quoting an odd claim made by David Sanders, in the preface to the report of a commission he chaired. He wrote: “Child protection is perhaps the only field where some child deaths are assumed to be inevitable, no matter how hard we work to stop them.”
Really? I know of no fire chief who claims we can ensure that no child will ever die in a fire. I know of no police chief who says we can ensure that no child will ever be shot to death on the street. I know of no doctor who promises that no child will ever die of cancer.
Zero deaths is always the goal. But every other field recognizes that, in pursuing it, our reach will exceed our grasp. Suggesting otherwise is a sign not of nobility but of hubris.
The column in question touts the value of applying what its authors call “safety science” to child welfare. They are vague about what this means. But, based on the report of the commission Sanders chaired, which the authors cite as their model, there is nothing new and nothing scientific about it in the context of child welfare.
It’s just a label to sanitize family policing systems’ (a more accurate phrase than “child welfare systems”) standard response to tragedy: extrapolate from events that are as rare as they are horrific and impose broad-brush “solutions” that lead to more surveillance of impoverished families and more removal of their children. These solutions do nothing to achieve the zero-fatality goal, and can even take systems farther from it.
The column authors are members of something they call the National Partnership for Child Safety. In a press release, the partnership says its goal is to “further key recommendations and findings” from the report of that commission Sanders chaired, the Commission to Eliminate Child Abuse and Neglect Fatalities (CECANF), a report my organization critiqued in detail here. So that’s where we should look to see what safety science is all about.
And that’s where we see a crucial problem: The report, and the column’s authors, emphasize that safety science has worked in aviation. But children are not airplanes.
For starters, in the airline industry, there are a million frustrations and inconveniences but only one kind of horror: a crash. So, for example, if investigators conclude from two crashes that there is a deadly flaw in the design of the Boeing 737 MAX, and that pilots were not properly trained to fly it, authorities can order all such planes grounded until the problems are fixed, and the worst that will happen is some canceled flights.
But in family policing there are interlocking horrors and the solution to one can exacerbate another.
So, for example, CECANF’s core recommendation, known among commissioners as the “surge” or the “accelerant” — until they decided that was bad public relations — urges states to review every child abuse fatality during the past five years. Then, if they find even one common “risk factor,” states are to re-investigate every open case that shares it.
The driving force behind the commission, Michael Petit, passionately declared that the surge will allow caseworkers to go into homes to determine “who among these children is going to be killed.” Talk about hubris!
Data scientists and policy researchers say otherwise. When Child Trends listed “5 Myths about Child Maltreatment,” the No. 1 myth was “We can predict, with certainty, which children will be maltreated based on risk factors.” And in its official response to the commission report, the Department of Health and Human Services warned:
In small states, a single incident rather than a systemic issue can dramatically affect annual statistics. In addition, in small states an analysis of data from the past five years … would include too few cases to draw definitive conclusions.
The commission neglected to estimate how many children might be affected by such reinvestigations, but the number is probably in the hundreds of thousands. All of these children would be forced to relive the trauma of the initial investigation. Some almost certainly would be subjected to needless foster care, which itself can be dangerous for kids.
In “child welfare,” we’re not talking about inflicting the inconvenience of canceled flights; we’re talking about inflicting the trauma of canceled childhoods. At the same time, all those workers reinvestigating old cases will have less time to investigate new ones, so those investigations will be more cursory, increasing the danger that some of the very few children in real danger will be missed.
This is not speculation. There actually was such a surge, in Connecticut in 1995. It was ordered by the governor after the deaths of three children “known to the system.” Foster care skyrocketed, but child abuse deaths increased.
So surely any true disciple of “safety science” would understand the real lesson: Don’t do a “surge”; it backfires. But if safety science is just a fancy label to justify raining down more trauma upon overwhelmingly poor, disproportionately nonwhite families, then you say a surge will tell us “who among these children is going to be killed” and do it anyway.
There are other problems inherent in analogizing airplanes to children. With plane crashes, the problem often is mechanical: a discrete failure that can be identified and fixed. Child welfare tragedies are matters of complex human interaction and subjective decision-making, where it may not be possible to pinpoint any one key error that led to tragedy.
Another crucial difference: Everybody knows that air travel is the safest form of transportation. Even people who are afraid to fly know it. So after a plane crash, no one suggests abolishing air travel. In child welfare, the safest approach almost always is family preservation. But more than 50 years of misrepresenting the true nature and scope of the problem in the name of “raising awareness” has conditioned most Americans to believe the “Big Lie” of American child welfare: that family preservation is inherently risky while foster care, for all its flaws, supposedly is “safe.”
So when your examination of system failings is limited to one kind of horror story, it encourages a foster care panic; even more children are torn from everyone they know and love and consigned to the chaos of foster care.
In an industry like child welfare, genuine safety science demands a method that can spot the errors in all directions, including wrongful removal. There is such a method: random sampling, in which independent reviewers examine a large sample of cases, chosen at random, and assess what went wrong, and what went right.
At my organization’s suggestion, Kevin Ryan took this approach when he was New Jersey’s “Child Advocate” in 2006. The result was a comprehensive examination of cases where the state was right to place children in foster care — and where it was wrong. That’s how real science works.
For random sampling to work, two things have to happen: You need a large enough sample to be sure you can generalize from the findings. And you need a breadth of viewpoints and expertise among the reviewers. The federal government’s Child and Family Services Reviews include sampling, but the process meets neither of these criteria.
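What does “large enough” mean? The standard statistical formula for estimating a proportion gives a rough sense of scale. Here is a minimal sketch in Python, assuming simple random sampling and a worst-case error rate; the function name and the numbers are illustrative, not drawn from any agency’s methodology:

```python
from math import ceil

def required_sample_size(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum simple-random-sample size needed to estimate a rate
    (e.g., how often removals were wrongful) to within +/- `margin`,
    at the confidence level implied by z (1.96 = 95% confidence).
    p = 0.5 is the worst case, so it gives the largest, safest n."""
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

# To pin down an error rate within +/- 5 percentage points at 95%
# confidence, a reviewer needs roughly 385 randomly chosen cases:
print(required_sample_size(0.05))  # 385
```

The precise inputs can be argued over; the order of magnitude cannot. Any review meant to support claims about an entire system needs hundreds of randomly chosen cases, not a handful.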
But “safety science” gives those who don’t want to know about wrongful removal a new excuse, as can be seen in a state moving full-speed backwards.
Maine was briefly a national leader in reforming child welfare. But from 2017 through 2019, in the wake of deaths of children “known to the system,” the number of children torn from their homes skyrocketed by more than 50%. The deaths did not stop.
One person who saw no problem with any of this was Maine’s “Child Advocate,” Christine Alberi. She told a legislative committee she saw no wrongful removal and no sign of foster care panic. But then, she hasn’t looked.
Alberi investigates a case only if there is a complaint and, crucially, only if she chooses to accept the complaint for investigation. So — surprise! — in her reports, caseworkers are praised only when they demand more foster care, slower reunification or more surveillance of families; they are criticized for just about everything else.
At that same legislative hearing, Alberi was asked whether it wouldn’t be better to examine a random sample of cases. She said no; reviewing three cases, or even just one, is plenty. She knows this, she said, because experts in “safety science” told her so.