Nothing shakes the fabric of a child protection agency like a death caused by abuse or neglect in a family that it knew about. To the surrounding world, the agency has failed in its primary mission, keeping a child safe.
There are cries for accountability. The media excoriates the agency for its failure. Legislators hold hearings. Blue Ribbon Panels are convened. Critical Incident Review Teams are called into action. Caseworkers cower in fear of being tied to the whipping post, or worse.
Directors often are replaced. Recommendations are brought forth; some are implemented while others languish. And then … there is a lull until the next tragedy, when the cycle repeats all over again.
Erik Hollnagel, in his book Safety-I and Safety-II, details the evolution of safety thinking and explains why “Safety-I” – the methods of the past, such as root cause analysis, with their reliance on human error causality – no longer fits the complexity of today’s child protection systems.
Why does this misfit exist? Safety-I assumes a linearity between cause and effect. This approach derives from machine technologies where there are reliable and predictable relationships between components, a predictable “if this, then that” world.
But the core methods used by child protective agencies to achieve outcomes rely almost exclusively on human social interactions, which are highly variable and involve constantly changing dynamics. In such cases, outcomes are emergent, better understood as patterns in motion influenced by surrounding conditions, and are not explainable by the linear methods of decomposition used in Safety-I.
This is partly why what has been learned from Safety-I-style investigations of maltreatment deaths has not demonstrated clear evidence of success in preventing future deaths. Another principal reason for this failure is the near-complete focus on human error as the explanation for system failure. In reality, that is an incomplete explanation of critical incident causality.
Human error findings mostly fail to address the organizational context of performance. The turmoil created by singling out humans for blame and shame following a death damages the social fabric the child protection agency needs to get its work done, undermining organizational resilience rather than building it. As a result, even when short-term gains are realized, progress cannot be sustained.
Blaming human error does not get rid of the conditions that gave rise to the trouble humans got into. What is needed to learn from a tragic incident is an understanding of why people did what they did, and why it made sense for people to do what they did. Human error is often a symptom of deeper trouble in the organization, not the cause. It is systemically connected to features of people’s tools, tasks and working environment.
Another failing of our current approach to managing safety is not appreciating that more things go right than go wrong. Hollnagel suggests we learn from our successes, not just our mistakes. Leaders with experience in the field understand that the front line gets work done, often in spite of political leadership and organizational management and in spite of economic and workload constraints.
It is also clear to all who do this work that the search for human error will result in more regulations and policies – recommendations designed to satisfy an accountability-thirsty crowd. The distance between top management and the field is long, and it is measured in more than miles. To fully understand how critical incidents occur, one must understand the adaptive nature of work-as-done, not just deviations from work-as-imagined.
A place to start in the effort to improve safety is with the front-line staff who do the work. Start with the view that there is wisdom in the field and that there is much to be learned from the many things that go right. The efforts they make to work around shortages of resources, time and knowledge offer extremely valuable information for system improvement.
Talk with them, begin your inquiries with them and seek their advice as to how to do this work better. Learn from the people who were inside the tunnel, without knowledge of the outcome, when they had to decide and act. Hindsight causes people to evaluate decisions and processes differently once the outcome is known.
Focusing on what goes right and on how to make as many things as possible go right builds resilience. As David Woods et al. note in Behind Human Error, resilience engineering focuses on the ability of systems to recognize, adapt to and absorb disruptions and disturbances, especially those that challenge the base capabilities of the system. In contrast, focusing on human error destroys confidence, builds defensiveness and leads to concealment of information critical to understanding how the system works, how its environment develops and changes, and how functions may depend on and affect each other.
It is still necessary to remain sensitive to the possibility of failure, or to have a constant sense of unease. This can be done by first trying to think of – or even make a list of – undesirable situations and imagine how they may occur. And then to think of ways in which they can either be prevented from happening or be recognized and responded to as they happen. Such thinking is essential to proactive safety management, but it must be done continuously, not just once.
An organization can only remain sensitive to the possibility of failure if everyone pays close attention to how the social and relational infrastructure of the organization is shaped. Being sensitive to the possibility of failure allows the organization to anticipate, and thereby prevent, the compounding of small problems or failures by pointing to small adjustments that can dampen potentially harmful combinations of performance variability. As Hollnagel states: “Many adverse outcomes stem from the opportunistic aggregation of short cuts in combination with inadequate process supervision or hazard identification.”
As Sidney Dekker observes in The Field Guide to Understanding ‘Human Error’, human error is the starting point for analysis, not the ending point. If we are going to investigate agency-involved child maltreatment fatalities as an organizational phenomenon, we must move away from notions of simple linear causality and human error.
Tom Morton is the former president of the Child Welfare Institute, and former director of the Clark County Department of Family Services in Nevada. Jess McDonald is the former child welfare director for the state of Illinois.