Predictive risk modeling (PRM) offers promising new opportunities to address large, entrenched problems.
In child welfare, one of those problems is accurately identifying children at risk of maltreatment, work that requires gauging not only immediate risk but also the future likelihood of harm.
While clinicians or social workers might be good at identifying people in immediate danger, recognizing often complex patterns of “long arc” risk is much more difficult.
PRM enables child welfare staff to identify earlier those individuals who are at long-arc risk of adverse outcomes and to help them avoid those outcomes.
Transforming the promise of PRM into tangible change on the front lines of child welfare practice is a new, challenging and rewarding adventure for researchers and agencies.
Predictive risk models are similar to the clinical safety and risk checklists used in some jurisdictions in that they aim to help workers make better decisions, but there is a crucial difference. Checklists rely on inputs from the caseworker, whereas PRMs are fully automated and draw only on existing data.
I recently led the build and implementation of the Allegheny Family Screening Tool, a child welfare PRM for Allegheny County, Pennsylvania. It provides call screeners with a screening score from 1 (lowest risk) to 20 (highest risk). The score indicates long-arc risk of adverse outcomes (placement in care or re-referral).
Importantly, while screening scores are generated automatically, decisions are not. In Allegheny County, the screening score has become an added ingredient in the human decision-making process.
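For readers curious how a 1-to-20 score might be derived from a model's output, the sketch below shows one common approach: binning predicted probabilities into ventiles estimated on historical referrals. It is a hypothetical illustration of that idea, not the county's actual implementation.

```python
# Illustrative sketch only: map a model's predicted probability to a
# 1-20 screening score by binning into ventiles (5% quantile bins)
# estimated on historical predictions. Hypothetical example, not the
# actual Allegheny Family Screening Tool code.
import numpy as np

def fit_score_bins(historical_probs: np.ndarray) -> np.ndarray:
    """Estimate the 19 cut points that split historical predicted
    probabilities into 20 equally populated bins (ventiles)."""
    return np.quantile(historical_probs, np.linspace(0.05, 0.95, 19))

def to_screening_score(prob: float, cut_points: np.ndarray) -> int:
    """Map a predicted probability to a score from 1 (lowest risk)
    to 20 (highest risk)."""
    return int(np.searchsorted(cut_points, prob, side="right")) + 1

# Example usage with simulated historical predictions
rng = np.random.default_rng(0)
cuts = fit_score_bins(rng.beta(2, 8, size=10_000))
print(to_screening_score(0.45, cuts))  # a probability well above the historical norm maps to a high score
```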
Agencies might want to explore an Allegheny-type PRM to address concerns that they are screening in too many low-risk families or screening out too many high-risk families. For example, Allegheny County was screening out around 25 percent of the highest-risk cases, and the leadership team was convinced that using data could help improve this.
By creating an objective “signal of risk” for each call, a PRM can also help call screening supervisors identify calls that need more oversight, training, and support and assist efforts to reduce inter-worker variation (inconsistency).
Since completing the tool in Allegheny County, which went live in August 2016, I have gone on to work on exploratory PRM projects (still in development) in Douglas County in Colorado and in California. Here are five lessons I have learned along the way.
Fully Integrated Data Is Not Necessary
Along with my colleague Emily Putnam-Hornstein from the Children’s Data Network at the University of Southern California, I have discovered that we can build an accurate and useful predictive model without fully integrated data.
So long as we can access a comprehensive, state-level child welfare data set with sufficient historical information, we can build an adequate predictive model.
Children in the U.S. tend to have high rates of contact with child welfare systems. Since about one in three children has some contact before age 18, there is a high chance the system data will hold relevant and useful history for a given individual.
Statewide data is the most useful because it offers a very large set of records and removes the problem of partial histories for families who move across county lines.
While we built the Allegheny Family Screening Tool using the county’s world-class integrated data system, this sort of linked, cross-sector data is the exception, not the rule, in the U.S. Larger state-level child welfare data sets provide a viable (and possibly more cost-effective) option.
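To make concrete what building from a state-level data set can look like, the sketch below derives a few simple history-based predictors from a hypothetical statewide referral file. The file name and columns (referrals.csv, child_id, referral_date, screened_in) are assumptions for illustration, not an actual state schema or our modeling code.

```python
# Illustrative sketch: build simple predictor variables from a statewide
# child welfare history file. Table and column names are hypothetical.
import pandas as pd

referrals = pd.read_csv("referrals.csv", parse_dates=["referral_date"])

features = (
    referrals
    .groupby("child_id")
    .agg(
        prior_referrals=("referral_date", "count"),   # how often the child appears in the data
        prior_screened_in=("screened_in", "sum"),     # referrals previously screened in
        first_contact=("referral_date", "min"),       # earliest system contact
        latest_contact=("referral_date", "max"),      # most recent system contact
    )
    .assign(
        years_known=lambda d: (d.latest_contact - d.first_contact).dt.days / 365.25
    )
)
```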
Frontline Practice and Priorities Must Lead
Getting to the heart of frontline priorities is a prerequisite to success.
While PRM is very flexible and can be used at a number of points during a case, from referral to placement and beyond, not all possible uses will be ethical or desirable.
Each model is built for a specific use and for a specific state or jurisdiction, and will be validated accordingly. So before embarking on building a PRM, it is important for the leadership of the county or state to set parameters on how it will – and will not – be used.
Established practice can run deeper than an agency – and certainly a researcher – is aware. So even a tool that is revolutionary on paper will not necessarily transform practice overnight. Rather than looking for high levels of change in frontline practice within a short time frame (say, monthly), we should look for a trend of continuous change in the right direction.
Ethics and Transparency Are Never “Done”
Ensuring governance and leadership around ethical considerations is not a one-off “tick the box” exercise. Ethical governance needs to be built into the agency for the lifetime of the tool; regular ethical reviews are essential for the maintenance of community support.
Transparency is another concern that will last as long as the project. It starts with engaging people potentially subject to and affected by the tool, and listening and responding to their concerns. As the project continues, transparency should be revisited often to make sure that the tool is understandable to the community, agency and frontline workers. If it is not transparent, it is hard to gain necessary trust and support.
Expect Methodology to Evolve
A natural evolution of methodology should be expected and encouraged, both before and after the implementation of a model. Looking carefully at the performance and usefulness of the model as it takes shape should prompt regular review of the choice of methodology. For example, in Allegheny County, we started out using a standard logistic regression approach but found through experimentation that a hybrid approach, using a variety of machine-learning techniques, delivered more accurate scores with minimal loss of transparency.
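The kind of comparison behind that decision can be illustrated with a small sketch: fit a baseline logistic regression and a machine-learning alternative on the same training data and compare predictive accuracy on a held-out set. The example below uses synthetic data and a single gradient-boosting model as a stand-in; it is not the Allegheny model-development code.

```python
# Illustrative sketch: compare a baseline logistic regression with a
# machine-learning alternative on held-out data. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated data with a rare positive outcome, loosely mimicking an
# imbalanced adverse-event prediction problem.
X, y = make_classification(n_samples=20_000, n_features=40, n_informative=10,
                           weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```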
Independent Evaluation Sharpens the Focus
The fact that a predictive model will be independently evaluated helps to build trust and support for the project. I have also noticed that committing to an independent evaluation forces researchers and the agency to be clear about what the tool is setting out to achieve from the start, creating an agreed-upon measure of success.
Rhema Vaithianathan is co-director of the Centre for Social Data Analytics at Auckland University of Technology, New Zealand, where she is also a professor of economics. She leads the international research team that developed the Allegheny Family Screening Tool and is currently working with collaborators to develop proof-of-concept risk modeling projects in other U.S. states.
Vaithianathan and several of her collaborators will present an in-depth briefing on predictive analytics for child welfare systems in the U.S. as part of a dedicated panel discussion at the Metrolab Network Annual Summit on Thursday, September 14, in Atlanta.