The Florida Department of Juvenile Justice is set to become the first state agency to conduct risk assessments using predictive analytics, a process that uses huge collections of data to predict outcomes and patterns.
It’s the sort of news that, on its face, excites at best a handful of stat geeks. But the decision could have major implications for providers of community-based juvenile justice services.
The research that prompted Florida’s decision suggests that significantly fewer juveniles would be slated for residential placements or incarceration under the new process. The new predictive model has superior ability to predict recidivism while also designating far fewer youth in risk categories that often lead to out-of-home placements.
The planned move appears to pave the way for reduced incarceration and increased spending on other alternatives without increased risk to the community.
From the paper co-written by the agency and a company called Algorhythm, analyzing the Florida research:
“Sizable reductions in residential care placements could lead to significant short- and long-term cost savings for the state and/or a freeing up of financial resources to be re-invested in preventative or community-based care.”
“The use of predictive analytics and machine learning will be a game changer in juvenile justice risk assessment,” said Algorhythm consultant Ira Schwartz, who led the federal Office of Juvenile Justice and Delinquency Prevention from 1979 to 1981. “Imagine what implications this has around the country.”
Risk, Scoring and Prediction
Risk assessments have been a part of juvenile justice decision-making since the 1970s. The first iterations involved highly subjective processes that hinged mostly on the view of the assessor or a review of a young person’s criminal history.
It is in the past 20 years that juvenile risk assessments have evolved into actuarial tools, which survey for answers about dozens of factors to establish a risk score. That score is designed to help probation departments and judges handle youth appropriately in the juvenile justice system.
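The actuarial approach described above can be sketched in a few lines: weighted answers are summed into a score, and cutoffs divide the score into risk bands. The factors, weights, and cutoffs below are invented for illustration; they are not the actual PACT items, which the article does not enumerate.

```python
# Hypothetical point-score actuarial tool: each answer contributes weighted
# points, and the total score is cut into risk bands. All weights and
# cutoffs here are illustrative, not PACT's real scoring.

WEIGHTS = {"prior_referrals": 3, "school_dropout": 2, "substance_use": 2}

# (upper bound of score, risk level); scores above the last bound are "high"
CUTOFFS = [(5, "low"), (10, "moderate"), (15, "moderate-high")]

def score(answers: dict) -> tuple[int, str]:
    """Sum weighted answers and map the total to a risk band."""
    total = sum(WEIGHTS[factor] * value for factor, value in answers.items())
    for upper_bound, level in CUTOFFS:
        if total <= upper_bound:
            return total, level
    return total, "high"
```

The appeal of this format, as Russell's comments later suggest, is transparency: anyone can see exactly which answers drove the score.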
Predictive analytics is a burgeoning corner of the business technology industry that uses current and historical data to predict future behavior. For business purposes, predictive models are built to interpret large sets of data and produce a probability. This is applied to anything from individual credit scores to customer purchasing patterns to vehicle fleet management.
Several child welfare agencies, including the Florida Department of Children and Families, have embraced predictive risk modeling to inform responses to abuse and neglect cases.
The Florida Department of Juvenile Justice (DJJ) Office of Research and Data Integrity started to use predictive analytics in 2010, but only for the purposes of research and evaluation. Office Chief Mark Greenwald had to assure stakeholders at the time that DJJ was not going to replace the state’s risk assessment tool – the Positive Achievement Change Tool (PACT) – with any predictive model.
“I think the concern earlier was the minority report aspect” of predictive analytics, Greenwald said, the idea that “we would give judges something saying, ‘This is gonna happen, so do this.’”
The method still has its skeptics when it comes to dictating process.
“Predictive analytics, more than anything, is an approach to learning,” said Jesse Russell, director of research for the National Council on Crime and Delinquency, which sells its own juvenile and child welfare risk assessment tool called Structured Decision Making (SDM). “We do predictive analytics in the development of our actuarial model,” Russell added. “But we still use an actuarial model.”
He gives two reasons why NCCD stands by an actuarial model. First is the “black box” aura around predictive analytics, the notion that a computer is taking all kinds of data points and processing an answer. The “freeing up of restrictions on how you work with data means you don’t have to understand whether things are causal,” he said.
Second is the removal of human involvement. “We have stood by the idea that the engagement that an actuarial tool drives is critical, that you have to seek out answers,” Russell said.
Greenwald and DJJ have been convinced otherwise. The agency intends to continue using its PACT questions in the future, but will move away from its traditional score-based format, which was developed in partnership with Assessments.com in 2005.
Instead, the information gathered will be routed through a predictive model known as a “decision tree.” The decision comes after research found that a predictive model yielded greater accuracy using the same information as the current PACT scoring process.
Algorhythm, a New York company that builds information technology for social services providers, conducted a controlled study using five years of DJJ data. Thousands of cases were put through the decision-tree predictive system to identify differences in both classification and accuracy.
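A “decision tree” differs from a point score in that a fitted model routes each case through a series of data-driven splits rather than summing weights. The toy tree below shows the shape of such a model; its features, thresholds, and probabilities are invented for illustration and are not the DJJ/Algorhythm model.

```python
# A minimal, hypothetical sketch of a fitted decision tree routing
# assessment answers to a risk level. Every feature, threshold, and
# probability here is invented; this is NOT the model in the DJJ paper.

def classify(answers: dict) -> tuple[str, float]:
    """Walk a hand-coded tree; return (risk level, predicted recidivism rate)."""
    if answers["prior_referrals"] <= 1:
        if answers["age_at_first_offense"] >= 16:
            return ("low", 0.10)
        return ("moderate", 0.25)
    if answers["prior_placements"] == 0:
        return ("moderate-high", 0.45)
    return ("high", 0.60)
```

In practice the splits are not hand-coded: a learning algorithm chooses, at each node, the question that best separates recidivists from non-recidivists in the historical data, which is why the same PACT answers can yield sharper predictions.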
From the paper written by DJJ and Algorhythm, shared with The Imprint before its publication:
“More youth were placed into appropriate risk levels, and recidivism prediction accuracy was increased. And, the placement of more youth into the lower risk levels was not accompanied by significant increases in the overall recidivism rates for the youth in these categories. This suggests that public safety would not be compromised if DJJ deployed the predictive analytics and machine learning algorithms.”
The change will make Florida DJJ the first juvenile justice agency to employ predictive analytics in frontline decision-making. Greenwald says there is no set timeline for the changeover, but that it will happen.
“The difference with predictive is it’s substantially more complex,” Greenwald said. “It’s a more advanced way of doing the scoring.”
He made the analogy of driving on the same highway in a car with a better engine. “There’s nothing wrong with [PACT],” Greenwald said. “There’s still an algorithm behind it. We’ll still use an actuarial instrument, we’ll just have enhanced information behind it.”
More Demand for Community-Based Providers
The shift toward a predictive risk assessment would not change how decisions are made for youth within a given risk category. But the research that prompted this change suggests that, under the new assessment, far fewer youth will land in the high and moderate-high risk categories.
Florida assigns juvenile offenders to four risk categories: low, moderate, moderate-high, and high. Youth classified as low- or moderate-risk are either diverted or assigned to a low-level intervention like probation.
Youth in the high-risk category are often placed in residential programs or incarcerated, Greenwald said, “provided that there was an attempt at community-based services first.”
Youth classified as “moderate-high” risk “could go either way,” Greenwald said.
The report includes a controlled comparison of the current PACT process and the predictive model. Both classified 72 percent of the youth as “low” risk level.
But the predictive analytics model classified 19 percent of the youth as “moderate,” compared with just 11 percent under the current model. And the predictive model assigned 8 percent of youth to the two higher categories, versus 17 percent under the current model.
NCCD’s Russell said that the greater number of youth in low-risk categories is not, in and of itself, persuasive from an accuracy standpoint.
“Any tool can be made to produce any distribution you want,” Russell said. “If that’s something they wanted to achieve and they achieved it, great.”
And, he suggested, higher accuracy can always be achieved by classifying fewer youth in higher-risk categories because, in the aggregate, most of the youth who have contact with the juvenile justice system do not return.
“I could be 80 percent accurate by saying ‘None of them are coming back,’” Russell said. “But how useful would that be?”
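Russell’s base-rate point can be made concrete with a few lines of arithmetic. The 20 percent reoffense rate and the 66-of-100 count below are hypothetical figures chosen to match the percentages quoted in this article, not actual DJJ data.

```python
def trivial_accuracy(base_rate: float) -> float:
    """Overall accuracy of the do-nothing rule "no one reoffends".

    With a low base rate of recidivism, this trivial rule looks accurate
    overall even though it flags no one for intervention.
    """
    return 1.0 - base_rate

def within_category_accuracy(reoffended: int, labeled: int) -> float:
    """The more telling metric: of youth given a label, the share for whom
    the prediction held -- e.g. labeled high risk and actually reoffended."""
    return reoffended / labeled

# A hypothetical 20% reoffense rate reproduces Russell's "80 percent accurate."
print(trivial_accuracy(0.20))  # 0.8

# Illustrative counts: 66 of 100 youth labeled high risk reoffend -> 66% accuracy
# within that category, the kind of figure the DJJ comparison reports.
print(within_category_accuracy(66, 100))  # 0.66
```

This is why the within-category accuracy figures, rather than the overall rate, are the fairer test of the two tools.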
That assertion seems to be countered somewhat by the fact that the predictive model was more accurate within each risk category, and significantly more accurate in the higher-risk categories. The predictive model was 52 percent accurate in the moderate-high category (37 percent for the current tool), and 66 percent accurate in the high-risk category (41 percent for the current tool).
John Kelly is senior editor for The Imprint.