
This week, the Biden administration released a proposed set of guidelines designed to limit the harms caused by artificial intelligence and algorithmic tools that are increasingly used by private companies and government entities, including child welfare agencies.
Released by the White House Office of Science and Technology Policy, the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” outlines five “common sense protections” for algorithms and other automated data use.
As these big data technologies have proliferated in recent years, their use has been linked to loss of privacy, increased surveillance and racial bias, harms that collectively “are threatening the rights of millions and hurting people in historically marginalized communities,” a White House blog post reads.
The blueprint highlights errant facial recognition tools that have resulted in wrongful arrests, algorithms that discriminate against loan seekers who attend a Historically Black College or University and automated data processes that have limited opportunities for women job seekers.
“Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services,” the report states.
Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy, told The Associated Press that better standards for the use of data and automated systems were needed across all sectors of society, in order “to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies.”
She added: “We can and should expect better and demand better from our technologies.”
Use of these tools is increasingly common in the child welfare field. Child welfare agencies in at least 26 states and Washington, D.C., have considered the use of risk-assessment tools, according to a recent report by the American Civil Liberties Union.
In at least 11 states, child welfare agencies have employed algorithms to predict which children are at higher risk of maltreatment by their parents, triage cases for review, and improve operations. For example, the Allegheny County Department of Human Services in Pennsylvania is using a tool to screen referrals for abuse and neglect that come into a hotline, while in Los Angeles County, an algorithm tells social workers which parents have “complex risk.”
The white paper mentions child welfare systems’ use of advanced data tools just once, suggesting that social workers and families involved in child maltreatment investigations deserve more information about the automated processes that may shape their cases.
“The lack of notice or an explanation makes it harder for those performing child maltreatment assessments to validate the risk assessment and denies parents knowledge that could help them contest a decision,” the blueprint reads.
Carnegie Mellon University researcher Logan Stapleton, who has studied racial disparities in the Allegheny County Family Screening Tool, called the bill of rights “much needed” to help inform a possible path toward the regulation of automated decision-making tools.
“It kind of feels like the Wild West at this point,” Stapleton told Youth Services Insider.
He hopes the bill of rights will help, but to date, child welfare systems have faced little accountability for how they use data, whether through advanced tools or more basic databases like child abuse registries.
“Until there’s specific policies that limit data usage and data collection and the kinds of surveillance technologies that can be built using that data, I don’t know that things will change immediately,” Stapleton said. “But this seems like it could be a step in the right direction.”
In an effort to limit the impact of algorithmic discrimination, the white paper calls for five non-binding principles to guide all artificial intelligence and automated systems: protection from unsafe or ineffective systems, safeguards against algorithmic discrimination, data privacy, notice and explanation when automated systems are used, and the ability to opt out of these tools in favor of help from a person.
The yearlong process to develop the blueprint included interviews and feedback from dozens of researchers, tech companies like Microsoft and IBM, civil society advocates, government officials and other parties.