Though I am barely literate in technology, I believe that the unfettered advance of artificial intelligence (AI) into child welfare has dangerous implications for how we interact with families. I wonder how our profession will behave when the bells and whistles of technology draw us toward shiny new objects and we have little or no criteria for distinguishing which technologies help and which harm.
We are learning more about how AI might impact our lives, including in human service settings. The short version: it’s frightening. Even the so-called godfathers of AI fear its implications. In the meantime, AI’s predecessors are already in our agencies, further destabilizing the fragile relationships we have with families and with residents of the communities we should be serving.

And yet there’s no clear, formal screen or standard for our profession on the use of AI, nothing to help us distinguish what might be helpful for families from what is not. That absence is also why we should be stepping up efforts to create smaller, personalized, community-based relational approaches for our practice, and for the systems of support that keep kids safe and boost family well-being, as a counterbalance.
This slow creep of technology began many years ago, when social service agencies started greeting clients with automated response systems. Those answering machines were to today’s AI what Pong is to today’s video games. Aside from the convenience for the agency, it seemed inappropriate on so many levels. The initial engagement process with clients is critical and makes a lasting impression. It was a subtle reminder that those with power can use that power to control as much of the process as possible — and it could happen again with an uninformed adoption of AI.
Since then, the state of tech and AI has evolved exponentially. That should lead us to ask ourselves a few questions about whether any given use of technology passes a family-friendly test.
Do we want to follow other professions whose use of AI is alienating their consumers? Our profession is among the last in which personal contact and relationship-building are still seen as essential. Our peers in medicine have seen their connections to patients become so depersonalized and mechanized that the notion of bedside manner is archaic. Their profession has become overrun with technology and corporate influences, leading to professional alienation among practitioners and dissatisfaction among patients.
Professionals in economic support programs like food stamps, Medicaid and subsidized housing offer their clients not much more than forms and check-box questions. Many offices refer to their employees as “clerks” whose job it is to navigate clients through the application process in systems that have become all too mechanized. Ironically, this works against much of what we are trying to do in developing a more holistic approach to the social, emotional and physical well-being of those whom we serve.
What do we gain from further eroding our already tenuous connection to communities? If we bolster our local efforts with more personalized approaches to service delivery, we reduce our need for technology that is built for a bigger scale.
What should we do while we wander through the unknown? Part of my caution is wrapped up in the adage that trust is won in drops and lost in buckets. For a profession that trades on trust, how much control are we willing to cede to flawed, impersonal tools whose mistakes can change the course of a family’s life?
Think about the ongoing controversy over jurisdictions using algorithms to assess risk and safety, including the legitimate concerns about racial, class and disability bias. How do we expect AI to create trusting relationships with families when, by all accounts, we cannot assure the objectivity of these tools? Furthermore, for decades the standard of the helping relationship has been the opportunity for the caseworker and the family to go deeper in their knowledge of each other. No tool can do that.
In a brilliant article in The Atlantic, author Adrienne LaFrance writes, “We should resist over-reliance on tools that dull the wisdom of our own aesthetics and intellect.” Our profession has made plenty of mistakes over the decades, but we moved to right ourselves through wisdom, compassion and a commitment to social justice.
Is a full-scale infusion of AI the best way to spend our limited resources? Make no mistake: part of what will drive the decision to open the door wider to AI technology in child welfare agencies will be the financial incentives for consulting firms and technology companies whose bottom lines benefit from our saying yes.
They are there to sell us. Is anyone else watching the current valuation of tech company stocks? Does anyone remember what we spent on Statewide Automated Child Welfare Information Systems, and what some states are paying now to replace them? How many misfires did we have while agencies burned through millions of dollars, bringing windfalls to the Fortune 500 and very little to families?
And what if we rush toward a future that seemingly makes us more efficient and promises to cover the roles for which we can find no staff? On the surface, it will seem like a value-add for an organization. One administrator with whom I recently spoke was excited because his team members were getting individualized virtual training that resembled some sort of video game.
Very cool? Yes. Expensive and trendy? Also yes. Drastically reducing our ability to pay front-line team members and nonprofit agencies more? Yes again.
Let’s be certain what we are paying for when we are offered the fancy new tools. Invest in AI and technology that is driven by our practice, not the other way around. Resist the over-engineered gizmos that might seem fascinating to us, but not to families in crisis.
Finally, how can we use real intelligence to create and scale up effective, common-sense, community-based models that prioritize family safety and family support? Our dilemma lies in our subpar customer service approach — the result of outdated and inconsistent practice models — as well as faulty assumptions about families and our inability to connect with isolated families facing the greatest challenges. Do we think AI or related technology will help with that?
Any technology we adopt should be chosen in conjunction with families, who should both understand and value tools that actively assist them in accomplishing their goals of self-sufficiency, safety and stability, and in enhancing the helping experience. Centralizing records so families aren’t asked duplicative, repetitive questions by multiple systems comes to mind as an example.
Streamlining the work alone doesn’t make it better. Efficiency isn’t necessarily effective. Let’s not get distracted from our current desire to build what Casey Family Programs calls Communities of Hope. The more personal our approach, the more likely we are to be taken seriously by those whom we serve.