
Why Are Child Welfare Advocates Sabotaging Data-Driven Efforts to Protect Children?

by Joshua New

Of all the social challenges the public sector works to overcome, ensuring the wellbeing of children is undoubtedly among the most important. Consider that in 2014, 702,000 children were abused or neglected in the United States and 1,580 children died as a result. So when Los Angeles County announced in July 2015 that it would begin using predictive analytics to help its social workers quickly identify the most at-risk children and help the county prioritize its efforts to deliver services more efficiently, child welfare advocates should have rejoiced.

Instead, this and similarly promising approaches have encountered substantial opposition from a surprising number of advocates who worry that using data to assist with child welfare investigations would perpetuate racial discrimination, violate human rights, and ultimately cause more harm than good. Not only are these critics fundamentally wrong, but their resistance to using data to improve outreach and assistance is quite likely jeopardizing the wellbeing of children.

The data analytics tool that L.A. County is piloting, called the Approach to Understanding Risk Assessment (AURA), is straightforward: AURA automatically aggregates data from various county agencies, including the departments of health, education, and corrections, to calculate a score from 0 to 1000 indicating the level of risk a child faces based on factors known to be correlated with abuse.

For example, if AURA detects that a child makes frequent emergency room visits, changes schools, and lives with a family member with a history of drug abuse, AURA will warn county social workers that he or she has an elevated risk score. In the process of creating AURA, the software developer SAS tested its algorithms on historical data and found that if L.A. County had implemented the technology in 2013, AURA would have flagged 76 percent of cases that resulted in a child's death with a very high risk score, which could have prompted an investigation that may have helped prevent a tragedy. Social workers already analyze this data in their investigations, but they must collect it manually and then use their own judgment to determine whether to launch an investigation.
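To make the mechanics concrete, the sketch below shows what a weighted risk-scoring decision-support tool of this general kind might look like. The factor names, weights, and data fields are hypothetical illustrations invented for this example; they are not SAS's actual AURA model, and the real system's scoring method is not described in this article beyond its 0-to-1000 scale.

```python
# Hypothetical illustration of a weighted risk-scoring decision-support tool.
# The factors and weights below are made up; only the 0-1000 scale follows
# the article's description of AURA.

from dataclasses import dataclass

# Illustrative weights for factors the article says are correlated with abuse.
RISK_WEIGHTS = {
    "frequent_er_visits": 300,
    "recent_school_changes": 200,
    "household_drug_abuse_history": 350,
    "prior_welfare_referrals": 150,
}

@dataclass
class ChildRecord:
    """Flags aggregated automatically from agency data (health, education, corrections)."""
    frequent_er_visits: bool = False
    recent_school_changes: bool = False
    household_drug_abuse_history: bool = False
    prior_welfare_referrals: bool = False

def risk_score(record: ChildRecord) -> int:
    """Return a 0-1000 score; higher means more known risk factors are present."""
    raw = sum(weight for factor, weight in RISK_WEIGHTS.items()
              if getattr(record, factor))
    return min(raw, 1000)

# A social worker would see the score alongside the underlying factors and
# decide whether to investigate. The score itself triggers nothing.
example = ChildRecord(frequent_er_visits=True, household_drug_abuse_history=True)
print(risk_score(example))  # 650 under these made-up weights
```

The point of the sketch is the division of labor: the software aggregates records and surfaces a score, while the decision to open an investigation stays with the social worker.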

One of the most common criticisms of this approach is that automating risk analysis could promote racial discrimination, such as by facilitating racial profiling by social workers. However, automating this process is actually one of AURA's biggest benefits, as it reduces the potential for biased, subjective human decision-making to enter the equation. In fact, many jurisdictions require social workers to manually log data they think is relevant to a case—a system easily manipulated by social workers who may choose to enter only the data that will produce their desired outcome, or by abusive caregivers, as these systems rely heavily on self-reported data rather than data from official sources. By automatically aggregating the information, county officials can better ensure that social workers rely on objective, complete assessments to guide their investigations.

Other criticism demonstrates a fundamental misunderstanding of how the technology actually works, likening it to the dystopian sci-fi thriller Minority Report, in which police officers can arrest people when psychics predict they will commit a crime. For example, Richard Wexler, executive director of the nonprofit National Coalition for Child Protection Reform, argues these systems lead to a dramatic increase in “needless investigations and needless foster care,” causing more harm to children than would be prevented. This line of criticism overlooks the fact that AURA and other systems like it are merely decision support systems—that is, software designed to help people do their jobs more effectively by making more informed decisions—not software that is intended to replace human decision-making.

In this case, social workers still would be the ones making real-world judgments about the best interests of at-risk children. AURA does not initiate an investigation when a child's risk score hits a certain point; it simply automates the information gathering and analysis that social workers already perform for every investigation, and does so faster and more comprehensively than a human ever could. If Wexler is concerned that social workers conduct needless investigations, he should be advocating for more and better analytics, not less.

Fear of using data to support decision-making is not new. In fact, it has already prevented at least one effort to use this type of system to improve child-protection efforts. Last year in New Zealand, Social Development Minister Anne Tolley blocked an initiative to study the accuracy of a system similar to AURA on the grounds that "children [participating in the study] are not lab rats." The study would have assigned risk scores to newborns and monitored outcomes after two years so researchers could ensure the model was reliable.

Tolley objected to the fact that social workers could not act on these scores, since doing so would have skewed the outcome of the study. But by that logic, Tolley should also object to clinical drug trials, which require rigorous, untampered testing before a drug can be approved for the public. Tolley incorrectly assumed social workers would have to stand by and watch children be abused just so the predictive model could be verified. In truth, standard intervention procedures still would have been in effect.

Surprisingly, much of the opposition has come from child welfare advocates, such as Wexler, Tolley, and the director of the L.A.-based community activist organization Project Impact. Some of the most vocal opposition to New Zealand's attempt to test this approach came from Deborah Morris-Travers, New Zealand advocacy manager for the United Nations Children's Fund (UNICEF). Morris-Travers said that calculating risk scores for newborns and monitoring them to see if these scores were reliable somehow constituted a "gross breach of human rights."

But Morris-Travers’ concern is misplaced. The gross breach of human rights is the child abuse that is occurring, and refusing to explore how predictive analytics could help social workers better understand and curb the problem does a terrible disservice to the victims. Morris-Travers’ comments are particularly confounding considering that UNICEF directly credited the increased use of data and analytics as the reason it has been able to make so much progress in helping children. In fact, UNICEF’s 2014 report, “The State of the World’s Children,” clearly states that “[c]redible data about children’s situations are critical to the improvement of their lives—and indispensable to realizing the rights of every child.”

If these advocates want to prevent child abuse, they should be championing innovative efforts and technologies that show great potential to do so, not fighting them. Of course, child welfare agencies should closely monitor these programs’ effectiveness, not implement them blindly. If testing reveals that these systems are ineffective or detrimental, then policymakers should of course seek alternate strategies or work to improve them. But given the scale of the need and opportunity to improve children’s welfare, slowing experimentation with predictive analytics would be incredibly detrimental. As an increasing number of government officials recognize the potential of this approach, they should be careful not to give credence to advocates more fearful of data than they are concerned about the welfare of children.

This article originally appeared in The Chronicle of Social Change.

