Data Isn't Objective

An Algorithm Is Flagging Black Homes For Child Welfare Investigations

The screening tool is being used by child welfare services to find neglect, but social workers disagree with its results one-third of the time.

by Maggie Clancy

Algorithms rule everything around us, and now they have a growing hand in deciding which families child welfare services investigates for neglect, even though the tools are far from perfect and researchers have found that they show racial bias.

It’s far from the first time that algorithms meant to help have ended up wreaking havoc. From the infamous echo chambers of the 2016 elections to targeted Instagram ads to the keywords media outlets are allowed to monetize, algorithms have a heavy hand in framing the information we consume, internalize, and turn into our own set of biases.

Associated Press’ latest installment of its series Tracked, which “investigates the power and consequences of decisions driven by algorithms on people’s everyday lives,” illustrates another powerful and inherently problematic use of machine learning algorithms, one that unfairly targets kids and families of certain races.

According to new research from Carnegie Mellon University obtained exclusively by AP, a predictive algorithm used by child welfare services in Allegheny County, Pennsylvania, showed a pattern of flagging a disproportionate number of Black children for “mandatory” neglect investigation, compared to their white counterparts.

The researchers also discovered that social workers who investigated these flagged cases disagreed with the risk assessment produced by the algorithm, called the Allegheny Family Screening Tool (AFST), a whopping one-third of the time.

That is to say, social workers agreed with the tool only about two-thirds of the time. If this algorithm were to receive a grade, it would be a 67%: a D+.

It’s difficult to pinpoint exactly what about the algorithm is problematic. As Vox’s Rebecca Heilweil noted in “Why algorithms can be racist and sexist,” it’s nearly impossible to see which part of an algorithm’s initial coding made it susceptible to producing and rapidly replicating bias.

“Typically, you only know the end result: how it has affected you, if you’re even aware that AI or an algorithm was used in the first place,” Heilweil notes.

The same is true in the case of the Allegheny algorithm. There is no transparent way for the public to see which factors carry more weight than others in this algorithm designed to detect cases of child neglect. (The algorithm is not used in cases of physical or sexual abuse, which are investigated separately.)

The algorithm focuses on “everything from inadequate housing to poor hygiene,” a nebulous criterion that could, in theory, cover everything from how often a child brushes their teeth to whether or not the child has a set bedtime.

It also uses an alarming amount of personal data collected from birth, like Medicaid records, substance abuse histories, and jail and probation records. These data are already primed for racial bias, given that they come from institutions steeped in white supremacy, like the carceral system.

The importance of these individual factors isn’t decided by some objective, unbiased computer: the weights are chosen by programmers, who are, in fact, very human and come with their own set of inherent biases.
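To make that concrete, here is a minimal, purely illustrative sketch of how a weighted risk score works. It is not the AFST, whose features and weights are not public; every feature name, weight, and value below is hypothetical.

```python
# Purely illustrative sketch, NOT the Allegheny Family Screening Tool.
# Every feature name, weight, and value here is hypothetical; the point
# is that a person has to decide what counts and how much it counts.

FEATURE_WEIGHTS = {
    "prior_welfare_referrals": 2.0,
    "medicaid_enrollment": 0.5,
    "substance_abuse_history": 1.5,
    "jail_or_probation_record": 1.8,
    "housing_instability_flag": 1.2,
}

def risk_score(family_record: dict) -> float:
    """Sum each recorded factor multiplied by its human-chosen weight."""
    return sum(
        weight * float(family_record.get(name, 0))
        for name, weight in FEATURE_WEIGHTS.items()
    )

# A family whose records come mostly from public systems (Medicaid, jail,
# probation) accumulates a higher score even if each weight looks "neutral."
example_family = {
    "prior_welfare_referrals": 1,
    "jail_or_probation_record": 1,
    "medicaid_enrollment": 1,
}
print(risk_score(example_family))  # 2.0 + 1.8 + 0.5 = 4.3
```

Nothing in a formula like this is neutral: deciding which records to include and how heavily to weight them is a human judgment at every step.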

This has tech accountability advocates worried, since machine learning algorithms can make a lot of decisions very quickly and therefore not only replicate but exacerbate economic, social, and racial injustice. If algorithms like the AFST are used without humans double-checking their output, mistakes can pile up quickly and irrevocably for the families they affect.
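In practice, “humans double-checking” means putting a review step between the model’s score and any action. The sketch below is a hypothetical illustration of that idea, not Allegheny County’s actual workflow; the threshold and function names are invented.

```python
# Hypothetical human-in-the-loop gate: a score alone never triggers an
# investigation; a screener reviews every flagged case and can overrule it.
MANDATORY_REVIEW_THRESHOLD = 4.0  # invented cutoff, for illustration only

def screen_referral(score: float, screener_agrees: bool) -> str:
    """Route high-scoring referrals to a human, who makes the final call."""
    if score < MANDATORY_REVIEW_THRESHOLD:
        return "not flagged"
    return "investigation opened" if screener_agrees else "screened out by reviewer"

# With social workers disagreeing about a third of the time, this override
# step is where many of the tool's flags get caught.
print(screen_referral(4.3, screener_agrees=False))  # screened out by reviewer
```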

According to Public Citizen, a nonprofit consumer advocacy organization, algorithmic bias from these “black box” algorithms has very real impacts on people of color in nearly every facet of their lives. For example, communities of color pay 30% more for car insurance than white communities with similar accident costs, thanks to a predictive algorithm. Social media apps like TikTok and Instagram have been lambasted by Black creators whose content is often erroneously taken down by the platforms’ algorithms.

These black box algorithms are like the part of Fantasia in which Mickey serves as the sorcerer’s apprentice. To make his task quicker, Mickey programs a broom to do his job, carrying buckets of water from the well, much like we do with machine learning algorithms.

The broom quickly learned the task, made more of itself to become more efficient, and continued to replicate the task with increasing speed. Left unchecked, it led to a destructive mess.

Child welfare services in other locations across the country are using algorithms like the one in Allegheny County, and it’s very possible they have the same flaws and tendencies. The results could do more harm than good.