Rep. Alexandria Ocasio-Cortez, speaking at an event in January 2019 honoring the legacy of Dr. Martin Luther King Jr., said, “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.” Though her comments were correct—algorithms can indeed reflect and exhibit human bias—Rep. Ocasio-Cortez’s framing of the intersection of algorithms and fairness highlighted an often-ignored issue in progressive politics. The political movement, defined in part by its commitment to social justice, is unsurprisingly critical of the potential for algorithms, particularly AI, to facilitate discrimination, yet it seemingly pays little attention to the ways in which algorithms can actually reduce discrimination. Rather than calling for the technology to be avoided, progressives should be pushing for greater use of AI to promote equity and fairness.
There are countless examples. In 2013, the Consumer Financial Protection Bureau (CFPB) found that Ally Financial was engaging in loan discrimination, charging African American, Hispanic, Asian, and Pacific Islander borrowers between $200 and $300 more for car loans than white borrowers. CFPB found that this occurred because workers at car dealerships were, either unconsciously or deliberately, offering minority customers interest rates notably higher than the rate recommended by Ally Financial’s algorithm. Neither the car dealership workers nor Ally Financial’s algorithm were legally permitted to use data about race to determine loan pricing, yet some humans did anyway, while the algorithm’s interest rates were based on objective criteria such as creditworthiness. Had car dealerships been prevented from marking up the algorithmically generated interest rates, hundreds of thousands of minority borrowers would not have been exploited because of their race.
Algorithms also provide the opportunity to increase fairness in society more broadly. For example, ankle-worn GPS monitors are commonly used in the United States as part of monitoring agreements for people on parole, probation, or house arrest. However, the fees and stigma associated with ankle monitors can exact a heavy financial and emotional toll, with one person assigned to wear a monitor criticizing the device as “a rope around my neck.” Here algorithms can help, too. E-identity company OptimumID has piloted its BridgeID software—which uses a smartphone app, facial recognition, and identity verification algorithms—in Bell County, Texas, to supplement ankle monitors by tracking the location of people on parole, probation, or house arrest without the stigma of ankle bracelets and at significantly reduced cost. And Aware, a biometrics company, has developed a similar technological solution. Such algorithm-driven approaches that replace ankle monitors with virtual check-ins were made possible by the First Step Act, recent federal legislation aimed at reducing recidivism.
There are many such examples of algorithms being used to achieve progressive goals that would be impossible, if not extremely difficult, to achieve with human decision-making alone. Over the past several months, California judges and attorneys have been clearing the criminal records of tens of thousands of people convicted of cannabis-related offenses using a tool called Clear My Record, which uses algorithms to analyze court files and scanned documents. While people with cannabis convictions can have their records expunged in California, only a very small percentage pursue this, because seeking expungement is a daunting and time-consuming manual process. Clear My Record allowed attorneys to identify eligible records and proactively clear them in minutes, increasing social and economic opportunities for thousands of Californians.
Progressives are absolutely right to continue highlighting cases where organizations use algorithms that reflect or amplify societal biases. Yet all too often, the explicit or implied conclusion of these conversations is to avoid algorithms until society manages to eradicate these biases. Instead, progressives should recognize that simply avoiding algorithms does nothing to reduce the biases that permeate all aspects of human decision-making; it merely maintains the status quo, leaving society no better off. Vast potential exists to use algorithms to eradicate these biases more quickly than social justice activism alone, and progressives should recognize that pursuing this potential is worthy of their political capital.
As progressive politicians push policies premised on the view that algorithms are inherently riskier to social justice than human decisions, they should be aware that this approach undermines progressive values. The solution to algorithmic bias is not fewer algorithms, but better algorithms.