By Will Knight
MIT Technology Review, July 12, 2017 —
The big companies developing them show no interest in fixing the problem.
Opaque and potentially biased mathematical models are remaking our lives—and neither the companies responsible for developing them nor the government is interested in addressing the problem.
This week a group of researchers, together with the American Civil Liberties Union, launched an effort to identify and highlight algorithmic bias. The AI Now Initiative was announced at an event held at MIT to discuss what many experts see as a growing challenge.
Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI.
If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities.
The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).
Algorithms that may harbor hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who is granted parole, and who gets a loan.
The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.
“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”
Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.
Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in many contexts, says people are often too willing to trust mathematical models because they believe these models will remove human bias. “[Algorithms] replace human processes, but they’re not held to the same standards,” she says. “People trust them too much.”
A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and aren’t transparent about how they operate. O’Neil says, for example, she is concerned about how the algorithms behind Google’s new job search service work.
O’Neil previously worked as a professor at Barnard College in New York and a quantitative analyst at the company D. E. Shaw. She is now the head of Online Risk Consulting & Algorithmic Auditing, a company set up to help businesses identify and correct biases in the algorithms they use. But O’Neil says even those who know their algorithms are at risk of bias are more interested in the bottom line than in rooting out bias. “I’ll be honest with you,” she says. “I have no clients right now.”
O’Neil, Crawford, and Whittaker all warn that the Trump administration’s lack of interest in AI—and in science generally—means there is no regulatory movement to address the problem (see “The Gaping, Dangerous Hole in the Trump Administration”).
“The Office of Science and Technology Policy is no longer actively engaged in AI policy—or much of anything, according to their website,” Crawford and Whittaker write. “Policy work now must be done elsewhere.”