By Jessica Guynn

USA TODAY, April 16, 2019.

SAN FRANCISCO — Facial recognition systems frequently misidentify people of color. Lending tools charge higher interest rates to Hispanics and African Americans. Sentencing algorithms discriminate against black defendants. Job hunting tools favor men. Negative emotions are more likely to be assigned to black men’s faces than to white men’s. Computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.

The use of artificial intelligence, which combs through vast amounts of our personal data in search of patterns, is rapidly expanding in critical parts of Americans’ daily lives such as education, employment, health care and policing. Increasingly, powerful artificial intelligence tools determine who gets into school, who gets a job, who pays a higher insurance premium.

Yet a growing body of research shows that these technologies are rife with bias and discrimination, mirroring and amplifying real-world inequalities. A study scheduled to be released Wednesday by New York University’s AI Now Institute identifies a key reason why: The people building these technologies are overwhelmingly white and male.

Artificial intelligence technologies are developed mostly in major tech companies such as Facebook, Google, Amazon and Microsoft, and in a small number of university labs, all of which tilt white, affluent and male and, in many cases, are only getting more so. Only by adding more women, people of color and other underrepresented groups can the field address that bias and create more equitable systems, says Meredith Whittaker, a report author and co-founder of the AI Now Institute.

“The problem of a lack of diversity in tech is obviously not new but it’s reached a new and urgent inflection point. The number of women and people of color in the AI sector has decreased at the same time that the sector is establishing itself as a nexus of wealth and power,” Whittaker says. “In short, the problem here is that those in the room when AI is built, and those who are benefiting from the rapid proliferation of AI systems, represent an extremely narrow segment of the population. They are mainly men, they are mainly technically educated and they are mainly white. This is not the diversity of people that are being affected by these systems.”

“Both within the spaces where AI is being created, and in the logic of how AI systems are designed, the costs of bias, harassment, and discrimination are borne by the same people: gender minorities, people of color, and other underrepresented groups. Similarly, the benefits of such systems, from profit to efficiency, accrue primarily to those already in positions of power, who again tend to be white, educated, and male,” the NYU study, a year in the making, found.

The study, “Discriminating Systems: Gender, Race, and Power in AI,” comes as scrutiny of AI intensifies. 

Massachusetts Institute of Technology facial recognition researcher Joy Buolamwini on Feb. 13, 2019, at the school, in Cambridge, Mass. Her research has uncovered racial and gender bias in facial analysis tools sold by companies such as Amazon that have a hard time recognizing certain faces, especially darker-skinned women. Buolamwini holds a white mask she had to use so that software could detect her face. (Photo: Steven Senne, AP)

For years, tech companies could not deliver on the industry’s ambitious promises of what hyper-intelligent machines could do. Today, AI is no longer the stuff of science fiction. Machines can recognize objects in a photograph or translate an online post into dozens of languages. And they are getting smarter all the time, taking on more sophisticated tasks.

Tech companies, AI researchers and industry groups cast AI in a positive light, pointing to the possibility of advances in such critical areas as medical diagnosis and personalized medicine. But as these technologies proliferate, so do the alarm bells.

People often think of computer algorithms and other automated systems as neutral or scientific, but research is increasingly uncovering how AI systems can harm underrepresented groups and those with less power. Anna Lauren Hoffmann, an assistant professor at The Information School at the University of Washington, describes this as “data violence”: data science that harms some groups far more than others.

The NYU researchers say machines learn from and reinforce historical patterns of racial and gender discrimination.

Last year, Amazon had to scrap a tool it built to review job applicants’ resumes because it discriminated against women. Earlier this month, more than two dozen AI researchers called on Amazon to stop selling its facial recognition technology to law enforcement agencies, arguing it is biased against women and people of color.

Google’s speech recognition software has been dinged for performing better for male or male-sounding voices than female ones. In 2015, Google’s image-recognition algorithm was caught auto-tagging pictures of black people as “gorillas.” Last year, transgender drivers for Uber whose appearances had changed were temporarily or permanently suspended because of an Uber security feature that required them to take a selfie to verify their identity.

Other companies use AI to scan employees’ social media for “toxic behavior” and alert their bosses, or to analyze job applicants’ facial movements, tone of voice and word choice to predict how well they would do the job. Predictim analyzes babysitters’ online activity to produce ratings of which candidates are more likely to abuse drugs or bully.

The Gender Shades Project, led by Joy Buolamwini, pilots an intersectional approach to inclusive product testing for AI. (Photo: MIT Media Lab)

Leading the charge in raising awareness of the dangers of bias in AI is Massachusetts Institute of Technology researcher Joy Buolamwini. Her research and advocacy have prompted Microsoft and IBM to improve their facial recognition systems and have drawn fire from Amazon, which has attacked her methodology. Her work has also spurred some in Congress to try to rein in the largely unregulated field as pressure mounts from employees at major tech companies and the public.

Last week, Democratic lawmakers introduced first-of-their-kind bills in the Senate and the House that would require big companies to audit artificial intelligence systems such as facial recognition for bias, a concept known as “algorithmic accountability.” The bills were introduced just weeks after Facebook was sued by the Department of Housing and Urban Development, which charged that the social media giant’s ad-targeting system let advertisers exclude protected groups from seeing housing ads.

San Francisco is considering banning city agencies from using facial recognition. Privacy laws in Texas and Illinois require anyone recording biometric data, including facial recognition, to give people notice and obtain their consent. The Trump administration has made developing “safe and trustworthy” algorithms one of the key objectives of the White House’s AI initiative.

The NYU researchers say it’s critical for the AI field to diversify the homogeneous group of engineers and researchers building these automated systems. Yet the gender gap in computer science is widening.

As of 2015, women made up 18 percent of computer science majors in the U.S., down from a high of 37 percent in 1984. Women make up less than one quarter of the computer science workforce and receive median salaries that are just 66 percent of their male counterparts’, according to the National Academies of Sciences, Engineering, and Medicine. The number of bachelor’s degrees in engineering awarded to black women declined 11 percent between 2000 and 2015.

The problem is even more acute in AI. Most speakers and attendees of machine learning conferences and 80 percent of AI professors are men, research shows. Women account for 15 percent of AI research staff at Facebook and 10 percent at Google. While there is very little public data on racial diversity in AI, anecdotal evidence suggests that the gaps are even wider, the study says.

Last month, when Stanford University unveiled an artificial intelligence institute with 120 faculty and technology leaders billed as representing humanity, not a single one was black. Boards created by tech companies to examine the ethics of artificial intelligence also lack members from underrepresented groups.

Google announced one such board, an “external advisory council” on AI ethics, last month. NAACP president and CEO Derrick Johnson complained that the new body “lacks a qualified member of the civil rights community.” “This is offensive to people of color & indicates AI tech wouldn’t have the safeguards to prevent implicit & racial biases,” he wrote on Twitter. Google later scrapped the advisory council.

Current efforts to attract and retain underrepresented groups in AI are not cutting it, the study warned.

The push to bring more women into tech is too narrow and is “likely to privilege white women over others.” Arguments focused on a recruiting or “pipeline” problem ignore pernicious issues in corporate and university work cultures, including power imbalances, harassment, exclusionary hiring practices and unequal compensation, that drive women and people of color from AI or dissuade them from joining the field in the first place, researchers say.

Among the study’s recommendations: publish compensation levels broken down by race and gender, and end pay and opportunity inequality; produce harassment and discrimination transparency reports; change hiring practices to increase the number of people of color, women and other underrepresented groups at senior leadership levels, and create pathways for contractors, temps and vendors, who tend to come from more diverse backgrounds, to become full-time employees; and tie executive incentives to increases in the hiring and retention of underrepresented groups.

“To tackle the diversity crisis and to address AI bias, we need to look beyond technical fixes for social problems. We need to look at who has power, we need to ask who is harmed, we need to look at who benefits and we need to look at, ultimately, who gets to decide how these tools are built and which purposes they serve,” Whittaker says. “If the AI industry wants to change the world then it needs to get its own house in order first.”
