
Ask the Expert: How do racial and gender biases in AI come to be?

September 30, 2022
Claire Stevenson (Psychological Methods)
David Amodio (Social Psychology)
Claire Stevenson’s (Psychological Methods) question:

Dear David,

You are a social neuroscientist who investigates the psychological and neural bases of prejudice. You also apply this knowledge to the field of AI – how do racial or gender biases in, for example, facial recognition software or generative language models like GPT-3 come to be? Of course, some of these biases are due to the data used to train these AI models. How do you think the shift in AI toward few-shot learners, i.e., models that try to generalize from only a handful of training examples, will affect AI models in terms of biased output?

Claire

David Amodio’s (Social Psychology) answer:

Dear Claire,

Thanks for your question! Indeed, for 20 years, I’ve studied how racial prejudice operates in the mind, brain, and behavior, but recently, I’ve become extremely interested in how prejudice can operate in artificial intelligence (AI) and have potentially drastic effects on society.

On the surface, ‘algorithmic bias’ sounds like an oxymoron: if an algorithm merely reflects the data, how could it possibly be biased? Yet mounting evidence reveals that algorithms used in criminal justice, health care, and education, among other areas, seem to recapitulate prejudices typically found in human decision making.

Despite the perception that algorithms are accurate reflections of reality, nearly every aspect of their creation, implementation, and consumption is guided by human decisions, and thus vulnerable to human social and cognitive biases. Furthermore, because these human decision points are often obscured, biased algorithmic outputs may be utilized unwittingly and without correction—a process that can result in the propagation of existing societal prejudices and inequities.

Although there’s been lots of research and commentary on the presence of bias in AI and its impact on society, little is known about how exactly human prejudices make their way into algorithms. This is an area where I think psychology—especially social cognition—may be critical.

My lab has been addressing these issues in a few ways. In one project, we showed that gender bias in Google image search reflects the degree of gender inequality in a society, and that exposure to these gender-biased search outputs leads people to think and act in ways that reinforce this inequality (Vlasceanu & Amodio, 2022). This work demonstrates a troubling cycle of bias propagation between society, AI, and users.

In other work, we’re tracing the sources of human racial bias in face recognition algorithms, from the creation and selection of face training sets and the way models are trained and evaluated, to how human users consume algorithmic outputs and use them in decision making.
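
To make one of these pathways concrete, here is a minimal sketch (a toy illustration with synthetic data, not our actual research pipeline): when one demographic group is heavily overrepresented in a training set, a standard classifier fits that group well and errs systematically on the underrepresented group.

```python
# Toy sketch: a classifier trained on a demographically imbalanced dataset
# tends to perform worse on the smaller group. All data here is synthetic;
# the group structure and numbers are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, center):
    """Synthetic two-class data for one group, with a group-specific
    decision boundary (class depends on position relative to the center)."""
    X = rng.normal(center, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * center).astype(int)
    return X, y

# Group A is heavily overrepresented in the training data (90% vs. 10%).
Xa, ya = make_group(9000, center=0.0)
Xb, yb = make_group(1000, center=2.0)

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array([0] * len(ya) + [1] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

# A single model is fit to everyone; it is dominated by the majority group.
clf = LogisticRegression().fit(X_tr, y_tr)

for g, name in [(0, "overrepresented group"), (1, "underrepresented group")]:
    mask = g_te == g
    print(f"{name}: accuracy = {clf.score(X_te[mask], y_te[mask]):.2f}")
```

Running this, the majority group's accuracy is high while the minority group's hovers near chance, even though nothing "biased" was coded anywhere: the skew comes entirely from who was in the training set.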

And in another line of work, we’re examining sources of bias in natural language processing models—the kinds of AI that drive internet search engine results, auto-completion, and computer-generated text. These models are trained on massive collections of digitized text, which—surprise!—are imbued with the prejudices and stereotypes of human writers. We’re looking at how decisions regarding which texts to digitize and select for training sets, and the historical periods from which they come, can influence the degree of bias in the algorithm.
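
The underlying mechanism is statistical: language models learn from co-occurrence patterns, so stereotyped text yields stereotyped associations. A minimal sketch (the five-sentence "corpus" is made up for illustration, not a real training set) shows how even raw co-occurrence counts encode the stereotypes of the writers:

```python
# Toy sketch: co-occurrence counts in a tiny, made-up corpus already
# encode gendered occupation stereotypes; models trained on such
# statistics inherit them. Not a real NLP pipeline.
corpus = [
    "she worked as a nurse at the hospital",
    "the nurse said she would check the chart",
    "he worked as an engineer at the plant",
    "the engineer said he would fix the bridge",
    "she is a nurse and he is an engineer",
]

def cooccurrence(word, pronoun):
    """Count sentences in which `word` and `pronoun` appear together."""
    return sum(1 for s in corpus
               if word in s.split() and pronoun in s.split())

for occupation in ("nurse", "engineer"):
    she = cooccurrence(occupation, "she")
    he = cooccurrence(occupation, "he")
    print(f"{occupation}: she={she}, he={he}")

# Output mirrors the corpus's stereotypes:
# nurse: she=3, he=1
# engineer: she=1, he=3
```

At web scale, the same counting logic runs over billions of sentences, which is why the choice of which texts—and which eras of text—enter the training set matters so much.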

Broadly, we’re trying to build a psychology of algorithmic bias that can interface with approaches to the topic from fields like sociology, communications, and computer science.

David

David Amodio’s question is for Astrid Homan (Work and Organisational Psychology)

Dear Astrid,

In recent years, the lack of diversity in academia has received growing attention, relating to broader concerns about equity and diversity in society. Yet many people in academia—especially those in power—seem reluctant to take serious action. Based on your research, what would you say to such a person? What has your research taught us about the benefits of diversity to organizations, in addition to its benefits for society and members of underrepresented groups?

David

