Capturing Fluidity: Gender Bias in AI and Empathy Standards

A team of research engineers, scientists, and librarians from the Los Alamos National Laboratory’s Research Library is tackling the human problems that come with machine learning. More specifically, they are developing Natural Language Processing systems that generate predictive text and handle other language tasks. Supervised machine learning, as explained by Powell et al., trains AI on pre-approved sets of text whose patterns are already discernible, commonly pulled from decades’ worth of websites and newspapers such as the New York Times. Training on such large collections of historical material folds several decades of linguistic and cultural bias into the model. Even though cultural beliefs about gender have since shifted, this historical material continues to perpetuate old ideas.
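As a rough illustration of that point (a hypothetical sketch, not code from the paper), the snippet below trains word embeddings with the gensim Word2Vec implementation on a handful of made-up sentences. Whatever associations the model ends up with can only come from that training text, which is exactly how a model trained on decades of news inherits that era’s biases:

```python
from gensim.models import Word2Vec

# A tiny, made-up "historical" corpus: every association the model can learn
# is already present in these sentences.
sentences = [
    ["she", "stayed", "home", "and", "did", "the", "cooking"],
    ["she", "did", "the", "cleaning", "every", "day"],
    ["he", "went", "to", "work", "at", "the", "office"],
    ["he", "studied", "science", "and", "engineering"],
]

# Train word embeddings on nothing but that corpus (toy settings for a toy example).
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=1, workers=1)

# The nearest neighbors of "she" and "he" can only reflect the text above;
# scale this up to decades of newspapers and the same inheritance happens.
print(model.wv.most_similar("she", topn=3))
print(model.wv.most_similar("he", topn=3))
```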

The Los Alamos team aimed to mitigate gender bias in word embeddings, which they explore through a visualization that plots terms in a 3D space according to how strongly they are associated with one another. Measuring the angle between two word vectors yields a similarity score: the more acute the angle, the more closely the words are associated. Through their web-based application, the Word Embedding Navigator (WEN), the team provides a tool for moving individual plot points farther from or closer to one another to reduce bias in post-processing. For example, points for traditionally female and male names can be repositioned so that they sit at equal distances from gender-stereotyped terms, such as cooking and cleaning, evening out the association between gender and activity.
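As a rough illustration of the geometry involved (a minimal sketch with made-up three-dimensional vectors, not the WEN tool or the paper’s code; cosine_similarity and neutralize are hypothetical helpers), the snippet below measures angle-based association and then applies a simple projection so that a stereotyped term ends up equally associated with “she” and “he”:

```python
import numpy as np

def cosine_similarity(u, v):
    """Angle-based association: near 1 means a very acute angle (strong association),
    near 0 means roughly orthogonal (little association)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def neutralize(word_vec, bias_direction):
    """Remove the component of word_vec that lies along bias_direction,
    leaving the word equally associated with both ends of that axis."""
    unit = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, unit) * unit

# Made-up 3D vectors standing in for learned word embeddings.
embeddings = {
    "she":     np.array([ 1.0, 0.1, 0.2]),
    "he":      np.array([-1.0, 0.1, 0.2]),
    "cooking": np.array([ 0.7, 0.9, 0.1]),  # skewed toward "she" along the first axis
}

# A gender direction estimated from a definitional pair of terms.
gender_direction = embeddings["she"] - embeddings["he"]

print("before:",
      cosine_similarity(embeddings["cooking"], embeddings["she"]),
      cosine_similarity(embeddings["cooking"], embeddings["he"]))

embeddings["cooking"] = neutralize(embeddings["cooking"], gender_direction)

print("after: ",
      cosine_similarity(embeddings["cooking"], embeddings["she"]),
      cosine_similarity(embeddings["cooking"], embeddings["he"]))
```

After the projection, “cooking” sits at the same angle from “she” and “he.” WEN leaves this kind of adjustment to a person moving points by hand rather than to a single formula, which is part of why human judgment, and human bias, stays in the loop.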

Figure 1a from “Teaching AI When to Care about Gender”: “Principal component plots for the terms art, science, and religion showing their nearest neighbors in the New York Times corpus.”

Because information professionals must capture language and cultural beliefs that are constantly in flux, this project showcases adaptive tools for neutralizing bias within AI systems. Although the public perception may be that AI creates new information, it is more accurate to say that AI is trained on batches of already-existing data, with all of that data’s pre-existing biases, in order to perform tasks. If harmful biases are taught to an AI system, those biases carry through everything the system produces. In other words, AI only knows what you teach it.

The Los Alamos project points to the need not only to combat biases in AI systems but also to acknowledge the biases of the professionals working with those systems, so that they can effectively neutralize bias in post-processing. Because WEN relies on a highly subjective process of picking out which words to neutralize, it also opens up more opportunities for bias to enter during post-processing. As we create more ethical information systems, we dispel harmful ideas about what gender should be, allowing us to realistically cover the broad scope of human experience and expression. With easy-to-handle neutralization tools and an emphasis on self-acknowledgment of bias, information professionals can be better equipped to produce neutral information systems.

The team’s project revealed the extent of care and human intervention that must go into AI development, pushing for what I would like to call empathy standards. This research on neutralizing gender bias could extend further to other social identities, such as race, class, and sexuality, whose stereotypes perpetuate harmful ideas about groups of people. Without cultural and political acknowledgment of the identities of the people whose data we work with, we fail to mitigate the harm created by that ignorance. As Powell et al. point out, WEN works best on smaller-scale datasets and is currently used as an exploratory tool. Although many larger organizations and corporations may churn out datasets and visualizations without accounting for bias, the development of WEN shows the extent to which bias still exists within our language and technology, even if the tool cannot yet be used at large scale.

These developments from the Los Alamos National Laboratory bring to mind further questions about AI bias. Although it is useful to be able to edit word embedding visualizations to mitigate bias, this model of post-process editing demands meticulous labor and time, since every single point on the map may need to be moved. Are there possibilities for bias neutralization during processing? And how does this project extend to trans and nonbinary identities, along with complex intersections of race, disability, class, and more? Powell et al. acknowledge this absence in the study as inherent to the objective nature of science:

“References to binary gender identities and to gender stereotypes are based on historical data and are referenced in order to provide clear examples for the purpose of potential bias mitigation techniques applied to data consumed by machine learning algorithms.”

WEN points to the harsh reality that data is treated as an object even when it represents real lives. But its use in promoting empathy standards provides a stepping stone toward acknowledging the wide scope of human experience.

Powell, James, et al. “Teaching AI When to Care about Gender.” The Code4Lib Journal, no. 54, 29 Aug. 2022. https://journal.code4lib.org/articles/16718.

This reflection was written for INFO 654: Information Technologies, taught by Meg Wacha.