TY - GEN
T1 - Gender Biases in Tone Analysis
T2 - 2023 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO 2023
AU - Yeung, Christina
AU - Iqbal, Umar
AU - Kohno, Tadayoshi
AU - Roesner, Franziska
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/10/30
Y1 - 2023/10/30
N2 - In addition to being a health and fitness band, the Amazon Halo offers users information about how their voices sound, i.e., their 'tones'. The Halo's tone analysis capability leverages machine learning, which can lead to potentially biased inferences. We develop an auditing framework to evaluate the Amazon Halo's tone analysis capabilities for gender biases. Our results show that the Halo exhibits statistically significant gender biases when the same emotion is conveyed by professional women and men actors through their recorded voices. For example, we find that over 75% of the words used by the Halo to describe men's emotions are positive, whereas fewer than 50% of the words used by the Halo to describe women's voices are positive. The Halo describes women as being 'angry', 'disappointed', 'uncomfortable', and 'annoyed' more often than men (adjectives with negative valence). The Halo describes men as being 'knowledgeable', 'confident', and 'focused' more often than women (adjectives with positive valence). Overall, our findings underscore that even commercially deployed ML models for day-to-day consumer use exhibit strong biases.
AB - In addition to being a health and fitness band, the Amazon Halo offers users information about how their voices sound, i.e., their 'tones'. The Halo's tone analysis capability leverages machine learning, which can lead to potentially biased inferences. We develop an auditing framework to evaluate the Amazon Halo's tone analysis capabilities for gender biases. Our results show that the Halo exhibits statistically significant gender biases when the same emotion is conveyed by professional women and men actors through their recorded voices. For example, we find that over 75% of the words used by the Halo to describe men's emotions are positive, whereas fewer than 50% of the words used by the Halo to describe women's voices are positive. The Halo describes women as being 'angry', 'disappointed', 'uncomfortable', and 'annoyed' more often than men (adjectives with negative valence). The Halo describes men as being 'knowledgeable', 'confident', and 'focused' more often than women (adjectives with positive valence). Overall, our findings underscore that even commercially deployed ML models for day-to-day consumer use exhibit strong biases.
UR - https://www.scopus.com/pages/publications/85177829944
U2 - 10.1145/3617694.3623241
DO - 10.1145/3617694.3623241
M3 - Conference contribution
AN - SCOPUS:85177829944
T3 - ACM International Conference Proceeding Series
BT - Proceedings of 2023 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO 2023
PB - Association for Computing Machinery
Y2 - 30 October 2023 through 1 November 2023
ER -