Reducing Racial and Ethnic Bias in AI Models: A Comparative Analysis of ChatGPT and Google Bard

Author(s): Tavishi Choudhary
Subject(s): Media studies, ICT Information and Communications Technologies
Published by: Scientia Moralitas Research Institute
Keywords: data bias; digital law; diversity; ethical artificial intelligence
Summary/Abstract: In the US, 53% of adults acknowledge racial bias as a significant issue, 23% of Asian adults experience cultural and ethnic bias, and more than 60% conceal their cultural heritage after racial abuse (Ruiz 2023). AI models such as ChatGPT and Google Bard, trained on historically biased data, inadvertently amplify racial and ethnic bias and stereotypes. This paper addresses racial bias in AI models through a scientific, evidence-based analysis and auditing process that identifies biased responses and develops a mitigation tool. The methodology involves compiling a comprehensive database of racially biased questions, terms, and phrases drawn from thousands of legal cases, Wikipedia, and surveys; testing these prompts on the AI models; and analyzing the responses through sentiment analysis and human evaluation. The result is 'AI-BiasAudit,' a tool with a racial-ethnic database that helps social science researchers and AI developers identify and prevent racial bias in AI models.

  • Page Range: 115-124
  • Page Count: 10
  • Publication Year: 2024
  • Language: English