Assessing political bias and value misalignment in generative artificial intelligence

Fabio Y. S. Motoki, Valdemar Pinho Neto, Victor Rodrigues

Research output: Contribution to journal › Article › peer-review

Abstract

Our analysis reveals a concerning misalignment of values between ChatGPT and the average American. We also show that ChatGPT displays political leanings when generating text and images, but the degree and direction of skew depend on the theme. Notably, ChatGPT repeatedly refused to generate content representing certain mainstream perspectives, citing concerns over misinformation and bias. As generative AI systems like ChatGPT become ubiquitous, such misalignment with societal norms poses risks of distorting public discourse. Without proper safeguards, these systems threaten to exacerbate societal divides and depart from principles that underpin free societies.
Original language: English
Article number: 106904
Journal: Journal of Economic Behavior & Organization
Early online date: 4 Feb 2025
DOIs
Publication status: E-pub ahead of print - 4 Feb 2025

Keywords

  • AI governance
  • Generative AI
  • Large language models
  • Multimodal
  • Societal values
