Digital dissonance: large language models' unbalanced political narrative

Fabio Yoshio Suguri Motoki, Valdemar Pinho Neto, Victor Rodrigues

Research output: Working paper › Preprint


Abstract

Our analysis reveals a concerning misalignment of values between ChatGPT and the average American. We also show that ChatGPT displays political leanings when generating text and images, though the degree and direction of the skew depend on the theme. Notably, ChatGPT repeatedly refused to generate content representing certain mainstream perspectives, citing concerns over misinformation and bias. As generative AI systems like ChatGPT become ubiquitous, such misalignment with societal norms poses a risk of distorting public discourse. Without proper safeguards, these systems threaten to exacerbate societal divides and depart from the principles that underpin free societies.
Original language: English
Publisher: SSRN
Publication status: Published - 27 Mar 2024

Keywords

  • Generative AI
  • Societal values
  • Large language models
  • Multimodal
  • AI governance
