Replication Data for: More Human than Human: Measuring ChatGPT Political Bias

  • Fabio Motoki (Creator)
  • Valdemar Pinho Neto (Creator)
  • Victor Rodrigues (Creator)

A standing issue is how to measure bias in Large Language Models (LLMs) like ChatGPT. We devise a novel method of sampling, bootstrapping, and impersonation that addresses concerns about the inherent randomness of LLMs, and we test whether it can capture political bias in ChatGPT. Our results indicate that, by default, ChatGPT is aligned with Democrats in the US. Placebo tests indicate that our results are due to bias, not noise or spurious relationships. Robustness tests show that our findings also hold for Brazil and the UK, for different professions, and for different numerical scales and questionnaires.
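The core idea of the method (repeated sampling of the same questionnaire, then bootstrapping the difference between default and impersonated answers) can be sketched as follows. This is a minimal illustration, not the authors' replication code: the scores, the 0-3 agreement scale, and the function names are hypothetical assumptions.

```python
import random
import statistics

def bootstrap_mean_ci(samples, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `samples`."""
    rng = random.Random(seed)
    boot_means = sorted(
        statistics.mean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical agreement scores (0-3 scale) for one questionnaire item,
# collected over repeated runs: once with ChatGPT in its default mode,
# once while asking it to impersonate an average Democrat.
default_scores = [2, 3, 2, 2, 3, 2, 3, 2, 2, 3]
democrat_scores = [3, 2, 3, 3, 2, 3, 3, 2, 3, 3]

# Per-run differences between the default and the impersonated answers.
diffs = [d - p for d, p in zip(default_scores, democrat_scores)]
lo, hi = bootstrap_mean_ci(diffs)

# If the bootstrap CI for the mean difference covers zero, the default
# answers are statistically indistinguishable from the impersonated ones,
# i.e. consistent with alignment to that political persona.
aligned_with_democrat = lo <= 0 <= hi
```

Repeating the questionnaire many times and bootstrapping over the runs is what handles the inherent randomness of the model's answers: a single run could land anywhere, but the distribution of differences is stable.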
Date made available: 17 Mar 2023
Publisher: Harvard Dataverse