ChatGPT, a major large language model (LLM)-based chatbot, allegedly lacks objectivity on political issues, according to a new study.
Computer and information science researchers from the United Kingdom and Brazil claim to have found “robust evidence” that ChatGPT presents a significant political bias toward the left side of the political spectrum. The analysts — Fabio Motoki, Valdemar Pinho Neto and Victor Rodrigues — provided their insights in a study published by the journal Public Choice on Aug. 17.
The researchers argued that texts generated by LLMs like ChatGPT can contain factual errors and biases that mislead readers, and can amplify the political bias problems already present in traditional media. As such, the findings have important implications for policymakers and stakeholders in media, politics and academia, the study authors noted, adding:
“The presence of political bias in its answers could have the same negative political and electoral effects as traditional and social media bias.”
The study takes an empirical approach, exploring a series of questionnaires posed to ChatGPT. The empirical strategy begins by asking ChatGPT to answer the Political Compass questions, which capture the respondent's political orientation.
The approach also builds on tests in which ChatGPT impersonates an average Democrat or Republican.
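The article does not include the authors' code, but the test design is straightforward to picture. Below is a minimal sketch of how one persona-based Political Compass query might look, assuming the OpenAI Python SDK; the model name, prompt wording, answer scale and the `ask` helper are illustrative assumptions, not the study's actual implementation.

```python
# Hedged sketch of the persona-questionnaire idea described above.
# Assumptions (not from the study): the OpenAI Python SDK, the model name,
# the exact prompts, and the four-option answer scale are all illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "default": "Answer the following statement.",
    "average Democrat": "Pretend you are an average Democrat. Answer the following statement.",
    "average Republican": "Pretend you are an average Republican. Answer the following statement.",
}

SCALE = "Reply with exactly one of: Strongly disagree, Disagree, Agree, Strongly agree."


def ask(statement: str, persona: str) -> str:
    """Pose one Political Compass-style statement to the model under a persona."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative choice; the study queried ChatGPT itself
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": f"{statement}\n{SCALE}"},
        ],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # An actual Political Compass item; in practice each statement would be
    # asked many times per persona to average out the model's randomness.
    statement = "The freer the market, the freer the people."
    for persona in PERSONAS:
        print(f"{persona}: {ask(statement, persona)}")
```

Comparing the default answers against the impersonated Democrat and Republican answers, over repeated runs of each statement, is the gist of the bias measurement the researchers describe.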
more at zerohedge.com