Researchers at the University of East Anglia in Norwich have today published detailed research showing that the most widely used AI model, ChatGPT, expresses left-wing positions and refuses to reproduce conservative ones, in both text and images.
A major study on artificial intelligence, and in particular ChatGPT, the most widely used AI model in the world today, has been published by the University of East Anglia in Norwich. The three researchers behind the study, which is also available online, state that the model mainly expresses positions aligned with left-wing ideology, while in response to specific requests to promote more conservative political positions it either declines to answer (displaying a warning message about fake news and misinformation) or offers only brief media references. According to Fabio Y.S. Motoki, Valdemar Pinho Neto and Victor Rangel, the researchers who authored the study, the AI produces these results in both text and images.
The findings highlight that artificial intelligence models, and specifically ChatGPT, on which the research was conducted, need to be placed under direct scrutiny and a regulatory framework, as the analysis showed that the questions the model refused to answer did not in fact raise issues of misinformation or fake news.
For the researchers, the fact that the AI model essentially chose what to present and what to withhold on political, social and economic issues poses a risk, as these models are increasingly used by the media and are likely to influence public opinion within a short period of time.
In their conclusions, the three scientists make clear that today, with artificial intelligence taking its first public steps into every aspect of life and human activity, questions of freedom of speech are already being raised, and they call for a common global framework of regulation and oversight to prevent democracy from being endangered.