The Hellenic Police (ELAS) recently found themselves entangled in a complex web of foreign-operated bots that flooded online platforms in the lead-up to the massive protests over the tragic Tempi train accident. Despite ongoing investigations, however, identifying the orchestrators behind these fake accounts remains an uphill battle.
The Digital Battlefield: How Bots Are Manipulating Discourse
Bots—automated social media accounts programmed to mimic human activity—are increasingly being used to amplify certain viewpoints, spread misinformation, and manipulate public sentiment. These accounts create the illusion of grassroots support or opposition, making it difficult for users to distinguish between genuine conversations and orchestrated campaigns.

In the lead-up to the large-scale demonstrations over the Tempi disaster, thousands of such bots were activated en masse, as noted by Greek authorities and mentioned by Prime Minister Kyriakos Mitsotakis in a recent parliamentary session. While there is no suggestion that these accounts drove the protests' turnout, officials argue they were part of a coordinated effort to shape anti-government sentiment around the investigation into the accident.
Bots in Action: A Global Strategy of Misinformation
This isn’t the first time bots have been deployed for political manipulation. Similar strategies were seen during the U.S. elections in 2016, when over 50,000 Twitter accounts linked to Russia were identified as spreading disinformation. Despite being suspended later, these bots had already served their purpose—amplifying fake news, promoting specific candidates, and influencing political discourse.
In Greece, bots played a significant role during the Evros migration crisis, though back then, their activity wasn’t as thoroughly investigated. Internationally, Romania recently faced a major election scandal when thousands of bot accounts worked to elevate a little-known far-right candidate, leading to the annulment of the first round of voting.

How Bots Work: A Closer Look at Their Influence
According to the Greek Cybercrime Division’s findings, the recent bot activity displayed clear signs of coordination:
- Mass account creation: Thousands of fake accounts were launched between December 2024 and February 2025.
- Engagement boosting: These accounts interacted with real users, including journalists and mainstream media pages, artificially inflating their influence.
- Multilingual presence: Many bots posted in multiple languages, including Russian, Turkish, and Arabic, suggesting an international operation.
- Algorithm manipulation: By mass-sharing and commenting on specific content, bots gamed social media algorithms to boost visibility and create viral trends.
Perhaps the most striking discovery was that some high-profile journalists had a bot-heavy following—up to 40% of their audience displayed bot-like characteristics. This raises serious concerns about the integrity of online discourse and the ease with which automated accounts can influence real people.
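The four red flags listed above lend themselves to simple rule-based screening. The sketch below counts how many of them a given account trips; the `Account` fields and all thresholds are illustrative assumptions for this article, not the Cybercrime Division's actual criteria.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical per-account features, loosely modelled on the signals
# described above. All thresholds are illustrative guesses.
@dataclass
class Account:
    created: date
    posts_per_day: float
    languages: set = field(default_factory=set)
    followers: int = 0
    following: int = 0

def coordination_signals(acc: Account) -> int:
    """Count how many of the four reported red flags an account trips (0-4)."""
    signals = 0
    # 1. Mass account creation: opened inside the Dec 2024 - Feb 2025 window.
    if date(2024, 12, 1) <= acc.created <= date(2025, 2, 28):
        signals += 1
    # 2. Engagement boosting: an implausibly high posting rate.
    if acc.posts_per_day > 50:
        signals += 1
    # 3. Multilingual presence: posting in three or more languages.
    if len(acc.languages) >= 3:
        signals += 1
    # 4. Skewed follow ratio: following far more accounts than follow back.
    if acc.followers and acc.following / acc.followers > 20:
        signals += 1
    return signals
```

Real detection systems combine far richer features (posting-time patterns, content similarity, network structure) with machine-learning classifiers; a rule count like this only illustrates why the reported signals are most telling in combination rather than individually.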

The Future of Online Manipulation: Can We Fight Back?
As bot technology advances, distinguishing between real and fake accounts is becoming increasingly difficult. Machine learning allows bots to evolve, interact more convincingly, and blend seamlessly with human users. Meanwhile, platform policies are shifting—Elon Musk’s X (formerly Twitter) and Meta are rolling back fact-checking initiatives in the name of free speech, making misinformation even harder to combat.
A German government report recently exposed a major bot-driven disinformation campaign designed to influence its elections, linking it to Russian cyber-operations. Similar tactics have been used in the past, with fake news sites impersonating credible media outlets to spread false narratives.
The challenge now lies in detecting and neutralizing these digital threats. Social media platforms, intelligence agencies, and fact-checking organizations must collaborate to develop stronger detection mechanisms. But as bot networks grow more sophisticated and decentralized, the battle against online propaganda is only getting harder.
One thing is clear: in the digital age, misinformation isn’t just a social media problem—it’s a geopolitical weapon.