Alarmed by the growing risks posed by generative artificial intelligence (AI) platforms like ChatGPT, regulators and law enforcement agencies in Europe are looking for ways to slow humanity’s headlong rush into the digital future.
With few guardrails in place, ChatGPT, which responds to user queries in the form of essays, poems, spreadsheets and computer code, has recorded over 1.6 billion visits since December. Europol, the European Union Agency for Law Enforcement Cooperation, warned at the end of March that ChatGPT, just one of thousands of AI platforms currently in use, can assist criminals with phishing, malware creation and even terrorist acts.
“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps,” the Europol report stated. “As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime and child sexual abuse.”
Last month, Italy slapped a temporary ban on ChatGPT after a glitch exposed user files. The Garante, Italy's privacy watchdog, threatened the program's creator, OpenAI, with millions of dollars in fines for privacy violations unless it addresses questions of where users' information goes and establishes age restrictions on the platform. Spain, France and Germany are looking into complaints of personal data violations, and this month the EU's European Data Protection Board formed a task force to coordinate regulations across the 27-country European Union.
“It’s a wake-up call in Europe,” EU legislator Dragos Tudorache, co-sponsor of the Artificial Intelligence Act, which is being finalized in the European Parliament and would establish a central AI authority, told Yahoo News. “We have to discern very clearly what is going on and how to frame the rules.”