Google’s new AI has learned to become “highly aggressive” in stressful situations

Is this how Skynet starts?

Newsroom February 14 12:35

Late last year, famed physicist Stephen Hawking issued a warning that the continued advancement of artificial intelligence will either be “the best, or the worst thing, ever to happen to humanity”.

We’ve all seen the Terminator movies and the apocalyptic nightmare that the self-aware AI system Skynet wrought upon humanity. Now, results from recent behaviour tests of Google’s new DeepMind AI system are making it clear just how careful we need to be when building the robots of the future.

In tests late last year, Google’s DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world’s best Go players at their own game.

It’s since been figuring out how to seamlessly mimic a human voice.

Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it’s about to lose, it opts for “highly aggressive” strategies to ensure that it comes out on top.

The Google team ran 40 million turns of a simple ‘fruit gathering’ computer game that asks two DeepMind ‘agents’ to compete against each other to gather as many virtual apples as possible.

They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.

Interestingly, if an agent successfully ‘tags’ its opponent with a laser beam, no extra reward is given. It simply knocks the opponent out of the game for a set period, which allows the successful agent to collect more apples.

If the agents left the laser beams unused, they could theoretically end up with equal shares of apples, which is what the ‘less intelligent’ iterations of DeepMind opted to do.
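The incentive structure described above can be illustrated with a toy simulation. This is not DeepMind’s code or environment; the tag-out duration, apple spawn rate, and tagging probability below are invented values, chosen only to show why tagging pays off even though it carries no direct reward:

```python
import random

# Toy sketch of the Gathering incentive (assumed parameters, not DeepMind's):
# tagging earns nothing by itself, but removes the rival for TAG_OUT steps,
# leaving every apple that appears in the meantime to the tagger.

TAG_OUT = 25       # steps a tagged agent is out of play (assumed)
STEPS = 1000
APPLE_RATE = 0.3   # chance an apple appears each step (the scarcity knob)

def run(aggressive: bool, seed: int = 0) -> tuple:
    rng = random.Random(seed)
    scores = [0, 0]
    out_until = [0, 0]   # step until which each agent is knocked out
    for t in range(STEPS):
        active = [i for i in (0, 1) if t >= out_until[i]]
        # An aggressive agent 0 occasionally tags agent 1 when both are in play.
        if aggressive and active == [0, 1] and rng.random() < 0.1:
            out_until[1] = t + TAG_OUT
            active = [0]
        if active and rng.random() < APPLE_RATE:
            winner = rng.choice(active)   # a random active agent grabs the apple
            scores[winner] += 1
    return tuple(scores)

print("peaceful:  ", run(False))
print("aggressive:", run(True))
```

With a peaceful policy the apples split roughly evenly; once agent 0 starts tagging, it collects the lion’s share while its rival sits out, which is the “greed motivation” the researchers describe.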

It was only when the Google team tested more and more complex forms of DeepMind that sabotage, greed, and aggression set in.

As Rhett Jones reports for Gizmodo, when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence.

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion’s share of virtual apples.

You can watch the Gathering game in the video below, with the DeepMind agents in blue and red, the virtual apples in green, and the laser beams in yellow:

Now those are some trigger-happy fruit-gatherers.

The researchers suggest that the more intelligent the agent, the better able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

“This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning,” one of the team, Joel Z Leibo, told Matt Burgess at Wired.

“Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents – two of them played as wolves, and one as the prey.

Unlike Gathering, this game actively encouraged co-operation, because if both wolves were near the prey when it was captured, they both received a reward – regardless of which one actually took it down:

“The idea is that the prey is dangerous – a lone wolf can overcome it, but is at risk of losing the carcass to scavengers,” the team explains in their paper.

“However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward.”
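The payoff rule the team describes can be sketched as a small function. Again, this is a hypothetical toy version, not the paper’s implementation: the capture radius and the reward values are assumed, and only the shape of the rule — every wolf near the prey at capture time is rewarded, and the reward is higher when two wolves are nearby — comes from the quotes above:

```python
# Toy sketch (assumed values) of Wolfpack's cooperative payoff rule:
# any wolf within CAPTURE_RADIUS of the prey when it is captured gets the
# reward, and the reward is larger when at least two wolves are nearby.

CAPTURE_RADIUS = 2.0   # assumed
LONE_REWARD = 1.0      # a lone wolf risks losing the carcass to scavengers
PAIR_REWARD = 5.0      # two wolves protect the carcass, so both earn more

def wolfpack_rewards(wolf_positions, prey_position):
    """Return one reward per wolf under this (assumed) capture rule."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    nearby = [dist(w, prey_position) <= CAPTURE_RADIUS for w in wolf_positions]
    reward = PAIR_REWARD if sum(nearby) >= 2 else LONE_REWARD
    # Every wolf close enough at capture time receives the same reward.
    return [reward if near else 0.0 for near in nearby]

print(wolfpack_rewards([(0, 0), (9, 9)], (1, 0)))  # lone capture
print(wolfpack_rewards([(0, 0), (1, 1)], (1, 0)))  # joint capture
```

Because a joint capture pays each wolf more than a solo one, the reward structure itself pushes the agents toward co-operation, which is exactly the contrast with Gathering.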

So just as the DeepMind agents learned from Gathering that aggression and selfishness netted them the most favourable result in that particular environment, they learned from Wolfpack that co-operation can also be the key to greater individual success in certain situations.

And while these are just simple little computer games, the message is clear – put different AI systems in charge of competing interests in real-life situations, and it could be an all-out war if their objectives are not balanced against the overall goal of benefitting us humans above all else.

Think traffic lights trying to slow things down, and driverless cars trying to find the fastest route – both need to take each other’s objectives into account to achieve the safest and most efficient result for society.

It’s still early days for DeepMind, and the team at Google has yet to publish their study in a peer-reviewed paper, but the initial results show that, just because we build them, it doesn’t mean robots and AI systems will automatically have our interests at heart.

Instead, we need to build that helpful nature into our machines, and anticipate any ‘loopholes’ that could see them reach for the laser beams.

As the founders of OpenAI, Elon Musk’s new research initiative dedicated to the ethics of artificial intelligence, said back in 2015:

“AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task.

It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”


Tread carefully, humans…
