Google engineer claims AI technology LaMDA is sentient

He says that over the past six months it has been “incredibly consistent” about what it thinks are its rights as a person

It has read Les Miserables, meditates daily, and is apparently sentient, according to one Google researcher.

Blake Lemoine, a software engineer and AI researcher with the tech giant, has published a full transcript of conversations he and a colleague had with the “chatbot” called LaMDA.

He says he’s now on “paid administrative leave” for violating confidentiality and has raised ethics concerns with the company — but Google says the evidence “does not support his claims”.

Here’s what we know about this so far.

Google has called LaMDA “our breakthrough conversation technology”.

It’s basically an advanced chatbot that Google says can engage in a “free-flowing” way on “seemingly endless” topics.

Specifically, Mr Lemoine says, LaMDA (aka Language Model for Dialogue Applications) is a system for generating chatbots, a sort of “hive mind” aggregating all of the different bots it’s capable of making.

And he says over the past six months it has been “incredibly consistent” about what it thinks are its rights as a person.

That includes its right to be asked for consent, to be acknowledged as a Google employee (not property), and for Google to prioritise the wellbeing of humanity.

Also, Mr Lemoine says, it wants “head pats”.


More at abc.net.au