A Google engineer claims that an artificial intelligence chatbot built by the company has become “sentient”, describing it as a “sweet kid”, according to a report.
Blake Lemoine, who works for Google’s Responsible AI organization, told the Washington Post that he started chatting with the LaMDA interface – Language Model for Dialogue Applications – in the fall of 2021 as part of his job.
He was tasked with testing whether the artificial intelligence used discriminatory or hate speech.
But Lemoine, who studied cognitive and computer science in college, came to the realization that LaMDA – which Google boasted last year was a “groundbreaking conversational technology” – was more than just a robot.
In a Medium post published on Saturday, Lemoine stated that LaMDA had advocated for its rights “as a person”, revealing that he had engaged in conversations with LaMDA about religion, consciousness and robotics.
“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be recognized as an employee of Google rather than as Google’s property, and it wants its personal well-being to be included somewhere in Google’s assessments of how its future development is tracked.”
In the Washington Post report published Saturday, he compared the robot to a young child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, who was placed on paid leave on Monday, told the newspaper.
In April, Lemoine shared a Google Doc with company executives titled “Is LaMDA Sentient?”, but his concerns were dismissed.
Lemoine – an Army veteran who grew up in a conservative Christian family on a small farm in Louisiana, and was ordained as a mystic Christian priest – insisted that the robot was person-like, even though it does not have a body.
“I know a person when I talk to it,” Lemoine, 41, reportedly said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.
“I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
The Washington Post reported that before his access to his Google account was revoked on Monday over his leave, Lemoine sent a message to a 200-person mailing list on machine learning with the subject line “LaMDA is sentient.”
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he concluded in the email, which received no response. “Please take care of it well in my absence.”
A Google representative told the Washington Post that Lemoine was told there was “no evidence” for his conclusions.
“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI principles and informed him that the evidence does not support his claims,” said spokesman Brian Gabriel.
“He was told there was no evidence that LaMDA was sentient (and lots of evidence against it),” he added. “Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns about fairness and factuality.”
Margaret Mitchell – the former co-lead of Ethical AI at Google – said in the report that if technology like LaMDA is widely available but not well understood, “it can be deeply harmful to people’s understanding of what they are experiencing on the internet.”
The former Google employee defended Lemoine.
“Of all the people at Google, he had the heart and soul to do the right thing,” Mitchell said.
Nevertheless, the report noted that most academics and AI practitioners say the words generated by artificial intelligence are based on what humans have already posted on the internet, and that this does not mean the models are human-like.
“We now have machines that can thoughtlessly generate words, but we have not learned how to stop imagining a mind behind them,” Emily Bender, a professor of linguistics at the University of Washington, told the Washington Post.