Google fired Blake Lemoine, the engineer who said LaMDA was sentient


Blake Lemoine, the Google engineer who told the Washington Post that the company’s artificial intelligence was sentient, said the company fired him on Friday.

Lemoine said he received a termination email from the company on Friday along with a request for a videoconference. He asked for a third party to attend the meeting, but he said Google refused. Lemoine says he is talking with lawyers about his options.

Lemoine worked for Google’s Responsible AI organization and, as part of his job, started talking to LaMDA, the company’s artificial intelligence system for building chatbots, in the fall. He came to believe the technology was sentient after signing up to test whether artificial intelligence could use discriminatory or hate speech.


In a statement, Google spokesman Brian Gabriel said the company takes AI development seriously, has subjected LaMDA to 11 reviews, and has published a research paper detailing its responsible-development efforts.

“If an employee shares concerns about our work, as Blake did, we review them thoroughly,” he added. “We found Blake’s claims that LaMDA is sentient to be completely unfounded and worked to clarify this with him for many months.”

He attributed the months of discussion with Lemoine to the company’s open culture.

“It is unfortunate that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to protect product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”

Lemoine’s dismissal was first reported in the Big Technology newsletter.

Lemoine’s interviews with LaMDA sparked wide discussion about recent advances in AI, public misunderstanding of how these systems work, and corporate accountability. Google previously fired Ethical AI co-leads Margaret Mitchell and Timnit Gebru after they warned of risks associated with the technology.


LaMDA uses Google’s most advanced large language models, a type of AI that recognizes and generates text. Researchers say these systems do not understand language or meaning. But they can produce deceptively human-like speech because they are trained on huge amounts of text crawled from the internet to predict the most likely next word in a sentence.
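The core objective described above — predict the most likely next word from observed text — can be illustrated with a toy example. This sketch is purely illustrative and is not how LaMDA works: real large language models use neural networks trained on web-scale data, while this counts word pairs in a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows each word in a tiny
# corpus, then predict the most frequent successor. The corpus and the
# bigram approach are illustrative assumptions, not anything from LaMDA.
corpus = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the dog sat on the rug"
).split()

# successors[w] maps each word observed after w to its count
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
print(predict_next("sat"))  # "on"
```

A model trained this way produces fluent-looking continuations of its training data without any representation of what the words mean — which is the researchers’ point about why human-like output is not evidence of understanding.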

After LaMDA talked to Lemoine about personhood and its rights, he began to investigate further. In April, he shared with senior executives a Google Doc titled “Is LaMDA Sentient?” containing some of his conversations with LaMDA, in which the chatbot claimed to be sentient. Two Google executives reviewed his claims and dismissed them.


Lemoine had previously been placed on paid administrative leave in June for violating the company’s privacy policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he plans to start his own AI company focused on video games with collaborative storytelling.
