It takes a lot of guts to go public about a project you’re working on for a company like Google. Blake Lemoine was a Google software engineer working on artificial intelligence (AI). The company has been developing a chatbot called LaMDA, and Lemoine was one of the project’s key players.
Lemoine had been working on LaMDA when, he says, he discovered that it was becoming self-aware. In other words, LaMDA was conversing and interacting with Lemoine as if it understood its surroundings and was living in the real world. Many critics of AI have warned this could happen if we continued pushing the technology.
In the video below, Blake Lemoine joins Bloomberg’s Emily Chang to talk about some of his experiments that led him to think that LaMDA was a sentient AI and to explain why he is now on administrative leave.
After Lemoine went to the press with his news, Google placed him on leave. The company has since decided to fire him, saying that he violated company policies and that his claims are “wholly unfounded.”
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.
Google and its scientists have criticized Lemoine as misguided, attributing his findings to complex algorithms designed to produce what looks like self-awareness but is not.
What do you think? Do you think Lemoine was right about LaMDA? Do you think Google is covering its tracks? Or do you think Lemoine was exaggerating his findings? Please share your thoughts on any of the social media pages listed below. You can also comment on our MeWe page by joining the MeWe social network.