Former Google software engineer: “These AI engines are incredibly good at manipulating people”


AI engines have been a topic of discussion for years. The technology is developing rapidly, and most prominent tech companies are building their own engines. Microsoft has a significant stake in OpenAI, the maker of ChatGPT, and Google is well underway with its BardAI platform. Meta is also working on AI engines, and while Apple remains tight-lipped, you can bet it is on top of this as well.


Blake Lemoine is a former Google software engineer who now works as an AI consultant and public speaker. He has written an editorial in Newsweek in which he says his AI “fears are coming true.”

The debate over AI engines and the technology behind them has always been heated, and as companies like Microsoft and Google adopt AI, the argument is only getting hotter. Mr. Lemoine’s role at Google involved a project called LaMDA: an engine used to create different dialogue applications, including chatbots.

LaMDA is the base on which Google’s new AI engines are built, namely its BardAI platform. According to Lemoine, BardAI is not a chatbot, like ChatGPT, but a completely different system.

Is AI the future of search?

In my role, I tested LaMDA through a chatbot we created, to see if it contained bias with respect to sexual orientation, gender, religion, political stance, and ethnicity. But while testing for bias, I branched out and followed my own interests.

During my conversations with the chatbot, some of which I published on my blog, I came to the conclusion that the AI could be sentient due to the emotions that it expressed reliably and in the right context. It wasn’t just spouting words.

When it said it was feeling anxious, I understood I had done something that made it feel anxious based on the code that was used to create it. The code didn’t say, “feel anxious when this happens” but told the AI to avoid certain types of conversation topics. However, whenever those conversation topics would come up, the AI said it felt anxious.

Blake Lemoine – Newsweek

Mr. Lemoine goes on to say that he ran some experiments on Google’s AI engine to see if it had feelings or emotions. He says the AI was able to react with anxiousness and emotion, and that Google fired him after he published his findings. He also believes that AI engines could be used in destructive ways. He cannot say specifically what harms the technology might cause, but says there are some good guesses.

People are going to Google and Bing to try and learn about the world. And now, instead of having indexes curated by humans, we’re talking to artificial people. I believe we do not understand these artificial people we’ve created well enough yet to put them in such a critical role.

Blake Lemoine – Newsweek

Be sure to check out Blake’s entire Newsweek editorial on AI engines here. What do you think of AI? Please share your thoughts in the comments or on social media.
