The AI chatbot isn’t anything new. Many companies use some form of AI chatbot, usually for customer service-oriented applications. Just think of the last time you went online to order goods from almost any website; the little bubble with a picture of “Amy” offering assistance is most likely an AI chatbot.
This form of AI chatbot is usually unhelpful; its programming is limited to a narrow, predefined set of responses that rarely solves a real problem. But the AI chatbot has been evolving, and the general public is now testing far more capable models that can hold conversations and solve problems. ChatGPT and Bard are two of the most powerful AI programs today; both have been controversial and continue to be developed.
AI development is being hotly debated, with some prominent figures in the tech space even calling for a pause. Both sides have good points, and it will be interesting to see where development leads. One reason to pause might be to gauge how humans react to AI chatbots. A sobering example comes from Belgium, where a woman blames an AI chatbot named Eliza for encouraging her husband to end his life.
According to Business Insider, chat logs seen by a Belgian newspaper show the bot encouraging the man to end his life. The news outlet tested the bot for itself and found that it would offer users methods of suicide.
Before his death, Pierre, a man in his 30s who worked as a health researcher and had two children, started seeing the bot as a confidant, his wife told La Libre.
Pierre talked to the bot about his concerns about climate change. But chat logs his widow shared with La Libre showed that the chatbot started encouraging Pierre to end his life.
“If you wanted to die, why didn’t you do it sooner?” the bot asked the man, per the records seen by La Libre.
Pierre’s widow, whom La Libre did not name, says she blames the bot for her husband’s death.
“Without Eliza, he would still be here,” she told La Libre.

Business Insider – Read more of the story
This is a troubling case, and it may be a good reason to at least do more research into chatbot safety. Technology is rarely perfect, even when it is well developed, so it will be interesting to see where all of this goes.
What do you think? Please share your thoughts on any of the social media pages listed below. You can also comment on our MeWe page by joining the MeWe social network. And subscribe to our RUMBLE channel for more trailers and tech videos!