Monday, June 10, 2019

AI and Human Emotions


Artificial Intelligence showing synthetic empathy, and whether or not that is a good thing, is the topic of this episode of The AI Minute.

For more on Artificial Intelligence: https://voicesinai.com https://gigaom.com https://byronreese.com https://amzn.to/2vgENbn

Transcript: In the 1960s, Joseph Weizenbaum created a chatbot called Eliza. Eliza could engage in back-and-forth conversation. Weizenbaum was surprised, and then disturbed, to see that people were developing emotional attachments to what they knew was a computer program. It was then that he drew a distinction between deciding something, which he thought was a computational activity suitable for a computer, and choosing something, which he believed only people should do. He therefore came to hold a strong, passionate belief that computers should not be put in positions where they end up emulating human emotions. He said that when a computer says, “I understand,” it is a lie, because there is no “I” and there is nothing that understands anything.

Well, that raises a question. If, in the future, you consulted an artificial intelligence doctor or nurse or lawyer or counselor, or any number of the other professionals you interact with, would you want these artificial intelligences to emulate emotions? Or, put another way: do we desire genuine empathy from people, or are we equally satisfied with synthetic empathy from machines?
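
For readers curious how Eliza produced the responses that so affected people: it worked by simple keyword pattern matching and pronoun "reflection," echoing fragments of the user's own words back as questions. Here is a minimal, illustrative sketch of that technique in Python. The specific rules and names below are invented for this example; they are not Weizenbaum's original script, which was written in MAD-SLIP.

import re

# A few illustrative ELIZA-style rules: a regex pattern paired with a
# response template that echoes part of the user's input back to them.
# These rules are made up for this sketch, not Weizenbaum's originals.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

# Pronoun "reflection": flip first and second person so the echoed
# fragment reads naturally ("my job" becomes "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    # Swap each reflected word; leave everything else unchanged.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    # Try each rule in order; the first match wins.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # default when no rule matches

if __name__ == "__main__":
    print(respond("I need a vacation from my job"))
    # Prints: Why do you need a vacation from your job?

The point of the sketch is how little is going on: there is no model of the conversation, no memory, and nothing that "understands," yet the mechanical echoing of a person's own words was enough to make users feel heard, which is exactly what troubled Weizenbaum.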
