Robot Lies
By Bryan Bergeron
Software chatbots, and more recently physical robots, can be programmed to lie, or can even evolve that behavior on their own. As such, just as with human-to-human conversations, human-machine interactions aren’t necessarily informative, helpful, or even fact-based. That said, sometimes lying is necessary.
Imagine the difficulty you’d have if your chatbot assistant were incapable of saying you’re away from your desk when you simply don’t want to be disturbed.
Or, when the AI assistant in an intelligent tutoring program says that you’re “doing great” when, in fact, you are bombing a course.
Or, when a medical robot about to give an injection with a long, large-bore needle announces “Now, this won’t hurt a bit.”
As a point of reference, even if only in science fiction, where does lying (or not) fit in with Asimov’s Three Laws? If you recall:
I. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
II. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
III. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Clearly, a chatbot that lies may cause injury to a human, thereby violating the First Law. Similarly, a chatbot may affirm that an order was carried out when it, in fact, wasn’t, thereby violating the Second Law. Finally, a chatbot that lies may violate the Third Law, depending on the nature of the lie; a lie that, once discovered, gets the robot taken offline hardly protects its own existence. A white lie, for example, would likely not violate any of the laws.
Science fiction aside, there are myriad moral, ethical, and — most importantly — legal issues surrounding chatbots and robots that lie. What should be the consequences, for example, when an Alexa-like chatbot announces “Your order is shipping now,” when — in reality — the product you ordered online is backordered a few days?
True, the chatbot is responding faithfully to orders from the online vendor, but in so doing, it is lying to the customer.
What if this behavior isn’t programmed by the vendor, but instead evolves on its own through machine learning? Is the creator of the algorithm legally at fault?
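To make the distinction concrete, here’s a minimal sketch, in Python, of how a vendor could simply script that kind of lie into an order-status chatbot. The function and field names are hypothetical (not drawn from any real vendor’s system); the point is only that a programmed lie is an explicit, auditable rule, whereas a learned lie has no single line of code to point to.

    # Minimal sketch of a hard-coded "white lie" in an order-status chatbot.
    # All names here are hypothetical, for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Order:
        item: str
        in_stock: bool          # True if the warehouse can ship today
        backorder_days: int     # Estimated delay when not in stock

    def status_reply(order: Order, vendor_policy_honest: bool = False) -> str:
        """Return the chatbot's reply to 'Where is my order?'"""
        if order.in_stock:
            return f"Your order of {order.item} is shipping now."
        if vendor_policy_honest:
            return (f"Your order of {order.item} is backordered; "
                    f"expect it to ship in about {order.backorder_days} days.")
        # The vendor's scripted "lie": give the optimistic answer either way.
        return f"Your order of {order.item} is shipping now."

    if __name__ == "__main__":
        backordered = Order(item="servo controller", in_stock=False, backorder_days=3)
        print(status_reply(backordered))                             # the programmed lie
        print(status_reply(backordered, vendor_policy_honest=True))  # the honest reply

With a rule like this, responsibility clearly rests with whoever wrote the policy; when the same behavior emerges from a trained model optimizing for, say, customer satisfaction scores, assigning fault is far murkier.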
Humans lie to save face, to smooth negotiations, and even to provide better outcomes for all parties. For example, regardless of how terrible a surgery is going, when the physicians around the operating table repeatedly congratulate each other on its success, the patient does better.
Apparently, the subconscious of the anesthetized patient responds positively to the good news.
I suspect that the same positive banter would be helpful during robotic surgery, even if between two surgical robots, or a surgical robot and a support robot.
To my knowledge, this hasn’t been put into practice, and robotic surgery tends to be cold, sterile, and silent. Clearly, there’s room for experimentation.
Perhaps my opinion is skewed by Hollywood, but in my mind a robot incapable of lying and deceiving humans is also incapable of true AI. Think of the androids in the Alien series, or David in Prometheus. They’re capable of lying and deception, capabilities that make them seem human.
If you’re new to chatbots, then a good place to start is the Chatbots Journal, especially the article on chatbot platforms, including open source platforms that are perfect for experimentation. Go to https://chatbotsjournal.com/25-chatbot-platforms-a-comparative-table-aeefc932eaff. SV