The Causes of Chatbot Failures Explored at World AI Show Singapore 2018

Gerardo Salandra, CEO of Rocketbots and Chairman of the AI Society of Hong Kong, addressed an enthusiastic audience at the World AI Show in Singapore, focusing on the reasons behind chatbots' shortcomings.

Rocketbots, established by Salandra, initially operated as a chatbot agency in Hong Kong before evolving into an AI-driven assistance platform. The agency's early days provided valuable insights into the chatbot industry, particularly its technological flaws and their impact on how businesses perceive AI.

According to Salandra, one of the key issues with chatbots lies in their persistent failures: in virtually every interaction between bots and humans, some degree of failure is almost inevitable. This is especially true of retrieval-based AI, which Salandra contends scarcely deserves the AI label. It relies on basic decision trees to simulate conversation and uses a computer's speed to create the illusion of intelligence. Learning in retrieval-based AI amounts to feeding input data into machine learning that refines an inherently flawed decision tree. Insufficient training, subpar data, unforeseen conversational turns, and colloquial human language are the primary causes of chatbot failure.
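The failure mode Salandra describes can be sketched in a few lines. Below is a minimal, hypothetical retrieval-based bot: a hand-built decision tree keyed on expected words. The branch names, keywords, and replies are invented for illustration; the point is that any input outside the tree drops straight to the fallback.

```python
# Minimal sketch of a retrieval-based chatbot: a hand-built decision
# tree keyed on expected words. Anything outside the tree fails.
DECISION_TREE = {
    "greeting": {
        "keywords": {"hi", "hello", "hey"},
        "reply": "Hello! Are you asking about billing or support?",
    },
    "billing": {
        "keywords": {"billing", "invoice", "charge"},
        "reply": "You can view invoices under Account > Billing.",
    },
    "support": {
        "keywords": {"support", "help", "broken"},
        "reply": "Let me connect you with a support agent.",
    },
}

FALLBACK = "Sorry, I didn't understand that."  # the inevitable failure path

def respond(message: str) -> str:
    """Return the first branch whose keyword appears in the message."""
    words = set(message.lower().split())
    for branch in DECISION_TREE.values():
        if branch["keywords"] & words:
            return branch["reply"]
    return FALLBACK  # colloquial or unforeseen input lands here

if __name__ == "__main__":
    print(respond("Hello there"))                   # matches "greeting"
    print(respond("my invoice looks wrong"))        # matches "billing"
    print(respond("yo the thingy ain't workin"))    # colloquial -> fallback
```

No amount of machine learning on top of this structure removes the fallback path; it only moves more phrases into the tree, which is why colloquial or unforeseen language remains a persistent source of failure.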

Large enterprises have allocated substantial resources to capitalize on chatbots, especially to reduce the cost of customer support staff. Nevertheless, chatbot failures have left a negative impression on customers and, consequently, on the companies that invested time and capital in what proved to be an inefficient solution. This realization prompted Rocketbots to transition to a SaaS customer communication platform built on a different form of AI, which the company calls Neural Networked AI. Neural networks acquire knowledge in a manner akin to humans: positive feedback reinforces sound decisions, while negative feedback discourages poor ones. While a model of this nature addresses most of the issues that plague retrieval-based chatbots, it is not without its own shortcomings.
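The reinforce-and-discourage dynamic described above can be illustrated with a toy example. The code below is a deliberately simplified sketch, not Rocketbots' actual model: a single artificial neuron nudges its weights toward a candidate reply that earned positive feedback and away from one that earned negative feedback. The feature names and values are invented for illustration.

```python
# Toy illustration of feedback-driven learning: positive feedback
# reinforces a choice, negative feedback suppresses it.
# Illustrative sketch only; real systems use full neural networks.

def predict(weights, features):
    """Score a candidate reply as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, feedback, lr=0.1):
    """Reinforce (feedback=+1) or suppress (feedback=-1) a choice."""
    return [w + lr * feedback * f for w, f in zip(weights, features)]

weights = [0.0, 0.0, 0.0]

# Hypothetical feature vectors for two candidate replies
# (e.g. politeness, relevance, verbosity).
good_reply = [1.0, 1.0, 0.2]
bad_reply = [0.1, 0.2, 1.0]

# Repeated feedback: praise the good reply, penalize the bad one.
for _ in range(20):
    weights = update(weights, good_reply, +1)
    weights = update(weights, bad_reply, -1)

# After training, the model scores the good reply higher.
print(predict(weights, good_reply) > predict(weights, bad_reply))  # True
```

The same mechanism is also the weakness Salandra highlights next: the model learns whatever the feedback rewards, so poorly curated feedback trains poor behavior.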

Salandra cites the example of Microsoft’s AI Tay, which learned from interactions on social media during a public demonstration. Tay assimilated all provided data, culminating in an AI with opinions and statements that were offensive, racist, and laden with profanity. This instance demonstrates that Neural Networks require proper nurturing, akin to a young child, in order to cultivate optimal behavior. While it may take time before AI can be fully autonomous in human-bot interactions, with effective nurturing and clearly defined objectives, there is reason to believe that conversations could be automated to a certain extent.

If you want to learn more, visit our website today!
