
Trustworthy AI: Can I trust my chatbot?

Written by David Blumenthal-Barby

 

The most important facts in 20 seconds
  • Building an AI chatbot in 20 minutes is technically possible, but definitely not a good idea.
  • Off-the-shelf AI chatbots harbor risks in terms of quality, safety, security and law.
  • Experience and a systematic approach to risk are crucial for the successful development of trustworthy AI chatbots.

 

Whether as a digital assistant for employees or as a way to optimize customer service: more and more companies want their own AI chatbot. It is tempting to build one with OpenAI's freely accessible API, which lets companies create their own GPTs in just a few minutes - but for several reasons, this is not a good idea.

 

Development of trustworthy bots requires systematic handling of risks

 

Several real-life examples have already revealed the risks that off-the-shelf AI chatbots can entail. The AI chatbot of the delivery service DPD, for example, insulted its own company as the "worst delivery company in the world" after a customer asked it to. The AI chatbot of the Canadian airline Air Canada falsely promised a customer a refund, which a court later ordered the company to pay. And a chatbot at a US car dealership was even talked into selling a vehicle for just one dollar. These examples alone make it clear that before AI goes live, chatbots must be protected against hallucinations (i.e. the invention of facts), the disclosure of internal information, and other risks. In other words, developing trustworthy bots and AI assistants requires a systematic approach to risk. Below, we take a closer look at the key areas of quality, safety, security, and law.

 

Quality: are the answers good enough to be really helpful?

 

How successfully a chatbot performs depends on its responses. Poor response quality alone can jeopardize user acceptance and, ultimately, the ROI of the entire AI investment. Naive implementations produce chatty bots that let users digress into any topic. This drives up token costs unnecessarily (a token is the text unit by which providers such as OpenAI bill the use of their models) and often leads to other problems, such as invented facts or recommendations of competing products. Keeping a chatbot on topic is not easy and requires explicit programming of the AI, as the sketch below illustrates.
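As a minimal sketch of what such explicit programming can look like, the following example uses the OpenAI Python SDK to pin the bot to its topic via the system prompt and to cap token spend. The company name, model choice, and parameter values are placeholder assumptions for illustration, not recommendations from this article:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "ACME GmbH" and the product scope are hypothetical placeholders.
SYSTEM_PROMPT = (
    "You are a customer support assistant for ACME GmbH. "
    "Answer only questions about ACME products and services. "
    "If asked about anything else, politely decline and steer the "
    "conversation back on topic. Never recommend competing products."
)

def answer(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        max_tokens=300,   # cap response length to limit token costs
        temperature=0.2,  # lower temperature curbs chatty digressions
    )
    return response.choices[0].message.content
```

A system prompt alone is no hard guarantee (see the security section below), but combined with a response length cap it noticeably reduces digressions and token costs.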

 

Safety: can the chatbot's behavior cause damage?

 

Invented names, false information, fictitious court decisions: if the generative AI language model invents facts, this can cause enormous damage to the company. The problem is that hallucinations are usually difficult to detect: chatbots give plausible, professional-sounding answers and are often used in areas where users have no domain knowledge of their own. An example: a machine breaks down in a factory, and the B2B AI sales assistant recommends the wrong parts for the repair. The customer orders unsuitable spare parts - resulting in high financial losses for returns, shipping, processing, and downtime, for which the spare-parts provider, as operator of the AI, will most likely be held liable, just as in the case of the Canadian airline.
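One common mitigation, offered here as a hedged sketch rather than a complete fix, is to ground the bot in vetted reference material (for instance, an excerpt from the parts catalog) and make it refuse when that material does not contain the answer. The prompt wording and model choice are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Force the model to answer only from the supplied reference text
# and to refuse otherwise, instead of improvising an answer.
GROUNDED_PROMPT = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly:
"I don't know - please ask a human expert."

Context:
{context}

Question: {question}"""

def grounded_answer(question: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": GROUNDED_PROMPT.format(context=context, question=question),
        }],
        temperature=0,  # favor deterministic answers for factual lookups
    )
    return response.choices[0].message.content
```

Grounding reduces, but does not eliminate, hallucinations; for high-stakes answers such as spare-part recommendations, human review or automated cross-checks remain necessary.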

 

Security: can malicious users hack the bot?

 

There are already attacks that specifically target chatbots and their large language models (LLMs) and that the usual IT security measures do not cover. One example is so-called "prompt injection", in which an attacker overrides or alters the operator's basic instructions for controlling the chatbot's behavior (the so-called system prompt). Among other things, these instructions determine the "personality" and tone of voice of the bot and influence its reactions in certain dialog situations. Through injection, a bot can be made to swear, ridicule its operator, or respond in a particular way. As the DPD example shows, this can lead to bad press and possibly reputational damage. The misbehavior, however, remains limited to the attacker's own chat session; other users of the same chatbot will not notice any of it - unless another, more serious security vulnerability in the system gives hackers access to the backend.
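To make this concrete, here is a hedged sketch of one first-line defense: keeping operator instructions strictly in the system role and screening user input with a lightweight heuristic filter. The patterns below and the `answer` helper from the quality sketch above are illustrative assumptions; a check like this is easy to evade and is never sufficient on its own:

```python
import re

# Heuristic screen for common injection phrasings. Real deployments
# combine this with role separation, output filtering, and moderation
# APIs; a pattern list alone can be bypassed with rephrasing.
INJECTION_PATTERNS = [
    r"ignore (all|your|any|previous) (instructions|rules)",
    r"you are now",
    r"reveal .*system prompt",
]

def looks_like_injection(user_message: str) -> bool:
    text = user_message.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

def safe_answer(user_message: str) -> str:
    if looks_like_injection(user_message):
        # Decline instead of forwarding suspicious input to the model.
        return "Sorry, I can't help with that request."
    # Operator instructions stay in the system role (see the quality
    # sketch above); user text is never concatenated into them.
    return answer(user_message)
```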

 

Legal: is the bot operator exposed to specific legal risks?

 

New AI technology can give rise to legal risks that increasingly challenge companies. One example is the so-called "right to be forgotten" under the General Data Protection Regulation (GDPR), which entitles individuals to request the deletion of their personal data if there is no compelling reason for its further processing. Whether a chatbot operator can comply depends on the technical approach chosen to augment the language model with domain-specific knowledge. Retrieval-augmented generation (RAG), for example, retrieves information from a database at query time, so data can simply be deleted there - making the right to be forgotten easy to implement (see the sketch below). The alternative "fine-tuning" approach, by contrast, bakes knowledge into the model itself through additional training; there, the targeted deletion of individual data is practically impossible.
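A minimal sketch of what deletion looks like in a RAG setup, assuming a Chroma vector store and hypothetical document IDs and metadata tags (any vector store with metadata-filtered deletion works the same way):

```python
import chromadb

# Hypothetical RAG knowledge base backed by a Chroma collection.
client = chromadb.Client()
docs = client.get_or_create_collection("knowledge_base")

# Ingest: tag every chunk with the data subject it belongs to,
# so personal data can later be located and removed in one step.
docs.add(
    ids=["doc-1", "doc-2"],  # hypothetical document IDs
    documents=[
        "Contract note for customer 4711 ...",
        "General shipping policy ...",
    ],
    metadatas=[{"data_subject": "4711"}, {"data_subject": "none"}],
)

# Right to be forgotten: delete all chunks belonging to one person.
docs.delete(where={"data_subject": "4711"})
```

Because the knowledge lives in the database rather than in the model weights, a single delete call honors the request; with fine-tuning, the equivalent would require retraining the model without the affected data.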

 

Conclusion: chatbot behavior is a bigger challenge than the technology

 

So let's be clear: building trustworthy AI chatbots and assistants is not as trivial as it might seem at first glance. How the AI-supported bot behaves in certain situations is ultimately a much greater challenge than the technology itself. At best, bad bots merely jeopardize user acceptance and the operator's ROI; at worst, they pose financial, legal, and reputational risks that can cause major damage. This does not mean that companies should avoid generative AI models and their many benefits - quite the opposite. Our AI experts will be happy to support you in systematically managing these risks and jointly building trustworthy AI chatbots and assistants that deliver real added value!

You can get more tips and information on the value-adding use of AI in your company at THE SESS10N on October 16, 2024 - with inspiring speakers, co-creations and more.

 

You can reach our AI experts directly here. 

 


 
