In a recent post, I introduced a no-code tool that allows you to swiftly create an AI chatbot, trained using your own unique dataset. The potential applications for this technology are vast and diverse, with one particularly promising use case being its deployment as a support assistant within online learning.
Instantaneous responses, personalized learning paths, and round-the-clock availability are among the many advantages an AI chatbot brings to the table. However, as we venture into this new territory, it’s important that we don’t lose sight of the potential pitfalls.

The question at the forefront of my mind is this: what if my AI chatbot, intended to support and facilitate learning, inadvertently provides erroneous advice or incorrect information? How would learners discern good advice from bad? And, crucially, how can an L&D team be alerted to and rectify these instances?
Let’s explore further.
AI models like GPT-4 learn to generate responses based on patterns in the data they were trained on. They do not ‘understand’ information as humans do. Instead, they statistically analyse the input they receive and match it to similar patterns they’ve encountered in their training data to generate an output.
When an AI chatbot is used in a controlled testing environment, it’s often possible to ensure that it only encounters questions or prompts that it’s well equipped to handle. This can make it appear very accurate and reliable. However, when such a model is exposed to a wide variety of real-world prompts, there’s always a chance it may generate incorrect or inappropriate responses. This could be due to the model encountering novel situations it was not trained on, or because of the inherent biases and errors in the training data itself.
So, what should you do?
When using AI chatbots in learning experiences or similar contexts, it’s important to consider how you can ensure the quality of the advice or information the AI provides.
Some methods to do this might include (a rough sketch of how they could fit together follows the list):
- Regularly auditing the AI’s responses: This can help identify incorrect or inappropriate advice, but it is labour-intensive, especially if the AI is handling a large volume of queries.
- Implementing a feedback mechanism: Learners could be allowed to flag incorrect or unhelpful responses. This can help identify issues, but also relies on users to accurately assess the quality of the advice.
- Using a hybrid AI-human system: In this approach, the AI handles initial queries, and human experts handle more complex or critical ones. This can ensure high-quality advice but requires sufficient human resources.
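To make these options a little more concrete, here is a minimal sketch of how they might sit around a chatbot. Everything in it is an assumption for illustration only: `ask_chatbot` stands in for whatever API or no-code tool you are using, the keyword-based routing is deliberately crude, and the CSV files simply represent "somewhere the L&D team can review later".

```python
# A minimal sketch of the three ideas above: every exchange is logged for
# periodic audit, learners can flag an answer, and high-stakes questions are
# routed to a human. The chatbot call, keyword list, and CSV files are all
# hypothetical placeholders.

import csv
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "chatbot_audit_log.csv"   # reviewed periodically by the L&D team
FLAG_LOG = "chatbot_flags.csv"        # learner-reported problem responses
ESCALATION_KEYWORDS = {"assessment", "grade", "deadline", "complaint"}


def ask_chatbot(question: str) -> str:
    """Placeholder for the real chatbot call (API, no-code tool webhook, etc.)."""
    return "This is a placeholder answer."


def needs_human(question: str) -> bool:
    """Very rough routing rule: send high-stakes topics straight to a person."""
    return any(word in question.lower() for word in ESCALATION_KEYWORDS)


def handle_question(question: str) -> tuple[str, str]:
    """Answer (or escalate) a question and log it for later auditing."""
    if needs_human(question):
        answer = "A tutor should answer this one; your question has been passed on."
    else:
        answer = ask_chatbot(question)
    interaction_id = str(uuid.uuid4())
    with open(AUDIT_LOG, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(
            [interaction_id, datetime.now(timezone.utc).isoformat(), question, answer]
        )
    return interaction_id, answer


def flag_response(interaction_id: str, reason: str) -> None:
    """Record a learner's 'this looks wrong' report against an interaction."""
    with open(FLAG_LOG, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(
            [interaction_id, datetime.now(timezone.utc).isoformat(), reason]
        )


if __name__ == "__main__":
    interaction_id, answer = handle_question("Can you explain spaced repetition?")
    print(answer)
    flag_response(interaction_id, "The answer contradicted the course notes.")
```

In a real deployment the routing rule would need to be far more nuanced than a keyword list, and the logs would feed directly into the regular audits described above; the point of the sketch is simply that all three safeguards can share one lightweight record of what the chatbot actually said.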
Remember, these methods can reduce the risk of providing bad advice but might not eliminate it entirely. A chatbot’s role and boundaries should be clearly communicated to learners, and it should not be solely relied upon for critical or high-stakes learning without human oversight.
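One way to make that boundary-setting concrete is to write the chatbot’s remit down in both the instructions it receives and the notice learners see. The snippet below is purely illustrative; the wording and the `SYSTEM_PROMPT` / `LEARNER_NOTICE` names are assumptions, not part of any particular tool.

```python
# Illustrative only: example boundary statements for a learning support chatbot.
# Adapt the wording, and the way it is delivered, to your own tool and audience.

SYSTEM_PROMPT = (
    "You are a study support assistant for our online course. "
    "Answer only from the course materials you have been given. "
    "If a question concerns grades, assessments, or personal circumstances, "
    "or if you are unsure, say so and direct the learner to the tutor team."
)

LEARNER_NOTICE = (
    "This assistant is automated and can make mistakes. "
    "Please flag any answer that looks wrong, and contact your tutor "
    "for anything that affects your assessment."
)

if __name__ == "__main__":
    print(LEARNER_NOTICE)
```

However you phrase it, the aim is the same: the more clearly learners understand what the assistant is for, the easier it is for them to judge when to look elsewhere.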