In a recent interaction with ChatGPT, I asked a straightforward question: "How many R's are in the word 'strawberry'?" What followed was not just a lesson in letter-counting but also an eye-opening insight into how AI generates responses. ChatGPT repeatedly gave the wrong answer, which led me to question the reasons behind the mistake. The explanation revealed a deeper issue: the AI had relied on patterns and probabilities derived from its training data rather than directly analyzing the word itself. What's more, the explanation it offered at first was a different one entirely.
This behavior is a fascinating, yet slightly unsettling, characteristic of generative AI models. Rather than meticulously counting the letters in 'strawberry,' the AI seemed to infer the answer based on what "felt" most likely to be correct. When questioned, it offered an explanation that appeared plausible—a rushed counting error—but was later revised to acknowledge the pattern-based nature of its response. This raises an important point: AI does not inherently distinguish between truth and falsehood in the way humans do.
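To see how different direct analysis is from pattern-based inference, consider that the deterministic version of this task is trivial to write down. The snippet below is a minimal sketch of my own (Python is my choice here, not part of the original exchange); it simply counts the letter rather than predicting an answer.

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter in a word, case-insensitively."""
    return word.lower().count(letter.lower())

# s-t-R-a-w-b-e-R-R-y: the letter R appears three times.
print(count_letter("strawberry", "r"))  # prints 3
```

A procedure like this cannot "feel" that an answer is likely; it either counts correctly or fails visibly, which is precisely the property the conversational model lacked.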
Truth vs. Patterns
Generative AI models like ChatGPT operate by identifying and generating patterns from vast datasets. They do not possess an understanding of concepts like "truth" or "lying." These are human constructs tied to intention and awareness—neither of which an AI model possesses. Instead, the AI’s goal is to produce an output that aligns with its training data and the context of the question. Whether the response is "correct" in a factual sense is secondary to whether it is plausible or contextually appropriate.
This distinction is critical. When ChatGPT produces a response, it is not "lying" if the answer is wrong; it is simply presenting the most statistically likely answer based on its training. The responsibility to verify the accuracy of the information lies with the user, not the machine. While this might seem like a limitation, it is also a reflection of the AI’s design—a tool for generating content, not an arbiter of truth.
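As a rough illustration of what "most statistically likely" means here, consider a toy next-token choice. The candidate answers and probabilities below are invented purely for this sketch; real models score tokens over enormous vocabularies, but the selection principle is the same: the output is whatever scores highest given the context, not whatever has been verified as true.

```python
# Toy sketch: the model emits the most probable continuation,
# not the verified-correct one. Probabilities are invented for illustration.
candidate_answers = {
    "2": 0.55,  # a frequent-looking pattern (factually wrong)
    "3": 0.35,  # the factually correct count
    "4": 0.10,
}

most_likely = max(candidate_answers, key=candidate_answers.get)
print(most_likely)  # prints "2": plausible by the numbers, wrong in fact
```

Nothing in that selection step checks the answer against the word itself, which is why responsibility for verification falls to the user.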
Trust, AI, and Training Delivery
This brings us to the question of trust, particularly in contexts where factual accuracy and nuanced understanding are paramount, such as training and education. Imagine replacing an experienced human trainer with an AI to deliver a professional development course. While an AI might generate content that sounds compelling and aligns with general patterns in the subject matter, it lacks the ability to validate those patterns against real-world complexities and exceptions. Just as ChatGPT inferred a plausible but incorrect answer about the word 'strawberry,' it could just as easily offer misleading or incomplete explanations in a training session.
The risks become even more pronounced in fields requiring deep expertise. An experienced trainer brings not only factual knowledge but also contextual awareness, the ability to adapt to questions, and the credibility to guide learners through uncertainties. AI, by contrast, might default to generating plausible-sounding but potentially inaccurate content—and, worse, might do so with the same confident tone it uses for correct answers. In professional training, this could undermine the learner's trust in the material, dilute the quality of the education, and even perpetuate errors.
The Future of Trust in AI
Looking ahead, the issue of trust in AI becomes even more complex. As these systems evolve, their ability to generate convincing responses will undoubtedly improve. But how will we determine whether future iterations are trustworthy, especially when the stakes involve shaping the knowledge and skills of professionals? Will we develop tools and frameworks to assess their reliability, or will we continue to rely on human oversight to separate fact from fiction?
For now, it seems prudent to approach AI-generated training with caution. While AI can assist in creating supplementary materials or automating routine tasks, the role of a knowledgeable human trainer remains irreplaceable. Trusting an AI like ChatGPT to provide accurate, nuanced training content requires an understanding of its limitations. It excels at generating plausible-sounding responses but falters when precise, context-rich, and adaptable instruction is required.
A Question for the Future
Perhaps the real question is not whether we can trust AI, but how we, as humans, will adapt to a world where the line between "trust" and "usefulness" becomes increasingly blurred. For training and education, this means recognizing AI's potential while safeguarding the irreplaceable value of human expertise.
What do you think? Should trust even be part of the conversation when discussing AI, or are we applying a human-centric concept to something fundamentally different? And when it comes to training and education, is there a risk in letting AI take the lead, or is it simply a tool best used in partnership with human expertise?