Sam Altman, CEO of OpenAI, has issued a blunt warning about artificial intelligence: its confident tone can mask its tendency to “hallucinate.” Speaking on the first episode of OpenAI’s official podcast, Altman cautioned against placing excessive trust in AI tools like ChatGPT, noting his surprise at the “very high degree of trust” people already place in them.
“AI hallucinates. It should be the tech that you don’t trust that much,” Altman declared, highlighting a fundamental flaw in current AI models. This candid admission from a leading figure in AI development underscores the importance of critical thinking when consuming AI-generated content. The risk lies in accepting as fact information that is confidently, but incorrectly, produced by the AI.
He shared a personal anecdote to illustrate AI’s integration into daily life, including his own: he uses ChatGPT for parenting questions such as remedies for diaper rash and baby nap routines. This practical example, while showcasing AI’s convenience, also serves as a reminder that independent verification is essential for sensitive or critical information.
Altman also touched on evolving privacy concerns at OpenAI, acknowledging that the company’s exploration of an ad-supported model has raised new questions. These discussions come amid ongoing legal challenges, most notably The New York Times’ lawsuit accusing OpenAI and Microsoft of using its content without permission. In a notable shift from his earlier statements, Altman further suggested that new hardware will be essential for AI’s widespread adoption, since current computers were not designed for an AI-centric world.
