It might speak with the polish of a seasoned expert, but ChatGPT is still prone to moments of confusion—and even its own creator wants you to be careful about believing everything it says.
Even smart-sounding answers can be wrong
We’ve all been there. You ask ChatGPT a question, and out comes a perfectly structured, confidently written answer that sounds bang on. But don’t be fooled—just because it sounds right doesn’t mean it is. That’s the message Sam Altman, CEO of OpenAI, wants to make clear.
Speaking recently on a company podcast, Altman reminded users that ChatGPT is still an imperfect tool. Despite massive improvements in performance and reasoning, it still suffers from what AI researchers call “hallucinations”—moments when it produces false or misleading information with utter conviction.
“People place a surprising amount of trust in ChatGPT,” Altman said, “which is interesting, because the AI tends to hallucinate.” In other words, it’s great at sounding intelligent, but it can still get facts wrong. That’s why the platform includes a polite disclaimer beneath the chat box: “ChatGPT can make mistakes. Consider checking important information.” And frankly, it’s advice worth following.
A tool, not a truth-teller
Altman was clear: ChatGPT should be a source of inspiration, not an authority to follow blindly. His view? This is technology “you don’t trust too much.”
It’s a useful reminder in an age where digital tools are becoming ever more embedded in our decision-making. Whether you’re writing a cover letter or fact-checking an obscure bit of history, ChatGPT is a helpful assistant—not an all-knowing oracle.
Why there are no ads (for now)
Unlike many other free online services, ChatGPT doesn’t bombard you with ads. Currently, the only way it generates income is through subscriptions—users paying to access faster, more capable versions of the chatbot.
Altman isn’t against advertising as a business model. He even confessed to buying quite a few things off Instagram ads himself. But when it comes to ChatGPT, introducing sponsored content could prove tricky. Why? Because users already trust the AI deeply, and any suggestion that commercial interests might influence its responses would damage that trust.
The challenge would be to prove, convincingly, that advertising doesn’t compromise the neutrality or accuracy of its answers—a tough sell, especially when the product is built around trust and transparency.
A future funded by ads?
Despite the hurdles, monetising through advertising isn’t off the table. OpenAI, after all, is not yet profitable and continues to rely heavily on external funding to support its development and operations.
Advertising could help ease that financial pressure. But for now, Altman and his team are playing it safe. Because when you’re in the business of building an AI people turn to for answers, earning—and keeping—their trust might be your most valuable asset.