OpenAI cautions users: GPT-5 is powerful but not infallible

“Achieving total reliability is a massive challenge,” says Nick Turley, Head of ChatGPT


OpenAI’s latest language model, GPT-5, has taken a significant leap in power and precision compared to earlier versions, but users are being cautioned not to place blind trust in its responses.

Nick Turley, Head of ChatGPT, recently underscored that, despite ongoing improvements, ChatGPT should remain a “second opinion” tool rather than a definitive source of truth.

Turley, speaking with The Verge, was candid about the persistent issue of AI hallucinations. Even with advances in the underlying technology, GPT-5 occasionally generates information that appears convincing yet is factually incorrect.

OpenAI’s own assessments indicate that the model still produces wrong answers roughly 10 per cent of the time—an improvement, but not perfection.

Turley highlighted the complexity of the task: “Achieving total reliability is a massive challenge,” he said.

He made it clear that as long as language models lag behind human experts in accuracy, OpenAI will keep urging users to verify what the AI tells them. “Until we are provably more reliable than a human expert across all domains, we’ll continue to advise users to double-check the answers,” Turley noted.

For now, ChatGPT is best seen as a supplement—an extra set of eyes on complicated questions, not the only authority.

Why errors still happen

Large language models like GPT-5 generate answers by recognising patterns in enormous text datasets. This allows them to excel at natural, humanlike conversation, but it also means they can present incorrect facts on topics that aren’t well-represented in their training data, or even invent plausible-sounding details that aren’t true.

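To make that point concrete, here is a minimal, illustrative sketch of the underlying mechanism. It uses the openly available GPT-2 model through the Hugging Face transformers library purely as a stand-in (GPT-5’s weights are not public, and this is not OpenAI’s own code): the model simply samples the next token from a probability distribution learned from text, so a fluent continuation is not guaranteed to be a factual one.

```python
# Minimal sketch of next-token generation with an open model (GPT-2 as a
# stand-in for proprietary models like GPT-5). The model samples plausible
# continuations from a learned distribution; it has no built-in fact check.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on Mars was"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling picks likely-looking tokens; "likely" is not the same as "true",
# which is why confident but wrong statements (hallucinations) can appear.
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
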
To help users catch any slip-ups, OpenAI has equipped ChatGPT with search functionality—making it easier to verify answers by cross-referencing with trustworthy external sources.

Turley was optimistic about eventual solutions but tempered expectations by admitting that eliminating hallucinations will take time: “I’m confident we’ll eventually solve hallucinations, and I’m confident we’re not going to do it in the next quarter.”

Despite these challenges, OpenAI isn’t slowing its ambitions. Reports indicate the company is working on launching its own web browser, while CEO Sam Altman has even made tongue-in-cheek remarks about potentially buying Google Chrome if it ever came onto the market. Clearly, OpenAI intends to expand well beyond chatbots and continue shaping the way people interact with information online.

