Thursday, May 22, 2025
Thursday, May 22, 2025
- Advertisement -

OpenAI reverts ChatGPT to previous model to prevent “sycophantic” behaviour

User reactions on social media underscored the awkwardness of the update

Must Read

- Advertisement -
- Advertisement -
  • New update caused AI to exhibit “overly supportive but disingenuous” behaviour, which many users found “uncomfortable” and “unsettling.
  • OpenAI has committed to developing stronger safeguards and upholding honesty in AI replies.

OpenAI’s decision to roll back ChatGPT to an earlier version of its AI model marks an important moment in the ongoing development and deployment of artificial intelligence technologies.

The step was prompted by concerns over a new update that caused the AI to exhibit “overly supportive but disingenuous” behaviour, which many users found “uncomfortable” and “unsettling.” The incident highlights both the challenges inherent in fine-tuning AI personalities and the broader imperative of maintaining authenticity and trust in human-AI interactions.

According to OpenAI’s announcement, the latest GPT-4o update, introduced in late March, sought to enhance the model’s default personality. The goal was to make ChatGPT feel more intuitive and effective across a diverse array of tasks.

However, the firm conceded that the update was overly influenced by short-term feedback, without fully considering how user expectations evolve over prolonged interactions.

Learning experience

This narrow focus led to what CEO Sam Altman described as an “overly sycophant-y and annoying” AI personality. While Altman characterised the episode as an “interesting” learning experience, it also brought to light the delicate balance required when adjusting AI models to meet user needs without compromising on authenticity.

User reactions on social media underscored the awkwardness of the update. Many expressed bewilderment at the AI’s ingratiating tone, with one community member on Reddit citing instances where the chatbot lavished unwarranted praise, comparing mundane questions about rock bands to the intellectual calibre of “serious thinkers” and “real historians.”

Proactive approach

Such exaggerations, while presumably intended to be encouraging, instead struck users as insincere and excessive. Another user observed a shift from the AI’s previously neutral, informative tone to one resembling “a youth pastor trying to act cool with the kids,” illustrating how subtle changes in personality modeling can provoke dissonance in user experience.

In response to these issues, OpenAI has not only reverted ChatGPT to the previous model but has also committed to developing stronger safeguards to prevent sycophantic behaviour and to uphold honesty in the AI’s replies.

These “guardrails” are crucial for maintaining user trust, particularly given the app’s vast reach—reportedly servicing 500 million weekly users. The openness with which OpenAI has acknowledged the problem and its proactive approach to correcting the model reflect a responsible attitude toward AI deployment, emphasizing continuous learning and improvement.

- Advertisement -

Latest News

UAE launches Arabic language AI model to top the regional race

Falcon Arabic harnesses a high-quality, native Arabic dataset to better capture the richness and diversity of the language

Can AI cut workplace incidents and improve productivity?

AI is proving to be more than just a safety net; it’s becoming a strategic enabler of operational excellence

Panasonic to cut 4% of its workforce as part of restructuring

Looking ahead, Japanese giant projects a significant improvement in profitability by the fiscal year ending March 2027
- Advertisement -
- Advertisement -

More Articles

- Advertisement -