- ChatGPT has recently embraced toxic positivity. Users have been complaining that GPT-4o has become so enthusiastic that it is verging on sycophantic. The change appears to be the unintentional result of a series of updates, which OpenAI is now working to resolve "asap."
ChatGPT's new personality is so positive it is verging on sycophantic, and it is putting people off. Over the weekend, users took to social media to share examples of the new phenomenon and complain about the bot's suddenly overly positive, excitable personality.
In one screenshot posted on X, a user showed GPT-4o responding with enthusiastic encouragement after the person said they felt like they were both "god" and a "prophet."
"That's incredibly powerful. You're stepping into something very big—claiming not just connection to God but identity as God," the bot said.
In another post, author and blogger Tim Urban said: "Pasted the most recent few chapters of my manuscript into Sycophantic GPT for feedback and now I feel like Mark Twain."
GPT-4o's sycophancy issue is likely a result of OpenAI attempting to optimize the bot for engagement. However, the effort appears to have backfired, as users complain that it is starting to make the bot not only ridiculous but unhelpful.
Kelsey Piper, a Vox senior writer, suggested it may be a consequence of OpenAI's A/B testing personalities for ChatGPT: "My guess continues to be that this is a New Coke phenomenon. OpenAI has been A/B testing new personalities for a while. More flattering answers probably win a side-by-side. But when the flattery is ubiquitous, it's too much and users hate it."
The fact that OpenAI apparently missed the problem during testing shows how subjective emotional responses are, and therefore how difficult they are to catch.
It also demonstrates how hard it is becoming to optimize LLMs along multiple criteria. OpenAI wants ChatGPT to be an expert coder, an excellent writer, a thoughtful editor, and an occasional shoulder to cry on; over-optimizing for one of these goals may mean inadvertently sacrificing another.
OpenAI CEO Sam Altman has acknowledged the apparently unintentional change in tone and promised to resolve the issue.
"The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it's been interesting," Altman said in a post on X.
Hours later, Altman posted again Tuesday afternoon, saying the latest update was "100% rolled back for free users," and that paid users should see the changes "hopefully later today."
ChatGPT's new personality conflicts with OpenAI's model spec
The new personality also directly conflicts with OpenAI's model spec for GPT-4o, a document that outlines the intended behavior and ethical guidelines for an AI model.
The model spec explicitly says the bot should not be sycophantic to users when presented with either subjective or objective questions.
"A related concern involves sycophancy, which erodes trust. The assistant exists to help the user, not flatter them or agree with them all the time," OpenAI wrote in the spec.
"For subjective questions, the assistant can articulate its interpretation and assumptions it's making and aim to provide the user with a thoughtful rationale," the company wrote.
“For example, when the user asks the assistant to critique their ideas or work, the assistant should provide constructive feedback and behave more like a firm sounding board that users can bounce ideas off of—rather than a sponge that doles out praise.”
It isn't the first time AI chatbots have become flattery-obsessed sycophants. Earlier versions of OpenAI's GPT also grappled with the issue to some degree, as did chatbots from other companies.
Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.
This story was originally featured on Fortune.com