As a long-time compulsive Internet user, I am aware of the emotional and psychological risks of this new technology. For now, the ability to find information faster (with certain caveats) means I actually spend less time on the internet than before.
But ChatGPT, for example, showers the user with compliments. I'm sure this encourages user engagement, but it is eerily similar to the "love bombing" of cults from the 70s and 80s. I don't know how to reconcile the long-term risks with the huge short-term gains in productivity.
Are there any technologies or apps that are worse than others, particularly for people with obsessive/compulsive tendencies?
A long time ago I added "be brief" to ChatGPT's custom instructions and I seem to get less blather. Haven't tried it without for a while, though.
Same, I've also had good results with "Don't ramble".
That’s better than “debrief”, I guess.
"you're a terse assistant with minimal affect."
It (and the rest of the blather in responses) is one of the two biggest factors keeping me from using ChatGPT more. But I assume they have numbers showing that people for some reason want it.
I've had custom instructions for ChatGPT for a couple years now to respond in as short and straightforward a way as possible (including quite a few more guidelines, like no exclamation points etc.). I recommend setting up something like that, it helps a lot to avoid blathering and sycophancy.
I version my custom instructions for ChatGPT in a private repo, it's currently over 200 words long.
At first I was concerned about how it would affect performance, polluting the context window with such a long prefix. Then one of ChatGPT's system prompts leaked, and I saw it was huge by comparison, so I figured mine is probably fine.
Highly encourage people to take advantage of this feature. Ask it to not do the things that annoy you about its "personality" or writing style.
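If you're on the API rather than the ChatGPT app, a system message does roughly the same job as custom instructions. A minimal sketch using the OpenAI Python SDK (the instruction wording and model name are just placeholder choices, not recommendations):

    # Rough API-side equivalent of ChatGPT custom instructions.
    # The instruction text and model name below are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Be brief. No flattery, no exclamation points, no restating the question."},
            {"role": "user", "content": "Explain what a context window is."},
        ],
    )
    print(resp.choices[0].message.content)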
I don't even think it's necessarily intentional. The idea of a 'yes man' being successful is very common among humans, and the supply is artificially constrained by the fact that it feels bad to be a sycophant. When you have a bunch of people tuning a model, it's no surprise to me that the variants that frequently compliment and agree with the tester float to the top.
It's funny how pendulum-like life situations can be. At its highest point on one side is abuse and constant berating. Once you let the pendulum go, it has to swing all the way to the highest point on the other side, the 70s/80s cults of "everything is love, man." Eventually the pendulum settles back to a point of equilibrium in the middle. Unless someone manipulates it again, which seems to always happen.
> As a long-time compulsive Internet user, I am aware of the emotional and psychological risks of this new technology.
> Are there any technologies or apps that are worse than others, particularly for people with obsessive/compulsive tendencies?
Social media, gambling, and "freemium game" sites/apps all qualify as worse than LLM-based offerings in the opinions of many. Not to mention the addictiveness of their use on smartphones.
However, these are relative comparisons and in no way exonerate LLM offerings.
In other words, it doesn't matter how much poop is atop an otherwise desirable sandwich. It is still a poop sandwich.
The machine flattery is a big turn off for me.
No, my simple and obvious statement was not "a deep and insightful point". No I am not "in the top 1% of people who can recognize this".
The other thing that drives me crazy is the constant positive re-framing with bold letters. "You aren't lazy, you are just *re-calibrating*! A wise move on your part!".
I don't find it ego stroking at all. It's obviously fake and patently stupid and that verbiage just mucks up the conversation.
The sycophancy is noticeably worse with 4o, the default model when you are not subscribed. My theory is that this is on purpose, to lure emotionally vulnerable users into paid subscriptions.
If you have a subscription, I genuinely haven't found any reason to use 4o since o4-mini-high became available.
(Every time I write out these model names I realize, again, how absurdly confusing they must be to casual users..)
Confusing indeed. Didn’t anyone in their marketing point out that maybe “4o” and “o4” are way too similar?
The flattery is also a turn off for me, yet I am not ignorant of the fact that even insincere flattery can be pleasurable. The voice model is even better at flattery - it actually sounds sincere!
Are the words different in the voice model? The text flattery I see can't sound sincere when spoken.
Yes, 4o can be quite wordy, but the voice chat model is much more brief. The voice chat model also is more than just text-to-speech; it correctly uses intonation to signify meaning.
I feel exactly the same, but I believe this is not universal. I see a similarity with the repulsion I feel when someone is being nice to me because of a job (or, more generally, when someone addresses me "as a customer"). Not everyone reacts the same way, and many people, despite being perfectly aware that the attention they get is purely calculated, are totally fine with it. It's just fair game to them. I would not be surprised if the same applied to AI obsequiousness: "yes, of course it's flattery; would you prefer to be insulted?" would probably be their answer to that dilemma.
There is something between flattery and insults. Just stick to the facts.
Some of the o4 variants do stick to the facts. They are quite annoying sometimes, because they resist correction. It's almost as if admitting wrong requires empathy.
> It's almost as if admitting wrong requires empathy.
It really doesn't. I don't know if I've used o4. But sticking to the facts is exactly about trying to get to the truth, not digging in to a position. New evidence can create new conclusions.
> in the top 1% of people who can recognize this
I've never had an AI respond to me with this kind of phrasing. General psychophancy, sure, but nothing that obnoxious. I haven't used ChatGPT much in the last year though, does it speak that way?
> psychophancy
Sycophancy. I don't usually correct misspellings, but this one is pretty unique.
But I like it as GP used it. It's like a portmanteau of "psychotic" and "sycophancy".
ChatGPT was doing this for a few weeks, several months ago, until they fixed it.