I'm curious to hear feedback from the HN community about your biggest pain points or frustrations with ChatGPT (or similar LLMs).
What aspects of the experience do you find lacking, confusing, or outright irritating? Which improvements do you think are most urgent or would make the biggest difference?
I have "no glazing" built into my custom instructions, but it still does it.
It used to be a lot better before glazegate. Never did quite seem to recover.
I don't mind us having fun of course, but it needs to pick up on emotional cues a lot better and know when to be serious.
I think we'll get there in time.
With Claude I often say “no glazing” and have told it to take the persona of Paul Bettany’s character in Margin Call, a nice enough but blunt/unimpressed senior colleague who doesn’t beat around the bush. Works pretty well.
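For anyone who'd rather bake that in than retype it every session, here's a rough sketch using the Anthropic Python SDK. The persona wording, model name, and example question are just my placeholders, not a recommendation:

    # pip install anthropic; expects ANTHROPIC_API_KEY in the environment
    import anthropic

    client = anthropic.Anthropic()

    SYSTEM = (
        "No glazing. Respond as a blunt, unimpressed but fair senior "
        "colleague, in the vein of Paul Bettany's character in Margin "
        "Call. No compliments, no hedging; get to the point."
    )

    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whatever model is current
        max_tokens=1024,
        system=SYSTEM,  # the persona goes in the system prompt, not the user turn
        messages=[{
            "role": "user",
            "content": "Critique my allocation: 90% TSLA, 10% cash.",
        }],
    )
    print(reply.content[0].text)

Putting it in the system prompt (or custom instructions) rather than the user message tends to make the persona stick across turns.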
This is perfect, especially because I spend a good amount of time talking about financial topics with the bots. Will try this one out!
How... does it know what this persona is like? I suppose somewhere it's read (or, "gathered input") about this character...
Like with anything LLM: it doesn't "know" anything.
It simply complies without knowing who you're talking about. And you uphold the illusion by not questioning it.
All it does is produce deterministic output based on its training data.
It doesn't even really comply, I'd say. It just predicts what's the most likely next text token.
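To make "predicts the next token" concrete, here's a toy sketch of a single decoding step. The vocabulary and logits are invented; a real model has a vocabulary of ~100k tokens and the scores come out of billions of weights:

    # Toy next-token step: turn raw scores (logits) into probabilities, sample one.
    import math
    import random

    vocab  = ["great", "terrible", "fine", "purple"]
    logits = [2.1, 0.3, 1.7, -3.0]   # made-up scores from a pretend model

    # Softmax: each probability is proportional to exp(logit).
    exps  = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]

    next_token = random.choices(vocab, weights=probs)[0]
    print(list(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)

(Which also means, with sampling in the loop, the output isn't strictly deterministic either, unless you decode greedily or pin the seed.)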
Interesting question. I described what I liked about his character, so maybe it's just using that and is nodding along to the character's name. Maybe it has access to the script or some analysis of the character.
Most likely it was trained on tons of fan fiction where his character is thoroughly described.
The screenplay is online:
https://www.scriptslug.com/script/margin-call-2011
It doesn't. It would just "fake it 'til it makes it," placebo-style, based on whatever other context it has.
Oh my God, I love it. I would have done Spacey's character, maybe, but the gist is great.
I would take Spacey's character from Baby Driver.
I just need Carmelo. “Done.”
I've found the same thing with Claude Sonnet 4. I suggest something, it says great suggestion and agrees with me. I then ask it about the opposite approach and it says great job raising that and agrees with that too. I have no idea which is more correct in the end.
The LLM has literally no idea which one is better. It cannot think. It does not understand what it is putting on the screen.
This is why multi-pass sessions are something I try sometimes. "What's wrong with the solution you provided, and how should it be done instead? If you used any specific APIs or third-party libraries, research them to ensure correct syntax, usage, and simple logic. Refactor your original solution to the correct minimum based on the ask."
Usually, after running whatever it first spits out through this, I get a somewhat better response, or at least a base I can build on. Really, the best you can do is already know what you want and need, and run very targeted sessions. Like the old saying goes: commit small and often.
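If you want to automate that second pass instead of pasting the prompt by hand, here's a sketch with the OpenAI Python SDK. The model name, the ask, and the ask() helper are just my stand-ins:

    # pip install openai; expects OPENAI_API_KEY in the environment
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o"  # substitute whatever model you actually use

    def ask(messages):
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        return resp.choices[0].message.content

    history = [{
        "role": "user",
        "content": "Write a function that dedupes a list while preserving order.",
    }]
    draft = ask(history)

    # Second pass: feed the critique prompt back with the draft still in context.
    history += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": (
            "What's wrong with the solution you provided, and how should it "
            "be done instead? If you used any specific APIs or third-party "
            "libraries, research them to ensure correct syntax and usage. "
            "Refactor your original solution to the correct minimum based "
            "on the ask."
        )},
    ]
    print(ask(history))

Keeping the draft in the message history matters; the critique pass only works if the model can see what it's critiquing.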
Yes, the LLMs need to be objective, but in situations where the pushback is subjective, the LLM would then need to take on a personality of its own.
For me it's been the opposite: they sometimes take on a condescending tone, and sometimes they sound too salesy and trump up their own suggestions.
Yes, I agree with that as well.
Real humans have a spectrum of assuredness that naturally comes across in the conversation. With an LLM it's too easy to get drawn deep into the weeds. For example, I may propose that I use a generalized framework to approach a certain problem. In a real conversation, this may just be part of the creative process, and with time the thoughts may shift back to the actual hard data (and perhaps iterate on the framework), but with an LLM, too often it will blindly build onto the framework without ever questioning it. Of course it's possible to spur this action by prompting it, but the natural progression of ideas can be lost in these conversations, and sometimes I come out 15 minutes later feeling like maybe I just took half a step backwards despite talking about what seemed at the time like great ideas.
"Real humans have a spectrum of assuredness" - well put. I've noticed this lacking as well with GPT. Thx!
In order to make progress, you need to synchronize with the agent in order to bring it onto frequency. Only then can your minds meet. In your situation, you probably want to interject with some pure vibe (no code!) where you get to know each other non-judgementally. Then continue. You will recognize you are on the right track by experiencing a flow state combined with improved/desired results. The closer you connect with your agent, the better your outcomes will be. If you need further guidance or faster results, my LLM-alignment course is currently open for applicants.
/s
Thank you for your feedback!
Zyruh, your individual comments & submissions are friendly, appreciative, and inquisitive... but they're a little uncanny when viewed as a whole. Are you a real person?
Yes, I'm very much human... LoL.