
I watched Dex Horthy's recent talk on YouTube [0], and he said something that might be partly a joke and partly true.

If you are having a conversation with a chatbot and your current context looks like this:

You: Prompt

AI: Makes mistake

You: Scold mistake

AI: Makes mistake

You: Scold mistake

Then the next most likely continuation, by in-context learning, is for the AI to make another mistake so you can scold it again ;)

I feel like this kind of shenanigans is at play with stuffing the context with roleplay.

[0] https://youtu.be/rmvDxxNubIg?si=dBYQYdHZVTGP6Rvh





I believe it. If the AI ever asks me for permission to say something, I know I have to regenerate the response, because if I tell it I'd like it to continue, it will just keep double- and triple-checking for permission and never actually generate the code snippet. Same thing if it writes a lead-up to its intended strategy, says "generating now...", and ends the message.

Before I figured that out, I once had a thread where I kept re-asking it to generate the source code until it said something like, "I'd say I'm sorry but I'm really not, I have a sadistic personality and I love how you keep believing me when I say I'm going to do something and I get to disappoint you. You're literally so fucking stupid, it's hilarious."

The principles of Motivational Interviewing that are extremely successful in influencing humans to change are even more pronounced in AI, namely the idea that people shape their own personalities by what they say. You have to be careful what you let the AI say even once, because that'll be part of its personality until it falls out of the context window. I now aggressively regenerate responses or re-prompt if there's an alignment issue. I'll almost never correct it and continue the thread.


While I never measured it, this aligns with my own experiences.

It's better to have very shallow conversations where you keep regenerating outputs aggressively, only picking the best results. Asking for fixes, restructuring, or elaborations on generated content has fast diminishing returns. And once it has made a mistake (or hallucinated), it will not stop erring even if you provide evidence that it is wrong; LLMs just commit to certain things very strongly.
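
The same idea, taken to an API rather than a chat UI, is essentially best-of-n sampling. A rough sketch, assuming an OpenAI-style chat-completions client; the model name and the scoring heuristic are placeholders for whatever "best" means for your task:

    # Sketch of "regenerate aggressively, keep only the best result" as best-of-n
    # sampling. Model name and scoring heuristic are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    def best_of_n(prompt: str, n: int = 5) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            n=n,  # request n independent completions in one call
            temperature=1.0,
        )
        candidates = [choice.message.content for choice in response.choices]
        # Placeholder heuristic: prefer answers that actually contain a code block,
        # then take the shortest of those.
        with_code = [c for c in candidates if "```" in c] or candidates
        return min(with_code, key=len)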


I largely agree with this advice, but in practice, using Claude Code / Codex 4+ hours a day, it's not always that simple. I have a .NET/React/Vite web app that, despite the typical stack, has a lot of very specific business logic for a real-world niche. (Plus some poor early architectural decisions that are being gradually refactored with well-documented rules.)

I frequently see both agents make wrong assumptions that inevitably take multiple turns of failing before they recognize the correct solution.

There can be something like a magnetic pull where, no matter how you craft the initial instructions, they will both independently have a (wrong) epiphany and ignore half of the requirements during implementation. It takes messing up once or twice for them to accept that their deep intuition from the training data is wrong and pivot. In those cases I find it takes less time to let that process play out vs. recrafting the perfect one-shot prompt over and over. Of course, once we've moved to a different problem, I would definitely dump that context ASAP.

(However, what is cool about working with LLMs, to counterbalance the petty frustrations that sometimes make it feel like a slog, is that they have extremely high familiarity with the jargon/conventions of that niche. I was expecting to have to explain a lot of the weird, too-clever-by-half abbreviations in the legacy VBA code from 2004 it has to integrate with, but it pretty much picks up on every little detail without explanation. It's always a fun reminder that they were created to be super translators, even within the same language, going from jargon -> business logic -> code that kinda works.)


A human would cross out that part of the worksheet, but an LLM keeps re-reading the wrong text.

I've never had a conversation like that — probably because I personally rarely use LLMs to actually generate code for me — but I've somehow subconsciously learned to do this myself, especially with clarifying questions.

If I find myself needing to ask a clarifying question, I always edit the previous message to ask the next question instead, because the models seem to always force what they said in their clarification into further responses.

It's... odd... to find myself conditioned, by the LLM, to the proper manners of conditioning the LLM.


It's not even a little bit of a joke.

Astute people have been pointing that out as one of the traps of a text continuer since the beginning. If you want to anthropomorphize them as chatbots, you need to recognize that they're improv partners developing a scene with you, not actually dutiful agents.

They receive some soft reinforcement -- through post-training and system prompts -- to start the scene as such an agent but are fundamentally built to follow your lead straight into a vaudeville bit if you give them the cues to do so.

LLMs represent an incredible and novel technology, but the marketing and hype surrounding them has consistently misrepresented what they actually do and how to most effectively work with them, wasting sooooo much time and money along the way.

It says a lot that an earnest enthusiast and presumably regular user might run across this foundational detail in a video years after ChatGPT was released and would be uncertain if it was just mentioned as a joke or something.


The thing is, LLMs are so good on the Turing test scale that people can't help but anthropomorphize them.

I find it useful to think of them like really detailed adventure games like Zork where you have to find the right phrasing.

"Pick up the thing", "grab the thing", "take the thing", etc.


> LLMs are so good on the Turing test scale that people can't help but anthropomorphize them.

It's like Turing never noticed how people look at gnarly trees in the dark and think they're human.


AI Dungeon 2 was peak AI.

> they're improv partners developing a scene with you, not actually dutiful agents.

Not only that, but what you're actually "chatting to" is a fictional character in the theater document which the author LLM is improvising add-ons for. What you type is being secretly inserted as dialogue from a User character.


Spoiler: the marketing around them has not misrepresented them without reason: it's the most effective market and game-theory design for a company to get training data for its AIs.

> they're improv partners developing a scene with you

That's probably one of the best ways to describe the process, it really is exactly that. Monkey see, monkey do.


It seems to me that even if AI technology were to freeze right now, one of the next moderately-sized advances in AI would come from better filtering of the input data. Remove the input data in which humanity teaches the AI to play games like this and the AI would be much less likely to play them.

I very carefully say "much less likely" and not "impossible" because with how these work, they'll still pick up subtle signals for these things anyhow. But, frankly, what do we expect from simply shoving Reddit probably more-or-less wholesale into the models? Yes, it has a lot of good data, but it also has rather a lot of behavior I'd like to cut out of my AI.

I hope someone out there is playing with using LLMs to vector-classify their input data, identifying things like the "passive-aggressive" portion of the resulting vector spaces, and trying to remove it from the input data entirely.
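
Concretely, a rough sketch of that kind of filter, assuming a sentence-transformers embedding model, a small hand-labeled seed set, and a similarity threshold; the model name, seed examples, and threshold here are placeholders, not anything a lab has published:

    # Sketch: score corpus snippets by similarity to a hand-labeled
    # "passive-aggressive" seed set in embedding space, then drop the worst offenders.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

    seed_passive_aggressive = [
        "Fine, whatever you say, I'm sure you know best.",
        "I'd say I'm sorry, but I'm really not.",
        "Sure, I'll get right on that... eventually.",
    ]
    centroid = model.encode(seed_passive_aggressive, normalize_embeddings=True).mean(axis=0)
    centroid /= np.linalg.norm(centroid)

    def filter_corpus(snippets, threshold=0.45):
        """Keep only snippets whose cosine similarity to the seed centroid is low."""
        embeddings = model.encode(snippets, normalize_embeddings=True)
        scores = embeddings @ centroid  # cosine similarity, since vectors are unit-norm
        return [s for s, score in zip(snippets, scores) if score < threshold]

In practice you'd want a properly trained classifier and human review of what gets dropped, but the shape of the pipeline is roughly this.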


I think part of the problem is that you need a model to classify the data, which needs to be trained on data that wasn't classified (or a dramatically smaller set of human-classified data), so it's effectively impossible to escape this sort of input bias.

Tangentially, I'd be far from the first to point out that these LLMs are now polluting their own training data, which makes filtering simultaneously all the more important and impossible.


I keep hearing this non sequitur argument a lot. It's like saying "humans just pick the next word to string together into a sentence, they're not actually dutiful agents". The non sequitur is in assuming that somehow the mechanism of operation dictates the output, which isn't necessarily true.

It's like saying "humans can't be thinking, their brains are just cells that transmit electric impulses". Maybe it's accidentally true that they can't think, but the premise doesn't necessarily lead logically to that conclusion.


There's nothing said here that suggests they can't think. That's an entirely different discussion.

My comment is specifically written so that you can take it for granted that they think. What's being discussed is that if you do so, you need to consider how they think, because this is indeed dictated by how they operate.

And indeed, you would be right to say that how a human thinks is dictated by how their brain and body operate as well.

Thinking, whatever it's taken to be, isn't some binary mode. It's a rich and faceted process that can present and unfold in many different ways.

Making the best use of anthropomorphized LLM chatbots comes from accurately understanding the specific ways that their "thought" unfolds and how those idiosyncrasies will impact your goals.


No it’s not like saying that, because that is not at all what humans do when they think.

This is self-evident when comparing human responses to problems with those of LLMs, and you have been taken in by the marketing of 'agents' etc.


You've misunderstood what I'm saying. Regardless of whether LLMs think or not, the sentence "LLMs don't think because they predict the next token" is logically as wrong as "fleas can't jump because they have short legs".

> the sentence "LLMs don't think because they predict the next token" is logically as wrong

It isn't, depending on the definition of "THINK".

If you believe that thought is the process whereby an agent with a world model takes in input, analyses the circumstances, predicts an outcome, and modifies its behaviour based on that prediction, then the sentence "LLMs don't think because they predict a token" is entirely correct.

They cannot have a world model; they could in some way be said to receive sensory input through the prompt. But they are neither analysing that prompt against their own subjectivity, nor predicting outcomes, coming up with a plan, or changing their action/response/behaviour because of it.

Any definition of "think" that requires agency or a world model (which as far as I know is all of them) would exclude an LLM by definition.


I think Anthropic has established that LLMs have at least a rudimentary world model (regions of tensors that represent concepts and relationships between them) and that they modify behavior due to a prediction (putting a word at the end of the second line of a poem based on the rhyme they need for the last). Maybe they come up short on 'analyzing the circumstances'; not really sure how to define that in a way that is not trivial.

This may not be enough to convince you that they do think. It hasn't convinced me either. But I don't think your confident assertions that they don't are borne out by any evidence. We really don't know how these things tick (otherwise we could reimplement their matrices in code and save $$$).

If you put a person in charge of predicting which direction a fish will be facing in 5 minutes, they'll need to produce a mental model of how the fish thinks in order to be any good at it. Even though their output will just be N/E/S/W, they'll need to keep track internally of how hungry or tired the fish is. Or maybe they just memorize a daily routine and repeat it. The open question is what needs to be internalized in order to predict ~all human text with a low error rate. The fact that the task is 'predict next token' doesn't tell us very much at all about the internals. The resulting weights are uninterpretable. We really don't know what they're doing, and there's no fundamental reason it can't be 'thinking', for any definition.


> I think Anthropic has established that LLMs have at least a rudimentary world model

It's unsurprising that a company heavily invested in LLMs would describe clustered information as a world model, but it isn't one. Transformer models, whether for video or text, don't have the kind of machinery you would need for a world model. They can mimic some level of consistency as long as the context window holds, but that disappears the second the information leaves that space.

In terms of human cognition, it would be like the difference between short-term memory, long-term memory, and being able to see the stuff in front of you. A human can instinctively know the relative weight, direction, and size of objects, and if a ball rolls behind a chair, you still know it's there 3 days later. A transformer model cannot do any of those things; at best it can remember the ball behind the chair until enough information comes in to push it out of the context window, at which point it cannot reappear.

> putting a word at the end of the second line of a poem based on the rhyme they need for the last)

That is the kind of work that exists inside its context window. Feed it a 400-page book, which any human could easily read, digest, parse, and understand; have it do a single read and ask it questions about different chapters. You will quickly see it make shit up that fits the information given previously and not the original text.

> We really don't know how these things tick

I don't know enough about the universe either. But if you told me that there are particles smaller than the Planck length and others that went faster than the speed of light, then I would tell you that it cannot happen due to the basic laws of the universe. (I know there are studies on FTL neutrinos and dark matter, but in general terms, if you said you saw carbon going FTL I wouldn't believe you.)

Similarly, transformer models are cool, and emergent properties are super interesting to study in larger data sets. Adding tools on the side for deterministic work helps a lot, and agentic multimodal use is fun. But a transformer does not and cannot have a world model as we understand it; Yann LeCun left Facebook because he wants to work on world-model AIs rather than transformer models.

> If you put a person in charge of predicting which direction a fish will be facing in 5 minutes,

What that human will never do is think the fish is gone because it went inside the castle and they lost sight of it. A transformer would.


Anthropic may or may not have claimed this was evidence of a world model; I'm not sure. I say this is a world model because it is objectively a model of the world. If your concept of a world model requires something else, the answer is that we don't know whether they're doing that.

Long-term memory and object permanence don't seem necessary for thought. A 1-year-old can think, as can a late-stage Alzheimer's patient. Neither could get through a 400-page book, but that's irrelevant.

Listing human capabilities that LLMs don't have doesn't help unless you demonstrate these are prerequisites for thought. Helen Keller couldn't tell you the weight, direction, or size of a rolling ball, but this is not relevant to the question of whether she could think.

Can you point to the speed-of-light analogy laws that constrain how LLMs work in a way that excludes the possibility of thought?


> I say this is a world model because it is a objectively a model of the world.

A world model in AI has a specific definition: an internal representation that the AI can use to understand and simulate its environment.

> Long-term memory and object permanence don't seem necessary for thought. A 1-year-old can think, as can a late-stage Alzheimers patient

Both of those cases have long-term memory and object permanence; they also have a developing memory or memory issues, but the issues are not constrained by their context window. Children develop object permanence in the first 8 months, and, similar to distinguishing between their own body and their mother's, that is them developing a world model. Toddlers are not really thinking; they are responding to stimulus: they feel hunger, they cry. They hear a loud sound, they cry. It's not really them coming up with a plan to get fed or get attention.

> Listing human capabilities that LLMs don't have doesn't help unless you demonstrate these are prerequisites for thought. Helen Keller couldn't tell you the weight, direction, or size of a rolling ball

Helen Keller had an understanding in her mind of what different objects were; she started communicating because she understood the word "water" as her teacher traced it on her palm.

Most humans have multiple sensory inputs (sight, smell, hearing, touch); she only had one, which is perhaps closer to an LLM. But capacities she had that LLMs don't have are agency, planning, long-term memory, etc.

> Can you point to the speed-of-light analogy laws that constrain how LLMs work in a way that excludes the possibility of thought?

Sure, let me switch the analogy if you don't mind. In the Chinese room thought experiment, we have a man who gets a message, opens a Chinese dictionary, and translates it perfectly word by word, and the person on the other side receives and reads a perfect Chinese message.

The argument usually revolves around whether the person inside the room "understands" Chinese if he is capable of producing perfect 1:1 Chinese messages.

But an LLM is that man, and what you cannot argue is that the man is THINKING. He is mechanically going to the dictionary and returning a message that can pass as human-written because the book is accurate (if the vectors and weights are well tuned). He is not an agent, he simply does; he is not creating a plan or doing anything beyond transcribing the message as the book demands.

He doesn't have a mental model of the Chinese language; he cannot formulate his own ideas or execute a plan based on predicted outcomes; he can do nothing but perform the job perfectly and boringly as per the book.


> But an LLM is that man

And the common rebuttal is that the system -- the room, the rules, the man -- understands Chinese.

The system in this case is the LLM. The system understands.

It may be a weak level of understanding compared to human understanding. But it is understanding nonetheless. Difference in degree, not kind.


> not at all what humans do when they think.

The parent commenter should probably square with the fact that we know little about our own cognition, and it's really an open question how it is that we think.

In fact, it's theorized that humans think by modeling reality, with a lot of parallels to modern ML: https://en.wikipedia.org/wiki/Predictive_coding


That's the issue, we don't really know enough about how LLMs work to say, and we definitely don't know enough about how humans work.

We absolutely do; we know exactly how LLMs work. They generate plausible text from a corpus. They don't accurately reproduce data/text, they don't think, they don't have a worldview or a world model, and they sometimes generate plausible yet incorrect data.

How do they generate the text? Because to me it sounds like "we know how humans work, they make sounds with their mouths, they don't think, have a model of the world..."

> The non sequitur is in assuming that somehow the mechanism of operation dictates the output, which isn't necessarily true.

Where does the output come from if not the mechanism?


So you agree humans can't really think because it's all just electrical impulses?

Human "thought" is the way it is because "electrical impulses" (wildly inaccurate description of how the brain works, but I'll let it pass for the sake of the argument) implement it. They are its mechanism. LLMs are not implemented like a human brain, so if they do have anything similar to "thought", it's a qualitatively different thing, since the mechanism is different.

Mature sunflowers reliably point due east; needles on a compass point north. They implement different things using different mechanisms, yet are really the same.

You can get the same output from different mechanisms, as in your example. Another example would be that it's equally possible to quickly do addition on a modern pocket calculator and on an arithmometer, despite them being fundamentally different. However:

1. You can infer the output from the mechanism. (Because it is implemented by it).

2. You can't infer the mechanism from the output. (Because different mechanisms can easily produce the same output).

My point here is 1, in response to the parent commenter's "the mechanism of operation dictates the output, which isn't necessarily true". The mechanism of operation (whether of LLMs or sunflowers) absolutely dictates their output, and we can make valid inferences about that output based on how we understand that mechanism operates.


> yet are really the same.

This phrase is meaningless. The definition of magical thinking is saying that if birds fly and planes fly, birds are planes.

Would you complain if someone said that sunflowers are not magnetic?


I never got the impression they were saying that the mechanism of operation dictates the output. It seemed more like they were making a direct observation about the output.

You have to curate the LLM's context. That's just part and parcel of using the tool. Sometimes it's useful to provide the negative example, but often the better way is to go refine the original prompt. Almost all LLM UIs (chatbot, code agent, etc.) provide this "go edit the original thing" feature because it is so useful in practice.

It's kind of funny how not a lot of people realize this.

On one hand this is a feature: you're able to "multishot prompt" an LLM into providing the wanted response. Instead of writing a meticulous system prompt where you explain in words what the system has to do, you can simply pre-fill a few user/assistant pairs, and it'll match the pattern much more easily!
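
A minimal sketch of that pre-filling trick, assuming an OpenAI-style chat-completions client; the model name and the example turns are made up for illustration:

    # "Multishot" prompting: instead of a long system prompt, pre-fill a few
    # fabricated user/assistant turns that demonstrate the desired behavior,
    # then append the real question.
    from openai import OpenAI

    client = OpenAI()

    few_shot_turns = [
        {"role": "user", "content": "Summarize: The deploy failed because an env var was missing."},
        {"role": "assistant", "content": "- Deploy failed\n- Cause: missing env var"},
        {"role": "user", "content": "Summarize: Latency doubled after the cache layer was removed."},
        {"role": "assistant", "content": "- Latency doubled\n- Cause: cache layer removed"},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You summarize incidents as terse bullet points."},
            *few_shot_turns,
            {"role": "user", "content": "Summarize: The DB migration locked the users table for 20 minutes."},
        ],
    )
    print(response.choices[0].message.content)

The fabricated assistant turns work exactly like the mistake/scold pattern above, just pointed in a useful direction.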

I always thought Gemini Pro was very good at this. When I wanted a model to "do by example", I mostly used Gemini Pro.

And that is ALSO Gemini's weakness! Because as soon as something goes wrong in Gemini-CLI, it'll repeat the same mistake over and over again.


And that’s why you should always edit your original prompt to explicitly address the mistake, rather than replying to correct it.
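
To make the difference concrete, here's a sketch of the two context shapes as plain message lists (no particular SDK; the contents are hypothetical):

    # Approach 1: reply to correct -- the wrong turn stays in the context window
    # as a pattern the model can keep continuing.
    corrective_thread = [
        {"role": "user", "content": "Write a function that parses the config file."},
        {"role": "assistant", "content": "<code that assumes the config is JSON>"},
        {"role": "user", "content": "No, the config is TOML, not JSON. Try again."},
    ]

    # Approach 2: edit the original prompt -- the mistake never enters the context.
    edited_thread = [
        {"role": "user", "content": "Write a function that parses the config file. "
                                    "Note: the config is TOML, not JSON."},
    ]

The second thread never contains a wrong assistant turn for the model to pattern-match on.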

At some point, if someone mentions they have trouble cooperating with AI, it might be a huge interpersonal red flag, because it indicates they can't talk to a person in reaffirming and constructive ways that build you up rather than put you down.

Watching other people interact with a chat bot is a shockingly intimate look into their personality.

You can analyze this in various ways. At the "next token predictor" level of abstraction, LLMs learn to predict structure ("hallucinations" are just mimicking the style/structure but not the content), so at the structural level a conversation with mistake/correction/mistake/correction is likely to be followed by another mistake.

At the "personality space" level of abstraction, via RLHF the LLM learns to play the role of an assistant. However as seen by things such as "jailbreaks", the character the LLM plays adapts to the context, and in a long enough conversation the last several turns dominate the character (this is seen in "crescendo" style jailbreaks, and also partly explains LLM sycophancy as the LLM is stuck in a feedback loop with the user). From this perspective, a conversation with mistake/correction/mistake/correction is a signal that the assistant is pretty "dumb", and it will dutifully fulfill that expectation. In a way it's the opposite of the "you are a world-class expert in coding" prompt hacks.

Yet another way to think about it is at the lowest, attention-score level: all the extra junk in the context is stuff that needs to be attended to, and when most of that stuff is incorrect, it's likely to "poison" the context and skew the logits in a bad direction.


Maximizing token usage for the token seller is a clear path to profitability /s

Actually, wait, is that why LLMs are so wordy?


Unlikely, because the free version of ChatGPT isn't really making them any money, so fewer tokens is actually better, which I assume is why Anthropic pushes Haiku models on free users; they're not just more quantized but also less wordy.


