
Anecdotally, I often find the internal monologues to be nonsense.

I once asked it why a rabbit on my lawn liked to stay in the same spot.

One of the internal monologues was:

> I'm noticing a fluffy new resident has taken a keen interest in my lawn. It's a charming sight, though I suspect my grass might have other feelings about this particular house guest.

It obviously can’t see the rabbit on my lawn. Nor can it be charmed by it.



It’s just doing exactly what it’s designed to do: generate text that’s consistent with its prompts.

People often seem to get confused by all the anthropomorphizing that’s done about these models. The text it outputs that’s labeled “thinking” is not thinking; it’s text generated in response to its prompts, just like any other text produced by the model.

That text can help the model reach a better result because it becomes part of the prompt, giving it more to go on, essentially.

In that sense, it’s a bit like a human thinking aloud, but crucially, as your example shows, it’s not based on the model’s “experience”; it’s based on what the model statistically predicts a human might say under those circumstances.
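
To make the mechanism concrete, here’s a minimal sketch of how a “thinking” pass works, assuming a hypothetical generate() function that does plain text completion (the name and prompt wording are placeholders, not any real API): the model’s own “thinking” output is simply appended to the context before the final answer is sampled, so it conditions later tokens without involving any perception or experience.

    # Hypothetical sketch; generate() is a placeholder for plain text completion.
    def answer_with_thinking(generate, question):
        # First pass: the "thinking" text is ordinary sampled text,
        # prompted by an instruction to reason before answering.
        thinking = generate(f"Question: {question}\nThink step by step:")
        # Second pass: that text is fed back in as part of the prompt,
        # so the final answer is conditioned on it.
        answer = generate(
            f"Question: {question}\n"
            f"Thinking: {thinking}\n"
            f"Answer:"
        )
        return answer

The “thinking” is useful only insofar as it makes the final prompt more informative; nothing in the loop requires it to reflect anything the model actually observed.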



