
There’s a misunderstanding here—

I’m not claiming LLMs are sentient. I’m not claiming they are even similar.

What I am pushing back against is the confidence with which people so blatantly claim we are dissimilar.

It’s an important distinction, and I’ve yet to see solid evidence to suggest it’s a point we shouldn’t even explore.

What I see so often are comments stating things like “an LLM is just pattern matching” or “it’s a prediction machine”.

And I’m not arguing that’s untrue; what I’m arguing is: how can anyone say a human is inherently different?

I admit that I’m taken with these latest advancements. It’s why, yes, you see my tech-bro crazy-person comments in these threads a lot! But I’m genuinely fascinated by this stuff.

I have had a huge interest in the human mind for a decade. I’ve read the works of folks like Anil Seth and others who work in cognitive and computational science, and I’m increasingly intrigued by the things we might learn about our own selves via these technological advancements.

Again, I’m not claiming sentience for an LLM, or anything that says “we are the same as an LLM.” I’m simply arguing that our own minds are a black box, as is how an LLM arrives at an output; it’s the confidence with which folks claim they know what’s inside either that I’m pushing back against.



Thanks for the reply; I also saw your other reply to a similar comment after I'd posted mine, and your position seems more reasonable than I first understood it to be.



