
I think it's a beautiful and educational article. Dismissing it because you used LLMs to help write it isn't rational.

insane minxes

Chess geometry is not the same as physical geometry. See, e.g., https://en.wikipedia.org/wiki/R%C3%A9ti_endgame_study

Indeed, it's not even the same between pieces!

Kings have Chebyshev geometry while Rooks have taxicab geometry: https://en.wikipedia.org/wiki/Taxicab_geometry#See_also

It's left as an exercise for the reader to figure out the geometry of the remaining pieces.


Rooks don't have taxicab geometry. Their metric space is bounded (diameter 2) even on an infinite board. I think you're thinking of the wazir: https://en.wikipedia.org/wiki/Wazir_(chess)

https://en.wikipedia.org/wiki/Chebyshev_distance for kings, but on a clear board the rook distance between any two squares is 1 or 2, whereas the taxicab distance can be as much as 14.
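
For concreteness, here is a minimal Python sketch (mine, not from the thread) of the three metrics being compared: Chebyshev distance for the king, taxicab distance for the wazir, and move count for a rook on a clear board. The function names and the (file, rank) coordinate convention are my own.

    # King: Chebyshev distance -- one step in any of 8 directions per move.
    def king_distance(a, b):
        return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

    # Wazir: taxicab distance -- one step orthogonally per move.
    def wazir_distance(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    # Rook on a clear board: 0 moves if the squares coincide,
    # 1 if they share a file or rank, otherwise 2.
    def rook_distance(a, b):
        if a == b:
            return 0
        return 1 if a[0] == b[0] or a[1] == b[1] else 2

    corner_a, corner_b = (0, 0), (7, 7)
    print(king_distance(corner_a, corner_b))   # 7
    print(wazir_distance(corner_a, corner_b))  # 14
    print(rook_distance(corner_a, corner_b))   # 2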

facetious

[I won't bother responding to the rest of your appalling comment]


Indeed.

"Closed as per WP:SNOW. There is no indication whatsoever that there is consensus to change the status of the BBC as a generally reliable source, neither based on the above discussion nor based on this RfC".

Wikipedians know a troll when they see one.


That's the fallacy of denying the antecedent. You are inferring from the fact that airplanes really fly that AIs really think, but it's not a logically valid inference.
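
For reference, here is my own gloss (not part of the original exchange) of the textbook schema of the named fallacy, with P and Q as placeholder propositions:

    % Denying the antecedent: an invalid inference form.
    % From "P implies Q" and "not P", the conclusion "not Q" does not follow.
    \[
      P \rightarrow Q,\quad \neg P \;\not\vdash\; \neg Q
    \]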

"Observing a common (potential) failure mode"

That's not what we have here.

"It is only a fallacy if you "P, therefore C" which GP is not (at least to my eye) doing."

Some people are willfully blind.


Observing a common (potential) failure mode is not equivalent to asserting a logical inference. It is only a fallacy if you "P, therefore C" which GP is not (at least to my eye) doing.

Yeah, at that point you're just arguing semantics.

Thermometers and human brains are both mechanisms. Why would one be capable of measuring temperature and the other capable of learning abstract thought?

> If it turns out that LLMs don't model human brains well enough to qualify as "learning abstract thought" the way humans do, some future technology will do so. Human brains aren't magic, special or different.

Google "strawman".


Internal monologue is like a war correspondent's report of the daily battle. The journalist didn't plan or fight the battle; they just provided an after-the-fact description. Likewise, the brain's thinking--a highly parallelized process involving billions of neurons--is not done with words.

Play a little game of "what word will I think of next?" ... just let it happen. Those word choices are fed to the monologue; they aren't a product of it.


What does that have to do with the claim? It is very unlikely that 38% of Stanford students are actually disabled, and your success has nothing whatsoever to do with that.

Indeed. It's bizarre that some people attempt to rationalize such things.

It’s probably motivated reasoning.
