
Interesting topic, but a lot of the articles on this blog read like undeclared LLM slop:

> This is also why I believe that language is a bottleneck for thought. Most of what you remember is nothing like an approximate copy of the things you experienced in real life—even in the specific case of text, memory is not even remotely like a paraphrase of previously read words. Many of our thoughts happen in a highly abstracted and distilled form, interacting and connecting with each other as a network that simply cannot be faithfully converted into a sequence of words, however long. The fact that people can fail even at something as basic as sketching a kanji or a vehicle they've seen hundreds of times before is just another example of the same phenomenon.

A pet peeve of mine is when someone uses their personal data points to generalize to all of humanity. Every sentence here should say "for me".

What they said doesn't even support the idea that language is a bottleneck for thought; it actually argues against it. If language can't capture the complexity of thought, then thought isn't being constrained by language, which is the opposite of language being a bottleneck for thought.

I read the linked article https://aethermug.com/posts/the-beautiful-dissociation-of-th... and it is clearly composed of a lot of LLM slop.

“In techie terms, the Chinese script doesn't support the structure of languages like English and Japanese. It doesn't have what it takes.”

That's not techie; that's LLM slop.

“Sometimes, instead of using them for their meaning, they used them for (gasp!) their pronunciation. By ignoring the original content of a kanji, they could string them together to form almost any sound.”

I'm becoming allergic (gasp!) to this kind of writing.
