
People can claim whatever they like. That doesn't mean it's a good or reasonable hypothesis (especially for one that is essentially unfalsifiable, like predictive coding).

The problem is that we don't have a good understanding of what "thinking" really is, and those parts of it we think we do understand involve simple things done at scale (electrical pulses along specific pathways, etc.).

It is not unreasonable to suspect differences between humans and LLMs are differences in degree, rather than category.


I'm not trying to advance a testable hypothesis. If you think the unfalsifiability of my claim is a problem, you haven't understood what I'm trying to do.

My claim is that the two concepts are indistinguishable, thus equivalent. The unfalsifiability is what makes it a natural equivalence, the same as in the other examples I gave.


IMHO, you should. The opponent does not have an alternative definition of thinking with predictive power matching that of token prediction. Whatever they think thinking is, it is a strictly worse scientific theory.

The crucial difference is that we know the etiology of COVID and so are justified in treating those two people as having the same disease. Autism is much more complicated because we don't have anything to define it by other than a bunch of disparate symptoms.

It might turn out to be like treating the cold, COVID, tuberculosis, and lung cancer as the same thing because they all involve coughing.


We know that autism has a strong genetic link.

Furthermore, we employ differential diagnosis and check whether your symptoms could be better explained by another condition. You don't just diagnose people with autism because they have a few symptoms.

Furthermore, autistic people can generally relate to each other. Even if two autistic people show very different symptoms, there is often a feeling of belonging together.

It is always possible that we will learn more in the future; maybe we will have other diagnostic criteria, or discover that some people currently diagnosed with autism would fit better under something else.

However, the current diagnostic criteria for ASD are the current state of our scientific knowledge. A lot of clinical research is baked into them.


> It might turn out to be like treating the cold, COVID, tuberculosis, and lung cancer as the same thing because they all involve coughing.

Well there are 200 different viruses that cause the common cold. We lump them together because they all involve coughing.

Basically all cancers are unique, even for the same type of cancer. Again, these are lumped due to shared symptoms.

Tuberculosis is caused by 9 different species of bacteria, but these are at least related species.

COVID is basically the same disease as SARS, just caused by a particular strain of coronavirus, though that strain diverged into multiple ones that now produce quite different symptoms. In addition to SARS, other coronaviruses are among those 200 that cause the common cold.

Diseases are historical groupings that someone at some point thought would be useful, nothing more.


You can do a lot with little; it just requires investing more in development, which understandably most companies are uninterested in. Besides, plenty of websites are bloated as all hell. Why does a newspaper website, for example, have to be much more than plain HTML?

Newspaper websites are a good example of bloat, true. I think if you're in the business of primarily serving text content and not doing much interactive stuff, you don't need a heavy site. A lot of them tend to cram their websites with trackers and ads, and I guess that's a business thing.

Tbh, it's unpopular around HN, but I felt like AMP was a great experience for users. AMP pages were super fast, had no annoying banners, and had none of my pet peeve: layout shift.


Why on earth wouldn't it be interesting? Do you only care about your own life?

Shhhhh no one cares about data contamination anymore.

Then write something down yourself and upload a picture to gemini.google.com or ChatGPT. Hell, combine it: make yourself a quick math test, print it, solve it with a pen, and ask these models to correct it.

They're very good at it.
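
Here's a minimal sketch of the same experiment done programmatically rather than through the web UI, assuming the OpenAI Python SDK and a vision-capable model; the model name, file path, and prompt are placeholder assumptions, not anything from the original comment:

    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Photo of the handwritten, pen-solved math test (placeholder filename)
    with open("math_test.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable model should do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Correct this handwritten math test and point out any mistakes."},
                # The image goes inline as a base64 data URL, so nothing needs hosting
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)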


I don't know how to write like a 19th-century mathematician, nor anyone earlier. I'm not sure OCR on Carolingian minuscule has been solved, let alone more ancient styles like Roman cursive or, god forbid, things like cuneiform. Especially since the corpora for these styles are so small, dataset contamination /is/ a major issue!

For that to be relevant to this post, they would need to write in secretary hand.

I'm not sure that we can say that feelings are learned.

When you get burned, you learn to fear fire.

Sure, humans come with some baked in weights, but others are learned.


I think the associations are learned, but not the feelings themselves.

Like some people feel great joy when an American flag burns, while others feel upset.

If you accidentally delete a friend's hard drive, you'll feel sad, but if you were intentionally sabotaging a company, you'll feel proud of the success.

I.e., joy and happiness are innate, not learned.


See how clinical socio- and psychopaths behave. They only emulate feelings (particularly when it's convenient for them), but they don't have the capacity to feel in their brains. The same is true for LLMs.

This is a Hollywood-level pop-science view. Real people are vastly more complicated and do in fact feel things, even if not in a "normal" way.

They obviously have the puzzles in the training data; why are you acting like this is uncertain?

>Learning a second language let me notice how much of language has no content.

What on earth do you mean?


I see what you did there. :)

I don't know how Fedorenko squares this view with her own work, which directly contradicts it [1]. In that work, they find that the language network is activated by "meaningful" non-linguistic stimuli such as the sounds of someone getting ready in the morning (e.g. yawning, brushing teeth, etc.). It seems entirely contrary to her arguments in this article, and she doesn't even acknowledge it.

[1] https://direct.mit.edu/nol/article/5/2/385/119141


If teachers made as much as half the people on this site, perhaps things would be better. 90k in San Ramon is more or less the median wage [1]. It's not _that_ much money.

[1] https://en.wikipedia.org/wiki/San_Ramon,_California#2020_cen...


Who knows? Maybe with the way AI is going that will be considered a lot of money compared to what people earn on this site.

As in, what people generally earn on this site will crash way down and be outsourced to these models. I'm already seeing it personally from a social perspective: as a SWE, most people I know (including teachers in my circle) look at me like my days are numbered "'cause of AI".


When I said government employees make above market, I didn't mean relative to the general area average; I meant for the work they do.

Should a city landscape truck driver make $250k because his truck drives around a rich town? No, he should make what other people in this kind of industry make.

