Hacker News

The reason I called out children who gain vision late is that I think people might dismiss babies as just taking a while for their brains to fully form, the same way it takes a while for their skulls to fuse.

> But they still don't require millions of training examples. At three months, with a training set restricted to only their immediate family, toddlers can reliably differentiate between faces and tables in different light, with different expressions/positions, without needing to first process millions of faces, tables and other objects.

In a single day I'm exposed to maybe 50 times the number of images ResNet trained on. Humans are bathed in a lot of data, and what BERT (and probably earlier models I don't know about) and now GPT have taught us is that unlabeled, uncurated data is worth more than we originally thought. It's probably right that humans are more sample efficient than AI for now, but I think you're doing the same thing I was critiquing above: narrowing the "training data" to only what seems important, when really an infant or adult human receives a great deal more.

> There's literally zero experience, all there is in that brain, is instincts, not knowledge.

Sorry, this is meant to say that the brains are the result of millions of years, and that those millions of years were filled with lifetimes, not the brains themselves. Though I think this might be a distinction without a difference. Babies are born with a crude swimming reflex. Obviously it's wrong to say that they themselves have experienced swimming, but I'm not sure it's wrong to say that their DNA has, and this swimming reflex is one of the scars that prove it.

> We are already using millions of times more resources than humans to get a worse result, why would using 100x more resources than we are currently using make a big difference

I think it's fairer to say we use around 200k times the resources, and even that is probably a vast overestimate. It's based on 480 hours to reach fluency in a foreign language, and multiplies that by 60 * 100 to approximate the number of words you would read. There are probably mistakes in both directions in this estimate: on one hand, no one starting out in a language is reading at 100 words a minute, but on the other hand, they are getting direct feedback from someone. If I had to guess, an accurate estimate would be closer to a 20k or even a 2k difference. But regardless, why do you assume needing more resources means it can't scale? There is some evidence for that: we've seen diminishing returns, and there just isn't another 100x of text data around.
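The arithmetic behind that ~200k figure can be made explicit. As a sketch (my assumptions, not exact figures): 480 hours of reading at 100 words per minute, compared against a GPT-3-scale training corpus of roughly 570 billion tokens, treating tokens as words:

```python
# Back-of-envelope: words a human reads to reach fluency vs. an LLM corpus.
# Assumptions: 480 hours of study at 100 words/minute; a corpus of ~570B
# tokens (roughly GPT-3's filtered dataset), with tokens treated as words.
HOURS_TO_FLUENCY = 480
WORDS_PER_MINUTE = 100
human_words = HOURS_TO_FLUENCY * 60 * WORDS_PER_MINUTE  # 2,880,000 words

CORPUS_TOKENS = 570e9
ratio = CORPUS_TOKENS / human_words

print(f"human words to fluency: {human_words:,}")
print(f"corpus-to-human ratio: ~{ratio:,.0f}x")  # on the order of 200,000x
```

Swapping in a smaller corpus estimate (say, ~300B tokens) drops the ratio to roughly 100k, which is why the figure is hedged as "around 200k and probably an overestimate."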

Overall, I think it's probably right that we won't hit human-level AI in the next 60 years, and certainly not with current architectures. But I think some of the motivation for this skepticism is the desire for there to be some magic spark that explains intelligence. Since we can sort of look inside the brain of ChatGPT and see it's all clockwork, and worse than that, statistical clockwork, we pull back and deny that it could possibly be responsible for what we see in humans, ignoring that we too are statistical clockwork. So I think it's unlikely but far from impossible, and we should continue scaling up current approaches until we really start hitting diminishing returns.


