On the other hand, a large part of the complexity of human hardware randomly evolved for survival and only recently started playing around in the higher-order intellect game. It could be that we don't need so many neurons just for playing intellectual games in an environment with no natural selection pressure.
Evolution is winning because it's operating at a much smaller scale than we are and needs less energy to achieve anything. Coincidentally, our own progress has also been tied to the rate of shrinking of our toys.
Evolution has won so far because it had a four billion year head start. In two hundred years, technology has gone from "this multi-ton machine can do arithmetic operations on large numbers several times faster than a person" to "this box produces a convincing facsimile of human conversation, but it only emulates a trillion neurons and they're not nearly as sophisticated as real ones."
I do think we probably need a new hardware approach to get to the human level, but it does seem like it will happen in a relative blink of an eye compared to how long the brain took.
But we don't even need a human brain. We already have those, they take months to grow, take forever to train, and are forever distracted. Our logic-based processes will keep getting smaller and less power hungry as we figure out how to implement them at ever smaller scales, and eventually we'll be able to solve problems with the same building blocks as evolution but in intelligent ways, in which LLMs will likely play only a minuscule part of the larger algorithms.
I think current LLMs are trying to poorly emulate several distinct systems.
They're not that great at knowledge (and we're currently wasting most of the neurons on memorizing Common Crawl, which... have you looked at Common Crawl?)
They're not that great at determinism (a good solution here is to have the LLM write 10 lines of Python, run them, and feed the result back into the LLM. Then the task completes 100% of the time, and much more cheaply too).
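Roughly the loop I mean, as a minimal sketch; llm() here is a hypothetical stand-in for whatever model API you're calling, and the prompts are made up:

    import subprocess
    import sys

    def llm(prompt: str) -> str:
        """Hypothetical stand-in for a call to some LLM API."""
        raise NotImplementedError

    def solve(task: str) -> str:
        # Ask the model for a short script instead of a direct answer.
        code = llm(f"Write a short Python script that prints the answer to: {task}")
        # Run it: the computation itself is now deterministic and cheap.
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True, timeout=10)
        # Feed the concrete output back in for the final reply.
        return llm(f"The script printed {result.stdout!r}. Answer the task: {task}")

The LLM handles the fuzzy translation at both ends; the middle step is ordinary code that runs the same way every time.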
They're not that great at complex rules (surprisingly good actually, but expensive and flaky). Often we are trying to simulate what are basically 50 lines of Prolog with a trillion params and 50KB of vague English prompts.
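For a sense of what those "50 lines of Prolog" buy you, here is the same idea as a toy forward-chaining rule engine (Python here rather than actual Prolog; the facts and rules are made up for illustration):

    # Crisp rules applied to a fixed point: what a trillion-parameter model
    # is often asked to approximate from a vague English prompt.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def grandparent_rule(facts):
        # (parent X Y) and (parent Y Z) => (grandparent X Z)
        parents = [f for f in facts if f[0] == "parent"]
        return {("grandparent", x, z)
                for (_, x, y) in parents
                for (_, y2, z) in parents
                if y == y2}

    # Apply the rule repeatedly until no new facts appear.
    while True:
        new = grandparent_rule(facts) - facts
        if not new:
            break
        facts |= new

    print(("grandparent", "alice", "carol") in facts)  # True

Same answer every run, a few microseconds, no GPU.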
I think if we figure out what we're actually trying to do with these things, we can do each of those jobs properly, and the whole system will work a lot better.
>But we don't even need a human brain. We already have those, they take months to grow, take forever to train
This is a weird argument considering LLMs are built from the output of countless hours of human brains. That makes LLMs, almost by definition, far less efficient learners: they need the distilled output of millions of human-hours to approximate what a single brain picks up firsthand.
Not all artificial neural networks are LLMs. Machine learning can be applied to any kind of large data set, not just human prose, and will start finding useful patterns before a human brain has time to learn how many fingers it has.
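As a toy illustration of that point (the data below is synthetic, nothing real):

    import numpy as np

    # Noisy readings generated by a hidden linear rule: no prose involved.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=10_000)

    # Plain least squares recovers the hidden pattern from the numbers alone.
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    print(coef.round(2))  # roughly [ 2.  -1.   0.5  0. ]

Nothing about this depends on the data being human language.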
We use logic to design our technology, but evolution does it by literally shaking all the atoms into place, no design involved. Our brains were created randomly.
It appears logical, but it falls to the same objection. We say it has a function, which sounds logical, but it gets used for things we may or may not have planned for. That's evolution. The logic is a pretext.