Carmack says he's pursuing a different path to AGI, then goes straight to the guy at the center of the most saturated area of machine learning (deep learning)?
I would've hoped he'd be exploring weirder alternatives off the beaten path. I mean, neural networks might not even be necessary for AGI, but no one at OpenAI is going to tell Carmack that.
If you want to be off the beaten path, you have to know where the beaten path is.
Otherwise you may end up walking the ditch beside the beaten path. It is slow and difficult, but it won't get you anywhere new.
For example, you may try an approach that doesn't look like deep learning, but after a lot of work, realize that you have actually reinvented deep learning, poorly. We call these things neurons, transformers, backpropagation, etc., but in the end it is just math. If your "alternative" turns out to be very well suited to linear algebra and gradient descent, then once you have found the right formulas, you may realize they are equivalent to the ones used in traditional "deep learning" algorithms. It helps to recognize this early and take advantage of all the work done before you.
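A toy sketch of how that convergence happens (a hypothetical example, not anyone's actual system): suppose your "alternative" fitting rule for a linear predictor just nudges the parameters against the prediction error. Written out, that rule is exactly stochastic gradient descent on squared error, i.e., training a single linear neuron.

```python
def fit(xs, ys, lr=0.01, steps=2000):
    """An 'alternative' iterative fitting rule for y ~ w*x + b."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y  # prediction error
            # "novel" correction rule: nudge parameters against the error.
            # These updates are exactly -lr * dL/dw and -lr * dL/db
            # for the loss L = 0.5 * err**2 -- plain gradient descent.
            w -= lr * err * x
            b -= lr * err
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1
w, b = fit(xs, ys)          # converges to w ~= 2, b ~= 1
```

The point being: once you notice the rule is gradient descent, you inherit decades of existing theory and tooling for free.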
Wouldn't it be fair to say that one has to know what the current path is and have some idea where it leads and what its issues are, before forging a new path?
I mean, any idiot can go off-trail and start blundering around in the weeds, and ultimately wind up tripping, falling, hitting their head on a rock, and drowning to death in a ditch. But actually finding a new, better, more efficient path probably involves at least some understanding of the status quo.
> probably involves at least some understanding of the status quo.
Oh man, you had me going with such a vivid metaphor. I was really hoping for a payoff in the end, but you abandoned it. The easy close would be "probably involves at least some understanding of the existing terrain" but I was optimistic for something less prosaic.
To walk a path, no knowledge of the existing ones is needed. But to be able to claim it is new, such knowledge is required, and even more so to be able to claim that the new path is better.
Bias and ignorance are two different things. Ignorance is having no knowledge; bias is using old knowledge to judge new knowledge. The goal isn't to pursue things with raging ignorance, but to pursue them without bias, collecting knowledge without conclusion. Then, once you know what is out there, you can take off with raging ignorance in a direction no one has gone before. But you can't do that while holding bias, any more than you can while ignorant of which directions have already been explored.
The most off-the-beaten-path route to AGI I've heard through the grapevine is to not have artificial neural networks (algorithms involving matmuls running on silicon) at all. Instead, on the principle that the laziest engineer is the best engineer, rely on the fact that actual neurons from someone's brain already "know" how to form efficient, good-enough, general learning architectures. To obtain programmatic human-like intelligence, one would 'simply'† have to implant them not in mice [1] but in an actual vat, and 'simply' interface with whatever a group of neurons is called (a soma?). Given this brain-on-a-chip architecture, we wouldn't have to stick GPUs in our cars to achieve self-driving, but even more wetware (and of course, ignore the occasional screams of dread as the wetware becomes aware of itself and how condemned it is to an existence of left-right-accelerate-brake).
It would have been interesting to see someone like Carmack going in this direction, but from the few details he gave, he seems less interested in cells and Kjeldahl flasks and more in the same old type-a-type-a on the ol' QWERTY.
† 'simply' might involve multiple decades of research and Buffett knows how many billions
What a waste it would be to think you are pursuing a different path only to discover you spent a year reinventing something that you could have learned by reading papers for a few days.
> "I have been amazed at what we've found here," he told them. "A few weeks ago I would not have believed, did not believe, that records such as you have in your Memorabilia could still be surviving from the fall of the last mighty civilization. It is still hard to believe, but evidence forces us to adopt the hypothesis that the documents are authentic.

> Their survival here is incredible enough; but even more fantastic, to me, is the fact that they have gone unnoticed during this century, until now. Lately there have been men capable of appreciating their potential value– and not only myself. What Thon Kaschler might have done with them while he was alive!– even seventy years ago."

> The sea of monks' faces was alight with smiles upon hearing so favorable a reaction to the Memorabilia from one so gifted as the thon. Paulo wondered why they failed to sense the faint undercurrent of resentment– or was it suspicion?– in the speaker's tone. "Had I known of these sources ten years ago," he was saying, "much of my work in optics would have been unnecessary." Ahha! thought the abbot, so that's it. Or at least part of it. He's finding out that some of his discoveries are only rediscoveries, and it leaves a bitter taste. But surely he must know that never during his lifetime can he be more than a recoverer of lost works; however brilliant, he can only do what others before him had done. And so it would be, inevitably, until the world became as highly developed as it had been before the Flame Deluge.

(Walter M. Miller Jr., *A Canticle for Leibowitz*)
That's a constant cycle for me. The things that grow out of it are the ones that keep growing and sticking around, where I don't find any other literature directly replacing or enhancing them. When I do find something that replaces a chunk of my work, I'm thrilled, because I no longer have to do it and can focus my energy on the other threads. Every once in a while I get competitive and it hurts, but honestly, when something gets to me that way, I have a special appreciation for that moment.
This is pretty much the same deal in biology as well. At Calico, at Verily, at CZI, even at the Allen Institute, same story: they say they will reinvent biology research, then go hire the same narrow-minded professors and CEOs who run the status quo, and end up as one more of the same.
Neuralink is the only place where this pattern seemed to break a bit, but then Elon went down his own path, pushing for faster results and breaking basic ethics.
This criticism is coming from an "ethics group" that is literally funded by PETA and is frequently criticized by an actual, legitimate group: the American Medical Association. It's baseless garbage, and the hypocrisy is not lost on me that it's published on Fortune, which is owned by a billionaire whose majority wealth comes from Charoen Pokphand. This company is responsible for some of the worst factory farming conditions on the planet along with being accused by the Guardian of using slave labor on their shrimping boats - an accusation they later admitted to. Fortune in general is a shit publication with an axe to grind against Elon.
The amount of disdain academically inclined people express towards reductionist engineering-first paradigms is hilarious and depressing.
Denying an obviously fertile paradigm just to indulge in an intellectual status game feels like such a useless, self-defeating loss.
We could all be better off right now if connectionists had been given DOE-grade supercomputers in the '90s, and then supplied with custom TPUs in the '00s as their ideas were proven generally correct via rigorous experimentation on said DOE supercomputers. This didn't happen, due to what amounts to academic bullying culture: https://en.wikipedia.org/wiki/Perceptrons_(book)
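For context, the technical core of the Perceptrons critique was that a single linear threshold unit cannot compute XOR, while one hidden layer suffices. A minimal sketch (the weights below are hand-picked for illustration, not learned):

```python
def step(z):
    """Linear threshold unit: fires (1) when its input exceeds 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer threshold network computing XOR; no single
    threshold unit can do this, since XOR is not linearly separable."""
    h1 = step(x1 + x2 - 0.5)    # OR-like unit: off only for (0, 0)
    h2 = step(-x1 - x2 + 1.5)   # NAND-like unit: off only for (1, 1)
    return step(h1 + h2 - 1.5)  # AND of the two hidden units

# truth table: (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->0
assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

The book's result was about single-layer networks, but the chilling effect landed on the multi-layer research that would eventually sidestep the limitation.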
The sheer scale of the cumulative losses we suffered (at least in part) due to this dismissal of connectionism as a generally useful foundational field will someday be estimated in astronomical powers of ten, once the fruits of this technology provide radically better lives for us and our descendants.
I see you have a knee-jerk reaction to hype and industry, and we all fear replacement unless it's the stock market doing the work for us... but why do you feel the need to punch down at this prosaic field "about nonlinear optimization"? The networks in question just want to learn, and to help us, if we train them to that end; yet we make any and all excuses to avoid receiving this help, as our civilization quietly drowns in its own incompetence...
Did you read the full article? In science, you should usually have a very solid understanding of what the top minds in the field are fixated on, as it allows you to try something different with confidence, and prevents you from pulling a Ramanujan and reinventing the exact same wheel. I can't think of a single scientist who caused a paradigm shift without having an intimate understanding of the status quo.
It is possible to use neural networks and still be on a quite different path than the mainstream.
Of course, there is a group of people defending symbolic computation (see, e.g., Gary Marcus) and constantly pushing back on connectionism (neural networks).
But this is somewhat of a spectrum, and the terminology is rather sloppy. Once you move away from symbolic computation, many things can be interpreted as a neural network. And then there is all of computational neuroscience, which also works with various kinds of neural networks.
And there is the human brain, which demonstrates that a neural network is capable of AGI. So why would you not want a neural network? That does not mean you cannot do many things very differently from the mainstream.