Hypothesizing that graphs could lead to AGI is tantamount to equating part of the neocortex to the whole body. A model does not make reality, especially when the two are designed to work in a permanent feedback tandem.
Since writing A Thousand Brains, Jeff Hawkins has revealed fascinating structures within the brain, a finite set of structure 'types' so to speak (families of similarly architectured brain parts).
Graphs are definitely part of the biological design, but in taking inspiration from nature to build our own beings, we should take notice that the real thing is vastly more complex, and investigate more exhaustively the ins and outs of real brain structures.
Your point about equating the neocortex to the whole body led me to write this out:
I don't think I have any basis for this beyond a gut feeling and some daydreams, but I suspect each of the major methodologies (reinforcement learning, transformers, graph NNs, etc.) needs to be combined into some larger ensemble, worked into a cohesive system with feedback loops for both online and offline learning, for a shot at AGI.
I've been doing ML projects for about six years, mostly in NLP, though I've dipped into reinforcement learning because it interests me. My gut feeling has long been that there are many complementary learning systems that handle different problems really well and cover for the limitations of the others, and I'd like to see what happens if we smash them together with the goal of beating baselines across as many benchmarks as possible.
This is exactly my personal intuition as well, almost to a T. Here's to the satisfying consilience of independent thinking reaching the same hypotheses.
I was reading through George Lakoff and Mark Johnson's Philosophy in the Flesh last night and had a very similar thought. Their model of embodied cognition is necessarily decentralized in a really interesting way.
Maybe we have a second brain. Or maybe some part of the brain is "irrational" and cannot be modeled. Or, strange as it sounds, even as we find more and more patterns in physical things and link them to what we perceive and do, there might be something else, and that something else might matter more than the rest.
Or maybe I just hope there is something there. Unique. Something that cannot be copied. It might not last, or be reborn, or reincarnate… in fact it might be better that way.
OK, continue the rationalist approach; a great find, no doubt. And in fact an NN may not even be fully rational, but rather a data-driven/probabilistic model. I just hope it is incomplete.
Theoretically, maybe, but I believe the devil lies in the details: in implementation, "biological hardware", if you will, is more akin to a thousand specialized sets of TPUs, whereas we're trying to brute-force/shoehorn all the processing into a suboptimal, one-size-fits-all giant RNN. The inadequacies and inefficiencies of such shoehorning might (I'd argue do) defeat the end goal of a coherent adaptive machine. I'm tempted to humorously say #NotAllParts (need the same underlying hardware optimizations) ;-)
Shower thought that just came up: observe that while we're endlessly chasing a bigger-than-reality String Theory, the working physics implemented in real-world machines follows the specialized approach of one partial-but-accurate theory per category of problems. We build hybrids because, as far as we can tell, reality is variation, and biology perhaps most of all. So in trying to build a being…
That is true, but unfortunately, "out of the box" they're not well suited to just being "fed into" an NN. Even if you think of the adjacency matrix as very similar to how the weights are laid out in a feed-forward neural network, you can't ignore that:
- in real life, graphs are not fixed
- you need to deal with the many different potential representations of the graph (permutation invariance)
- the nodes usually carry more features than a single scalar value
but this is definitely not the best explanation; I think this video does a much better job: https://youtu.be/JtDgmmQ60x8
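The permutation-invariance point above can be made concrete with a tiny sketch. This is a hypothetical, NumPy-only message-passing layer (not any particular library's API): each node carries a feature *vector*, a shared weight matrix is applied everywhere, and relabeling the nodes just reorders the output rows instead of changing the result.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_feats = 4, 3
# Undirected 4-node ring graph (illustrative example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(n_nodes, n_feats))  # per-node feature vectors, not scalars
W = rng.normal(size=(n_feats, n_feats))  # shared (hypothetical) layer weights

def message_pass(A, X, W):
    """One round of message passing: each node sums its neighbours'
    features, then applies the same linear map + nonlinearity.
    Because W is shared across nodes, this works for any graph size
    and any node ordering."""
    return np.tanh(A @ X @ W)

H = message_pass(A, X, W)

# Relabel the nodes with a random permutation matrix P.
P = np.eye(n_nodes)[rng.permutation(n_nodes)]
H_perm = message_pass(P @ A @ P.T, P @ X, W)

# Same graph, same answer: the output rows are just reordered by P.
assert np.allclose(P @ H, H_perm)
```

The assertion at the end is the whole point: a plain feed-forward net fed the flattened adjacency matrix would give a different answer for each of the n! labelings of the same graph, whereas the shared-weight aggregation above is equivariant to relabeling by construction.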
Sure, but using GNNs to model neurons is nonsensical, since the graph is the analyte of the NN: you're not a priori doing anything with the graph itself. So if the point is "use NNs to model neurons", a GNN doesn't buy you anything, because the G in GNN isn't subjected to dynamic activation.