You're definitely correct that GAI is not necessarily going to be human-modeled. Who knows how it will come about, if it does. Still, I'm betting it will be human-modeled, or at least heavily inspired by human intelligence. I think you're underestimating biomimetics: convolutional neural layers and residual neural networks, for example, were both based on cat/primate visual cortex systems, and both were huge advances in computer vision.
The most "advanced" GI we know of right now is human intelligence, and I think it would be wasteful not to extract as much as we can from it. You mention octopus brains, but I would argue that for GAI we're more concerned with what humans are good at with their oversized prefrontal cortex, which is higher-level, abstract, executive cognition. Pulling color information out of grayscale doesn't sound all that impressive to me (especially since that sounds like an easy unsupervised deep learning task) compared to being able to type all this out.
You mention the limitations of evolution. I think the key limitation of evolution is that it is largely append-only. Our brain is like a nested doll where the deeper you go, the further back you go in evolutionary history. The way evolution plays out means that things are always built on top; there is never really a large overhaul, since 99.99999...% of the time that just means the zygote isn't even viable. Do we really care about the lower levels beneath higher executive function? Do we really care about modeling emotions (e.g. fear and the fight-or-flight response) like the limbic system, or modeling how the brain stem maintains homeostasis, such as monitoring carbonic acid levels in blood? Perhaps. But I think the prefrontal cortex is what is most interesting. Then again, it depends on what you want from a GAI.
We probably have similar views, with some differences.
We definitely have and will pick up inspiration from the brain as you point out.
That is especially true of the basic topologies of our brain's neural networks: the painfully simplified artificial neuron model; two-layer networks (which may roughly correspond to some single biological neurons, which, as I understand it, can have hierarchies of dendrites); then deep networks, convolution, recurrence, competitive layers, and other topologies.
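As a minimal sketch of what I mean by the "painfully simplified" model: an artificial neuron is just a weighted sum plus a nonlinearity, and stacking one hidden layer gives a two-layer network. All the weights below are arbitrary placeholders, just to show the data flow:

```python
import math

def neuron(inputs, weights, bias):
    """Simplified artificial neuron: weighted sum squashed by a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def two_layer_net(inputs, hidden_params, out_weights, out_bias):
    """A hidden layer of neurons feeding one output neuron."""
    hidden = [neuron(inputs, w, b) for w, b in hidden_params]
    return neuron(hidden, out_weights, out_bias)

# Arbitrary illustrative parameters: two hidden neurons, two inputs.
hidden_params = [([0.5, -0.3], 0.1), ([0.8, 0.2], -0.4)]
output = two_layer_net([1.0, 2.0], hidden_params, [1.0, -1.0], 0.0)
```

Compare that to a real neuron, with its dendritic trees and spike timing, and "painfully simplified" is an understatement.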
But I think the freedom of math to produce global algorithms directly allows faster innovation in a way that incremental change via evolution never had.
I expect GAI will always have some biologically inspired roots, but my untested (obviously!) opinion is that the first GAI will significantly benefit from global gradient based algorithms, and incorporate symbolic and database sides too.
The latter two not looking anything like how our neurons operate.
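To make the contrast concrete: a gradient-based update adjusts every parameter at once, directly downhill on a loss, rather than relying on evolution's blind, incremental mutations. A toy sketch (the function, learning rate, and step count are all arbitrary choices for illustration):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # every step uses exact global slope information
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges to ~3.0
```

Evolution has no access to anything like `grad` for the whole organism; it can only test random perturbations one generation at a time.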