Is this perhaps too focused on the individual, in a way that might not lead to the optimum for the zoomed-out, group (population-level) case?
This reminds me of a meta-system-transition where simplified versions of individual (previously independent) units can lead to a unified cooperative whole that works better at the meta-system level (similar to what led to eukaryotes or to multicellularity). It's like a sacrifice that the individuals make (of things that might have looked like intelligence in the previous context) in order to be able to work together better, and these kinds of transitions have worked out extremely well for life, so far! (I guess I am biased, being alive and everything!?)
Should humans resist the formation of human meta-systems? Human/robot meta-systems? Certainly seems like a good chance that an extremely successful meta-system could be dangerous to the rest of the regular humans!? But if it's actually better, then that's just normal (meta) evolution.
> Should humans resist the formation of human meta-systems? Human/robot meta-systems?
No. I'm saying humans should not lose abilities or features because of it. We should be able to live comfortable lives and still maintain peak physical and mental performance. Any changes in our nature should not go against that.
We specialize though. Some people might be able to maintain peak mental performance, others peak physical. Some might be better at social things, which helps make sure we have a cohesive, well-functioning society. There are probably thousands of character traits, and some of them are invariably going to be tradeoffs in our genome/social conditioning. There's a reason there aren't Einstein-level intelligences running around every day doing bodybuilding, modelling, and race car driving all at once. You have to choose what you spend your time on, and most people choose to intersect that with natural abilities and interests.
Also, even assuming the conclusions of the article prove a shift in intelligence in any way (they don't), you'd still be looking at averages, which tell you nothing about the effect at the level of total population numbers. Maybe there's an upper bound on how many "smart" people or "strong" people modern society demands. Certainly, as a skill becomes more commoditized it becomes less valuable economically. Maybe we have too many smart people now because a lot of people saw the explosion in value of such careers?