The AGI and ASI instances that we have (e.g. GPT-4), require entire data centers with highly specialized hardware to run. How would a rogue AI replicate itself? You are worried about a fictional threat scenario that doesn't map to the real world.
> The AGI and ASI instances that we have (e.g. GPT-4), require entire data centers with highly specialized hardware to run.
LLMs are likely not the end point of AI evolution, and there's little reason to believe that AIs will always remain bound to huge data centers. We already have a nice counter-example: our own fairly capable intelligence runs on a roughly 1.4 kg brain.
It's also naive to assume AIs must replicate themselves wholesale the way humans do. In some cases it might not even make sense to talk about replication at all, but rather about extending a single AI instance one CPU at a time. It's conceivable that AGIs will develop purpose-built agents/worms for constrained, offline, or high-latency devices. Your smartwatch is unlikely to run a whole AGI, but it could run a constrained, purpose-built agent with limited intelligence, able to act partially independently and re-sync with the mother AGI once connectivity is available.
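That act-locally-then-re-sync pattern is already familiar from edge computing. A minimal sketch, purely illustrative (all names here are hypothetical, nothing AGI-specific): an agent runs a small local policy, queues what it sees while offline, and flushes the queue and pulls an updated policy when an uplink appears.

```python
from collections import deque

class EdgeAgent:
    """Hypothetical sketch: a constrained agent that acts locally
    and re-syncs with a central system when connectivity returns."""

    def __init__(self):
        self.outbox = deque()              # observations awaiting upload
        self.policy = {"default": "noop"}  # tiny local decision table

    def act(self, observation: str) -> str:
        # Decide using only the small local policy; queue the
        # observation so the central system can use it later.
        action = self.policy.get(observation, self.policy["default"])
        self.outbox.append(observation)
        return action

    def sync(self, uplink) -> int:
        # Connectivity is back: flush queued observations upstream
        # and pull a refreshed policy down. Returns items sent.
        sent = 0
        while self.outbox:
            uplink.upload(self.outbox.popleft())
            sent += 1
        self.policy = uplink.fetch_policy()
        return sent
```

The point of the sketch is only that the device never needs the full model: it needs a policy small enough to run locally plus a queue and a sync protocol.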
> You are worried about a fictional threat scenario that doesn't map to the real world.
We're talking about the future here; of course it's all speculation and "fiction" for now. GPT-4 was complete "fiction" 5 years ago as well.