I too am in awe of the audio engineering challenges and opportunities here.
But I don't necessarily know that Meet is trying to tackle all this? Are they using the mics as a microphone array & processing signals across phases? Could be missing it but I don't see that they said so. Perhaps they're just picking the loudest mic for a given speaker? Or any of a dozen other simpler tactics?
The current baseline is manually muting and unmuting microphones, so automatically picking the best microphone already sounds like an improvement. If other people make a sound, I think it would be acceptable if that sound were missed or softened.
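For what the "pick the loudest mic" tactic might look like, here's a minimal sketch: compute the RMS energy of each mic's current frame and route the loudest one. All names here are hypothetical; a real system would add smoothing/hysteresis so the selection doesn't flap between mics frame to frame.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (a list of PCM samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def pick_loudest_mic(frames_by_mic):
    """Return the id of the mic whose current frame has the highest energy.

    frames_by_mic: dict mapping mic id -> list of samples for this frame.
    """
    return max(frames_by_mic, key=lambda mic: rms(frames_by_mic[mic]))

# Example: mic "b" is clearly the loudest in this frame.
frames = {
    "a": [0.01, -0.02, 0.01],
    "b": [0.5, -0.6, 0.4],
    "c": [0.0, 0.0, 0.0],
}
print(pick_loudest_mic(frames))  # -> b
```

Even this naive version beats manual muting for the common case, at the cost of occasionally dropping a quieter second speaker.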