I wouldn't hold my breath, and in any case NVIDIA now has faster chips and better-supported software all the way down the stack. My previous startup tried to solve some of these problems, and we built what is, as far as I know, still the only reasonably complete device-portable deep learning framework. Today something like an RTX 3070 is a good budget option for small experiments, and you can always lean on a cloud provider if you need more compute temporarily. It's hard to beat a TPU pod when you're in a hurry.