These kinds of comments make me think few people have actually tried. In my experience it took about one work day of setup to get training and testing running the same as before (PyTorch).
You have to consider that the average person who tried to do machine learning on AMD GPUs got burned at some point in the past decade and has no reason to change their opinion. It was also much harder in the past to get access to cutting-edge GPUs from AMD. The fact that AMD drops ROCm support for GPUs quickly earns them scorn as well. I don't think it is an unfair assessment; they earned their reputation.
ROCm has improved a lot, and you can rent an MI300X in the cloud now. So if you have a program that runs on Nvidia GPUs, it takes very little time to test it on a cloud MI300X. If it works, you can use it and save some money in the process.
Making AMD work effortlessly with PyTorch et al. should make the switch transparent.
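For what it's worth, the ROCm build of PyTorch exposes the same torch.cuda API (backed by HIP), so device-agnostic code should run unchanged. A minimal sanity check, assuming a stock ROCm wheel of PyTorch is installed on the target machine:

    # Sanity check: on ROCm builds torch.cuda is backed by HIP,
    # so the usual device-agnostic pattern works without edits.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # torch.version.hip is a version string on ROCm builds, None on CUDA builds
    print(torch.__version__, getattr(torch.version, "hip", None))

    # Tiny forward/backward pass to confirm kernels actually run
    model = torch.nn.Linear(512, 512).to(device)
    x = torch.randn(64, 512, device=device)
    loss = model(x).sum()
    loss.backward()

    if device.type == "cuda":
        print("backward OK on", torch.cuda.get_device_name(0))
    else:
        print("backward OK on CPU (no GPU detected)")

If that prints the MI300X device name and finishes the backward pass, most existing training scripts will at least start; the remaining work is usually checking third-party CUDA extensions.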