I worked for a very early billpay company where you could pay your bills online to vendors, even if the vendor didn't support it. We used APIs where we could, but where we couldn't...
We had a whole team dedicated to keeping up with the changes vendors would make to the websites we scraped for info. The team was called, of course, "Scrape and P(r)ay".
If you build your scraper to find data on the page based on the shape of the data itself, instead of the structure of the page, then it will be resilient to most changes that don't materially change what data is displayed.
So, prefer regex over CSS selectors, and CSS selectors over XPath, where possible. And avoid selecting based on nesting or position if you can.
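A minimal sketch of the idea, with a made-up snippet of a vendor's billing page: match the shape of the data (a dollar amount, a date) rather than the surrounding markup, so the extraction survives a redesign of the page structure.

```python
import re

# Hypothetical billing-page HTML; the markup here is invented for
# illustration and could change completely without breaking us.
html = """
<div class="col-2 balance-box"><span>Amount due:</span> <b>$142.17</b></div>
<p>Payment due by 03/15/2024. Thank you for your business!</p>
"""

# Match the data's shape: a dollar sign followed by a money-formatted
# number, and an MM/DD/YYYY date. No tags, classes, or nesting involved.
amount = re.search(r"\$\s*([\d,]+\.\d{2})", html)
due_date = re.search(r"\b(\d{2}/\d{2}/\d{4})\b", html)

print(amount.group(1))    # 142.17
print(due_date.group(1))  # 03/15/2024
```

If the vendor moves the balance into a table or renames every CSS class, these patterns still hit, because what they key on is the data itself. The trade-off is ambiguity: if two dollar amounts appear on the page, you may need a nearby-label check (e.g. require "due" within a few characters) to disambiguate.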
It depends on your development and per-action costs, and on the latency you can tolerate. It also changes your whole stack from "send a request" to "emulate each step in a browser while taking screenshots at (hopefully) the right event/delay" - that's a huge difference.
True, but it is an API that they can't easily deprecate on a whim.