In "real world" you don't use OpenAI or Anthropic API directly—you are forced to use AWS, GCP, or Azure. Each of these has its own service for running LLMs, which is conceptually the same as using OpenAI or Anthropic API directly, but with much worse DX. For AWS it's called Bedrock, for GCP—Vertex, and for Azure it's AI Foundry I believe. They also may offer complementary features like prompt management, evals, etc, but from what I've seen so far it's all crap.
Well, that clears that up.