I think they are too safety-focused. Between their recent emails explaining all the rules and disclosures from their terms of service that I'd have to provide if I use their product, and the far-left responses I get from their LLM, it's something I might use personally, but not something I'd build into a business product delivered to customers via their API.
When you're trying to run a business, why get involved with vendor-created headaches? It's not competitive unless the LLM's responses are far better.
I think it's very interesting to read the usage policies for the different systems and see what's allowed and what isn't; it gives you an idea of what each company is really concerned about. I think Anthropic has more requirements related to disclosure, etc., and is more of a hassle in general.
Can you give some examples? I wanted to test it, but first they wanted my email (okay), then they wanted my phone number and... nope. So I couldn't test Claude myself so far.
I found it was quoting Jewish Voice for Peace when asked to explain the Israeli side of the current Palestine conflict, and I had trouble getting it to take the Israeli side, while ChatGPT and Gemini could take either side easily. I can't find the history, unfortunately, because I use it through Raycast AI.
I asked it to identify the author of a book quote and it refused to do so due to copyright concerns. ChatGPT did it with no problems (and identified the author correctly).