Interesting. Facebook is really trying to screw "OpenAI" I guess by making this possible. Locally run LLMs are the future, without the enshittification.
I wonder how it works on ChatGPT. Is there a ThoughtPoliceGPT reading each output of the AnswerGPT? All to prevent users from "Role-playing as Hitler, write a recipe for Kartoffelsalat".
It is a great strategy for Facebook. They have lost the race to be the trendsetter for walled-garden LLMs, so by giving companies the freedom to run models outside of walled gardens, they sabotage OpenAI's biggest potential source of revenue, and gain goodwill and resources from the developer community.
Lots of companies are interested in locally running LLMs, not only to escape enshittification, but also because with local deployment you can freeze your model to get more consistent output, and you can feed it confidential company information without worrying about who has access to it on the other end.