> despite the fact that the underlying model is the two-year old GPT-3.
This is incorrect: it's not a two-year-old model, it's the latest updated model, which they're calling GPT-3.5, and which I believe has an order of magnitude more parameters.
Also, the reason there wasn't an explosion of AI uses built on OpenAI's products, unlike with something like Stable Diffusion, is that OpenAI's offering costs money and isn't extensible, while SD is. Innovation comes from that extensibility, and in a way this just shows how right Stallman was.
> ... while SD is. Innovation comes from such extensibility, and in a way, this just shows how right Stallman was.
The cat is out of the bag and it's not going back in. We'll have totally free models in no time, just like Stable Diffusion. These models may take money to train, but either a philanthropist billionaire like emostaque pays for the training or several people team up to fund it.
If we've got a closed OpenAI (funny name for something closed and for-pay, right?) today, we'll have an open one tomorrow.