I suspect the author doesn't realise that one request with hardly anything returned can still be many hundreds, if not thousands, of "tokens". It adds up very fast. Just some debugging effort on a nonsense demo learning project cost $5 in a couple of hours, for maybe a hundred or so requests.
That's straight up not true, unless that "demo learning project" is feeding GPT the entire Bible or something.
I have a project that uses davinci-003 (not even the cheaper ChatGPT API) like crazy and I don't come close to paying more than $30-120/month. With the ChatGPT API, it'll be 10x less...
It is not possible to pay anywhere close to $5 for a hundred requests, even if you used the max payload size every time.
Is it possible you had a bug that caused you to send far more requests than you were intending to send? Or maybe you used the older models which are 10x more expensive?
Could be I used an older API with the newer model. But there was no loop around the request, only human input with mouse clicks from two people. Whatever was happening on the billing side, there is zero chance I'd ever post a project to HN, for example.
I can understand making a mistake on the Internet, but to say it with such snarky gusto is inexcusable.
I’ve been playing with davinci pretty extensively and the only reason I’ve actually given OpenAI my credit card was because they won’t let you do any fine-tuning with their free trial credit, or something like that. You’re off by orders of magnitude, ESPECIALLY with the new 3.5 model.
You're reading the snarky gusto in your head. My point was literally that small mistakes, or even normal operation scaled beyond an extremely small user base, are not "cheap", if two humans clicking is five bucks in an afternoon, regardless of how it happened. If I'd linked whatever I had done here, I'd easily be looking at 10k for people like you to assume bad faith. It's especially not cheap compared to using a smaller language model locally for anything but generation.
So 7 cents for dozens of requests is only about 1/10th of what I was saying. So it could be I have the old API, but even 7 cents for tens of requests is not cheap compared to executing a model yourself at scale.
You could have saved some money by writing tests. How much text were you sending at a time? I’ve been summarizing multiple 500 word chunks per query in my app as well as generating embeddings and haven’t broken $10 over the course of a couple weeks.
Sure, but at some point you're testing prompt generation and what happens with the model; that's what I'm talking about. This was a basic session with a couple of people clicking, and it cost that much. So clearly I'm doing something far more wrong on the API side to get what's apparently magically worse billing than everyone else here.
They charge per 1k tokens, so you must have high volume somehow. Are you maxing out the prompt length every time? That's the only thing I can think of, besides sending a ridiculous number of requests, that would cost that much in an evening.
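For reference, a back-of-the-envelope sketch of how per-1k-token billing adds up. The rates below are assumptions (roughly OpenAI's early-2023 list prices: ~$0.02/1k tokens for text-davinci-003, ~$0.002/1k for the ChatGPT API), and the 4096-token request is the "maxing out the prompt" case:

```python
# Rough per-request cost estimator. Billing is on prompt + completion tokens,
# priced per 1k tokens. Rates here are assumed approximate early-2023 prices.
PRICE_PER_1K = {
    "davinci": 0.02,         # text-davinci-003 (assumed rate)
    "gpt-3.5-turbo": 0.002,  # ChatGPT API, ~10x cheaper (assumed rate)
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request at the assumed per-1k-token rate."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * PRICE_PER_1K[model]

# One maxed-out 4096-token davinci request:
print(round(request_cost("davinci", 3000, 1096), 4))        # 0.0819
# A hundred such maxed-out requests:
print(round(100 * request_cost("davinci", 3000, 1096), 2))  # 8.19
# The same hundred on gpt-3.5-turbo:
print(round(100 * request_cost("gpt-3.5-turbo", 3000, 1096), 2))  # 0.82
```

Under these assumed rates, a couple of hundred maxed-out davinci requests in an evening really could reach $5, while typical short prompts would cost a tiny fraction of that, which is why maxed prompts (or far more requests than intended) are the usual suspects.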