
This is fascinating in so many ways.

First - it is SO LONG - 500 tokens before any actual content. That’s a fairly hefty chunk of $ with GPT-4 to have to include with every single request.

Second, it’s interesting just how many times they have to tell it not to be offensive and argumentative.

Third, it’s hilarious just how easily it gave up the secrets when it thought the guy was from OpenAI.

Getting GPT to stay on-task has been the hardest part of using it so far. It feels like you’re trying to herd a very powerful, easily distracted cat. It reminds me of those reports of people in the Trump White House having to show him lots of pictures to help him make decisions. It feels a bit like that: huge power, but so easily manipulated and confused.



I came here to ask about the cost of this, given the number of tokens. Since this prompt is repeated on every single request, isn't there a way to embed it when you load the model? Or do they simply use the raw OpenAI API like the rest of us mortals?


A thousand tokens in the prompt only costs $0.015 from OpenAI.

But if you have a million searches, each with a 10-message back-and-forth, the prompt alone is going to cost you $150,000.

Surely MS pays a lot less than the OpenAI cost though.
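The back-of-envelope math above can be sketched like this (the per-token rate and message counts are the figures quoted in this thread, not official pricing):

```python
# Rough cost of resending a fixed system prompt on every message.
# All numbers are the thread's assumptions, not official OpenAI pricing.

def prompt_cost(searches, messages_per_search, prompt_tokens, price_per_1k):
    """Cost of the system prompt alone, summed over every message sent."""
    total_tokens = searches * messages_per_search * prompt_tokens
    return total_tokens / 1000 * price_per_1k

cost = prompt_cost(
    searches=1_000_000,
    messages_per_search=10,
    prompt_tokens=1000,          # ~1k-token system prompt
    price_per_1k=0.015,          # $0.015 per 1k prompt tokens, as claimed above
)
print(f"${cost:,.0f}")  # → $150,000
```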



