Very cool. I've been doing some "prompt design" myself, how did you land on the temperature 0.001?
It's kind of clunky to iterate on a prompt. If you change one thing about the prompt to get a better result for one aspect of the desired output, it can negatively impact the quality of another aspect.
e.g. say you were trying to get OpenAI to automatically write a recipe based on the contents of someone's grocery list. Your prompt might be:
"generate a recipe from the following list of ingredients: [eggs (12), milk (1 gallon), apples (8)]"
ChatGPT output:
Here is a simple recipe you can make using the ingredients you listed:
Apple Pancakes:
Ingredients:
12 eggs
1 gallon of milk
8 apples
If you wanted to make the ratios more accurate, you might add something like "Make sure the ratios of ingredients are correct."
ChatGPT output:
Here is a revised recipe that ensures the ratios of ingredients are correct:
Apple Pancakes:
Ingredients:
12 eggs
1 cup of milk
8 apples
Which is better, but many times something else gets worse (e.g. the list of instructions to actually cook the thing). Curious what process you followed when designing the prompts.
I think someone could build a business around "prompt design". Small tweaks make such a huge difference that a tool that helped you improve prompts would be really valuable.
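To make the "fix one aspect, break another" problem concrete, here's a minimal sketch of how a prompt-regression check could work. Everything below is hypothetical (the criteria, the example outputs, the function names); the model call is replaced with fixed strings:

```python
# Hypothetical sketch: score each model output on several independent
# aspects, then report which aspects regressed after a prompt tweak.

def evaluate(output: str) -> dict:
    """Score one model output on a few independent aspects."""
    return {
        "has_ingredients": "Ingredients:" in output,
        "reasonable_milk": "1 gallon of milk" not in output,  # ratio check
        "has_instructions": "Instructions:" in output,
    }

def regressions(old_output: str, new_output: str) -> list:
    """Return the aspects that were passing before the tweak but fail now."""
    old, new = evaluate(old_output), evaluate(new_output)
    return [aspect for aspect in old if old[aspect] and not new[aspect]]

# Stand-ins for two model outputs, before and after adding the ratio hint.
old = ("Apple Pancakes:\nIngredients:\n12 eggs\n1 gallon of milk\n8 apples\n"
       "Instructions:\nMix and fry.")
new = "Apple Pancakes:\nIngredients:\n12 eggs\n1 cup of milk\n8 apples"

print(regressions(old, new))  # the ratio improved, but instructions vanished
```

Run on every prompt change, something like this would surface exactly the trade-off described above: the milk ratio check flips to passing while the instructions check flips to failing.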
>Very cool. I've been doing some "prompt design" myself, how did you land on the temperature 0.001?
The point is to use the least generative methods to parse the HTML.
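For illustration, "least generative" mostly comes down to pinning the sampling parameters near deterministic. A sketch of what such a request could look like (the model name and prompt here are placeholders, not the project's actual values):

```python
# Hypothetical request configuration: a temperature close to 0 makes
# decoding near-greedy, so the model extracts rather than invents.
request = {
    "model": "gpt-3.5-turbo",          # placeholder model name
    "temperature": 0.001,              # minimal creative variance
    "messages": [
        {"role": "user",
         "content": "Extract the fields from: <html>...</html>"},  # placeholder prompt
    ],
}
```

With temperature that low, repeated runs over the same HTML should produce near-identical output, which is what you want from a parser.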
> It's kind of clunky to iterate on a prompt. If you change one thing about the prompt to get a better result for one aspect of the desired output, it can negatively impact the quality of another aspect.
There is an implementation for testing prompts internally when you set it up locally. That way you can get an idea of which parts of a prompt, and which restrictive hyperparameters, are failing you.
With that being said, you can also design a unique custom parser by changing the prompt for different subsets of data.
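One way to sketch that idea (the subset names and templates below are made up for illustration, not the project's actual prompts): route each subset of data to its own prompt template, so each subset effectively gets its own parser.

```python
# Hypothetical per-subset prompt routing: each subset of data gets its
# own prompt template, i.e. its own custom parser.
PROMPTS = {
    "recipe": "Generate a recipe from the following ingredients: {data}",
    "table": "Extract the rows of this HTML table as JSON: {data}",
}

def build_prompt(subset: str, data: str) -> str:
    """Fill the template registered for this subset of data."""
    template = PROMPTS.get(subset)
    if template is None:
        raise ValueError(f"no parser prompt registered for subset {subset!r}")
    return template.format(data=data)

print(build_prompt("recipe", "eggs (12), milk (1 gallon), apples (8)"))
```

Swapping templates per subset keeps each prompt small and focused, which also makes the regression problem above easier to contain.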
>If you wanted to make the ratios more accurate, you might add something like "Make sure the ratios of ingredients are correct."
> Which is better, but many times something else gets worse (e.g. the list of instructions to actually cook the thing). Curious what process you followed when designing the prompts.
This is really interesting, and I would love to hear more about it. If you open an issue under the repository with a title like "Using additional instructions to improve parsing of preset parsers", I may add a preset parser on that issue as a showcase.
> I think someone could build a business around "prompt design". Small tweaks make such a huge difference that a tool that helped you improve prompts would be really valuable.