
Here is an example of some prompt engineering used to build augmentations for factual question answering as well as for building web applications:

https://github.com/williamcotton/transynthetical-engine



The problem with this is that it requires the software to know what the target is when the question is asked, and I don’t see it as reliable, since there are many ways to ask and there could be many targets.


I don’t really understand your criticism, but I’d be happy to continue a dialog to find out what you mean!

There’s probably a little too much going on with that project, including generating datasets for fine-tuning, which is the reason for comparing with a known answer.

It is very similar to the approach used by the Toolformer team.

But teaching an agent to use a tool like Wikipedia or Duck Duck Go search dramatically reduces factual errors, especially those related to exact numbers.
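Roughly, the pattern looks something like this (a minimal sketch, not the repo's actual code; the tool, prompt, and model names here are illustrative):

    // Hypothetical sketch of tool-augmented question answering.
    import OpenAI from "openai";

    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    // A tool the generated code is allowed to call.
    async function searchWikipedia(query: string): Promise<string> {
      const res = await fetch(
        "https://en.wikipedia.org/w/api.php?action=query&list=search&format=json&srsearch=" +
          encodeURIComponent(query)
      );
      const json = await res.json();
      return json.query.search[0]?.snippet ?? "";
    }

    async function answerWithTool(question: string): Promise<string> {
      // The prompt asks the model to answer *as code* that uses the tool,
      // instead of guessing exact numbers from its weights.
      const completion = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          {
            role: "system",
            content:
              "Answer by returning a JavaScript async IIFE that calls searchWikipedia(query) and returns a string answer.",
          },
          { role: "user", content: question },
        ],
      });
      const code = completion.choices[0].message.content ?? "";
      // Direct eval: the generated code closes over searchWikipedia above.
      return await eval(code);
    }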

Here’s a more general overview of the approach:

From Prompt Alchemy to Prompt Engineering: An Introduction to Analytic Augmentation

https://github.com/williamcotton/empirical-philosophy/blob/m...


Nice idea. Why did you choose to build it in TypeScript/Node, out of interest?


Because I wanted to run it in the browser and have a document object in the context of the LLM response being eval’d!

Also, having the exemplars typed has saved me from sending broken few-shots many times!

That is, I keep all of the few-shot exemplars in TypeScript and then compile them into an array of system/user/assistant message strings at some point before making any calls to an LLM.
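As a minimal sketch of what that looks like (the type names and exemplar content here are illustrative, not the project's actual types):

    // Hypothetical sketch of typed few-shot exemplars compiled into chat messages.
    type Role = "system" | "user" | "assistant";

    interface ChatMessage {
      role: Role;
      content: string;
    }

    interface Exemplar {
      question: string; // user turn
      answer: string;   // assistant turn, e.g. a snippet to eval
    }

    const exemplars: Exemplar[] = [
      { question: "What is 12 * 7?", answer: "(() => 12 * 7)()" },
    ];

    // Flatten the typed exemplars into the message array an LLM call expects.
    // Because the exemplars are typed, a malformed few-shot fails at compile
    // time rather than silently producing a broken prompt.
    function compileMessages(system: string, question: string): ChatMessage[] {
      return [
        { role: "system", content: system },
        ...exemplars.flatMap((e): ChatMessage[] => [
          { role: "user", content: e.question },
          { role: "assistant", content: e.answer },
        ]),
        { role: "user", content: question },
      ];
    }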


Nice, although did you do that so you can avoid having an API when running some web application you make for yourself, or am I misunderstanding you? Sorry.

Because the other distribution paradigms are...

- sharing your key with the user on the client side is risky, so you end up with server-side API requests

- one day LLMs might be local and can then run off-browser


The approach I’ve been using is to keep the API requests server-side and to expose a client interface, thus keeping the keys safe, but the response is eval’d client-side, so when OpenAI starts referencing document.body in a completion it affects the browser runtime directly.
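A minimal sketch of that split, assuming an Express-style proxy (the endpoint and names are illustrative, not the project's actual API):

    // --- server.ts: keep the OpenAI key server-side behind a thin proxy ---
    import express from "express";
    import OpenAI from "openai";

    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    const app = express();
    app.use(express.json());

    // Hypothetical endpoint name; the real project may differ.
    app.post("/api/complete", async (req, res) => {
      const completion = await openai.chat.completions.create({
        model: "gpt-4",
        messages: req.body.messages,
      });
      res.json({ code: completion.choices[0].message.content });
    });

    app.listen(3000);

    // --- client.ts: eval the returned code in the browser, where document exists ---
    async function runCompletion(messages: unknown[]) {
      const res = await fetch("/api/complete", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages }),
      });
      const { code } = await res.json();
      // The completion can reference document.body etc. and mutate the page directly.
      return eval(code);
    }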


Yeah, it's a smart idea, I see now: you can use it as a kind of universal database for all clients, like having a Python dict holding all the outputs or something, and you can also easily spin up the UIs enabling your cool examples.



