
I don't think so. They might improve a little, but not to the point of making prompt engineering unnecessary.

The problem is not the LLM; the problem is on the other side of the keyboard. No matter how good they are, LLMs can't read minds: they can only guess from what you write, and they won't be able to help you unless you give them enough information to express the problem. And it is not an issue specific to LLMs; we already do "prompt engineering" when we are talking to humans. We don't call it that, but it is the same idea: write your message carefully to make sure the person on the other side understands your request.



Maybe GPTs will start asking clarifying questions before spitting out answers? Or would that fall under the definition of AGI?
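
You can already approximate this today with a system prompt. A minimal sketch, assuming the OpenAI Python client (the model name and the exact wording are just illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask the model to request clarification instead of guessing.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "If the request is ambiguous or underspecified, "
                        "ask one clarifying question before answering."},
            {"role": "user", "content": "Make my code faster."},
        ],
    )
    print(response.choices[0].message.content)

How aggressively the model actually follows such an instruction varies by model, which is exactly the balance problem mentioned below.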


Maybe, but there is a balance to be found: if the AI asked you for perfect clarity, it would be more like a programming language than a natural language model.

The whole point of GPTs is that they are able to guess what you need based on incomplete prompts and on how people usually respond to such incomplete prompts. Prompt engineering is the art of giving just enough information for the AI to understand the specifics of your request while still letting it fill in the blanks.
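
To make that contrast concrete, here is a sketch using the same client as above (prompts are illustrative): the first prompt forces the model to guess almost everything, the second pins down the requirement but still leaves the "how" to the model.

    from openai import OpenAI

    client = OpenAI()

    vague = "Fix this function."
    specific = (
        "This Python function should return the median of a list of ints "
        "but raises IndexError on empty input. Fix it to return None for "
        "an empty list; keep everything else unchanged."
    )

    # Compare what the model does with each level of specificity.
    for prompt in (vague, specific):
        reply = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(reply.choices[0].message.content, "\n---")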



