Hacker News

I can’t stop thinking about this exact notion. The main reason we don’t routinely use tools like TLA+ to spec out our software is that it’s tedious AF for anything smaller than mission-critical, enterprise-grade systems, and we can generally trust humans to get the details right eventually through carrot-and-stick incentives. LLM agents have none of the attentional and motivational constraints of humans, so there’s no reason not to do things the right way.

