Hacker News
Why do LLMs still not run code before giving it to you?
1 point by highfrequency 4 months ago | 3 comments
The leading models all advertise tool use, including code execution. So why is it still common to receive a short Python script containing a logical bug that would be immediately discoverable by running it in a Python interpreter for 0.1 seconds? Is it a safety concern / difficulty sandboxing in a VM? Surely it's not a resource consumption issue, given the price of a single CPU core vs. a GPU.
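The kind of cheap smoke test the question has in mind can be sketched in a few lines: run the generated script in a fresh interpreter subprocess with a short timeout and see whether it crashes. (This is illustrative only: `quick_check` and the sample snippet are made up for this sketch, and a subprocess is not a security sandbox, just a crash detector.)

```python
# Minimal sketch: run generated code in a fresh interpreter with a timeout,
# so outright crashes surface before the code is shown to the user.
# Note: this catches exceptions and syntax errors, not subtle logic bugs,
# and it is NOT a security sandbox.
import subprocess
import sys

def quick_check(code: str, timeout: float = 1.0) -> tuple[bool, str]:
    """Run `code` in a new Python process; return (ok, combined output)."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timed out"

ok, out = quick_check("print(sum(range(5)))")  # a trivially runnable script
print(ok, out.strip())
```

Catching logic bugs (as opposed to crashes) would additionally require assertions or tests, which is where the discussion below about TDD comes in.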


Is it a common use case to produce a standalone program that can be tested in isolation? Usually I'm asking for a function (or just a few lines of change) that depends on the rest of my code & environment, so it's not trivial to test.


Depends on the methodology, really.

If you're doing TDD-style work with an AI, it's not uncommon to one-shot a function and then throw it against your battery of tests.

It's also pretty doable if you're writing smallish scripts or following functional coding paradigms; with functional code it's often easy to pull out specific modules and test them against criteria.
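The "one-shot a function, then throw it against your battery of tests" workflow above might look something like this. (Everything here is hypothetical: `slugify` stands in for an LLM-generated pure function, and the test cases are invented for illustration.)

```python
# Sketch of the TDD-with-AI loop: a candidate pure function is validated
# against a pre-written battery of input/expected-output cases.
import re

def slugify(title: str) -> str:
    """Candidate (imagine: LLM-generated) implementation under test."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The battery exists before the function does; a failing case means
# the generated code gets rejected or regenerated.
TEST_BATTERY = [
    ("Hello, World!", "hello-world"),
    ("  spaces  ", "spaces"),
    ("already-slugged", "already-slugged"),
]

for given, expected in TEST_BATTERY:
    got = slugify(given)
    assert got == expected, f"{given!r}: got {got!r}, want {expected!r}"
print("all cases pass")
```

Because the function is pure (no I/O, no global state), the battery can run in milliseconds, which is exactly what makes this methodology cheap to automate.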


Sounds like an opportunity for you to make the world better by designing the process and implementing it.




