
Given that LLMs aren't able to properly understand code, would it be feasible and useful to create AI honeypots?

For example, add some dead code that contains an obvious bug (like a buffer overflow). The scanbots find it, submit a "fix" PR, and get banned.
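Something like this, maybe (file and function names are made up, just a sketch of the idea): a function that nothing else in the project ever calls, containing a textbook strcpy overflow. Any PR that "fixes" it can only have come from an automated scanner or someone who never checked whether the code is reachable, so it can be auto-closed and the account flagged.

    /* honeypot.c -- hypothetical AI-scanner honeypot.
     * This function is intentionally dead code: nothing in the
     * project references it, and the maintainers know the bug
     * below is deliberate bait. */
    #include <string.h>

    /* Never called from anywhere in the codebase. */
    void legacy_copy_config(const char *input)
    {
        char buf[16];
        /* Obvious bug on purpose: no bounds check, so any input
         * longer than 15 bytes overflows the stack buffer. */
        strcpy(buf, input);
        (void)buf;
    }

The trick would be keeping it out of the shipped binary (e.g. never linking the file) while still leaving it visible in the repo for scanners to trip over.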


