Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I think they mean prompt injection rather than some malformed image to trigger a security bug in the processing library


The LLM is the image processing library in this case so you are both right :)




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: