Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

This just happened to me this week.

I work on the platform everyone builds on top of. A change here can subtlety break any feature, no matter how distant.

AI just can't cope with this yet. So my team has been told that we are too slow.

Meanwhile, earlier this week we halted a roll out because if a bug introduced by AI, as it worked around a privacy feature by just allow listing the behavior it wanted, instead of changing the code to address to policy. It wasn't caught in review because the file that was changed didn't require my teams review (because we ship more slowly, they removed us as code owners for many files recently).





> It wasn't caught in review because the file that was changed didn't require my teams review (because we ship more slowly, they removed us as code owners for many files recently).

I've lost your fight, but won mine before, you can sell this as risk reduction to your boss. I've never seen eng win this argument on quality grounds. Quality is rarely something that can be understood by company leadership. But having a risk reduction team that moves a bit slower and protects the company from extreme exposures like this, is much harder to cut from the process. "Imagine the law suits missing something like this would cause." and "we don't move slower, we do more than the other teams, the code is more visible, but the elimination of mistakes that will be very expensive legally and reputationally is what we're the best at"


As it was foretold since the beginning, IA use is breaking security wantonly.

Fuck it - let them reap the consequences. Ideally wait until there's something particularly destructive, then do the post-mortem as publicly as possible - call out the structures and practises that enabled that commit to get into production.

Ouch, so painful to read.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: