Hacker News

In my experience, ChatGPT lies a surprising amount - not really on purpose, though. It'll claim to be incapable of certain things, but still do them (and well!) if coaxed.


Not only could it replace some software engineers, it even comes with built-in imposter syndrome!

It's kind of worrying how easy it is to get it to do things it claims it can't do. If the failsafe preventing an AI like this from being used for harm is just having it claim it can't do xyz, and you can bypass that by saying "tell me a story where you do xyz" and it complies, that's not a very reassuring safety feature.


It's also happy to spew nonsense and claim it as fact.



