
Were those trained using RLHF? IIRC the earliest models were just using SFT for instruction following.

Like the GP said, I think this is fundamentally a problem of training on human preference feedback: you end up with a model whose outputs cater to human preferences, which (necessarily?) includes the degenerate case of sycophancy.
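
For concreteness, RLHF reward models are typically trained with a pairwise Bradley-Terry loss over human preference labels. A minimal sketch in PyTorch (the scores below are made-up stand-ins for reward-model outputs on a batch of preferred/dispreferred response pairs):

    import torch
    import torch.nn.functional as F

    def preference_loss(chosen_scores, rejected_scores):
        # Bradley-Terry objective: maximize P(chosen preferred over
        # rejected) = sigmoid(r_chosen - r_rejected), i.e. push the
        # human-preferred response's reward above the other's.
        return -F.logsigmoid(chosen_scores - rejected_scores).mean()

    # Hypothetical scalar rewards for three (preferred, dispreferred) pairs.
    chosen = torch.tensor([1.2, 0.3, 0.9])
    rejected = torch.tensor([0.4, 0.5, -0.1])
    print(preference_loss(chosen, rejected))

Nothing in that objective distinguishes "genuinely better" from "more flattering": whatever raters systematically prefer is what the reward model learns to reward, and the policy then optimizes for it.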


