
Yes, at some point ChatGPT "reaches the limit of new information", but it is unable to tell you that it has reached that limit. Instead of saying "Hey, I can't give you any more relevant information", it simply cycles back to previously suggested ideas or starts suggesting unrelated solutions and details. This is especially true with refactoring: once it has exhausted the useful refactorings, it cycles through suggestions that change the code without making it better. It's kind of like having a personal intern who is unable to say "no" or "I can't".

That is part of working with LLMs, and it's what I meant earlier by "for some, more trouble than it's worth".


