
... If this happens, the next hacks will be context poisoning. A whole cottage industry will pop up around preserving and restoring context.

Sounds miserable.

Also, LLMs don't learn. :)



LLMs themselves don’t learn, but AI systems built around LLMs absolutely can! Not on their own, but as part of a broader system: RLHF leveraging LoRAs that get regularly re-incorporated as model fine-tunes, natural language processing for context aggregation, creative use of context retrieval with embedding databases updated in real time, etc.
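To make the last idea concrete, here's a minimal sketch of an embedding store that grows in real time while the model's weights stay frozen. Everything here is illustrative: the LiveMemory class is hypothetical, and embed() is a toy hash-based stand-in for a real embedding model. Retrieval is plain cosine similarity over unit vectors.

    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        # Toy stand-in for a real embedding model: hash character
        # trigrams into a fixed-size vector, then L2-normalize.
        vec = np.zeros(dim)
        for i in range(len(text) - 2):
            vec[hash(text[i:i + 3]) % dim] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    class LiveMemory:
        """In-memory store that is appended to in real time;
        retrieval is cosine similarity over everything seen so far."""
        def __init__(self):
            self.texts: list[str] = []
            self.vecs: list[np.ndarray] = []

        def add(self, text: str) -> None:
            self.texts.append(text)
            self.vecs.append(embed(text))

        def retrieve(self, query: str, k: int = 3) -> list[str]:
            if not self.vecs:
                return []
            sims = np.stack(self.vecs) @ embed(query)
            top = np.argsort(sims)[::-1][:k]
            return [self.texts[i] for i in top]

    # Each turn: store new facts, then rebuild the prompt from
    # whatever is most relevant. The LLM never changes, but the
    # system's answers do.
    memory = LiveMemory()
    memory.add("User prefers answers in metric units.")
    memory.add("User is deploying on a Raspberry Pi 4.")
    context = memory.retrieve("What hardware am I targeting?")
    prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: ..."

The "learning" lives entirely outside the weights: swap the toy embed() for a real embedding model and the list for a vector database and you get the real-time retrieval setup described above.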



