
I was just chatting with ChatGPT about unlimited context length, and even if you could theoretically achieve a personal assistant this way, one that would know your entire chat history, an unlimited context length doesn't seem efficient enough.

It would make more sense to create a new context every day and integrate it into the model at night, or to start each day with a new context built from an aggregate of the last several days. That would give it time to sleep on it every day and let it use that knowledge the next day without the history needing to be passed in the context again.
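A minimal sketch of that daily cycle, assuming the nightly "integration" step is approximated by a carried-forward summary rather than an actual fine-tuning pass (the summarize() helper and the model call are placeholders, not a real API):

    def summarize(text: str, limit: int = 2000) -> str:
        # Naive stand-in: keep the most recent chunk. A real assistant would
        # ask the model for an abstractive summary, or fine-tune on the day's data.
        return text[-limit:]

    class DailyMemoryAssistant:
        """Keeps today's raw context small by folding each day's
        transcript into a persistent summary overnight."""

        def __init__(self):
            self.long_term_summary = ""   # carried across days ("learned" knowledge)
            self.todays_turns = []        # raw context for the current day only

        def chat(self, user_message: str) -> str:
            # The prompt is the compact long-term summary plus today's turns,
            # never the full chat history.
            prompt = self.long_term_summary + "\n" + "\n".join(self.todays_turns)
            reply = f"(model reply to: {user_message!r})"  # placeholder for a model call
            self.todays_turns.append(f"user: {user_message}")
            self.todays_turns.append(f"assistant: {reply}")
            return reply

        def sleep(self) -> None:
            # Nightly consolidation: compress today's turns into the long-term
            # summary, then start tomorrow with an empty per-day context.
            self.long_term_summary = summarize(
                self.long_term_summary + "\n" + "\n".join(self.todays_turns)
            )
            self.todays_turns = []

    assistant = DailyMemoryAssistant()
    assistant.chat("Remind me what we decided about the trip.")
    assistant.sleep()  # run once per day, e.g. from a cron job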



If we could keep unlimited memory but use only a selected, relevant subset in each chat session, that should help. Of course, the key word is "selected"; that is another big problem, much like short-term memory. We could probably build summaries from different perspectives during idle or "sleep" time. Training knowledge into the model is very expensive and can only be done from time to time, so it's better to add only the most important or most-used fragments. This is likely impossible to do on a mobile robot, a sort of "thin agent". If done on a supercomputer, we could aggregate the new knowledge collected by all the agents and then push an updated model back to them. All of this is an engineering approach of sorts.
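A rough sketch of the "selected subset" idea, assuming selection is done with a simple word-overlap score over stored memory fragments (a real system would use embeddings, and the fragments could be the summaries produced during "sleep" time; the example memories are made up):

    from collections import Counter
    import math

    def similarity(a: str, b: str) -> float:
        # Cosine similarity over word counts; a stand-in for embedding similarity.
        wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(wa[t] * wb[t] for t in wa)
        norm = (math.sqrt(sum(v * v for v in wa.values()))
                * math.sqrt(sum(v * v for v in wb.values())))
        return dot / norm if norm else 0.0

    def select_memories(query: str, memory: list[str], k: int = 3) -> list[str]:
        # Unlimited memory lives outside the context window; only the k most
        # relevant fragments are pulled into the current session.
        ranked = sorted(memory, key=lambda frag: similarity(query, frag), reverse=True)
        return ranked[:k]

    memory = [
        "User prefers morning meetings.",
        "Project Apollo deadline is in March.",
        "User is allergic to peanuts.",
    ]
    print(select_memories("When is the Apollo deadline?", memory, k=1))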




