
By now I suppose they could use an LLM to change the "personality" of the training data, then train a new LLM with it ;)


Ugh.

A derivative.

In some ways we're already there. Not in terms of personality, but we're in a post-LLM world: training data already contains some amount of LLM-generated material.

I guess it's on the model creators to ensure their data is good. But it seems like we might end up in a situation where the training material degrades over time. I imagine it being like applying a lossy compression algorithm to the same item many times, i.e. resaving a JPEG as a JPEG: you lose data every time and it eventually becomes garbage.
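The resaved-JPEG analogy can be made concrete with a toy simulation (a sketch, not a claim about real training pipelines; all names here are made up): treat a "model" as the empirical token distribution of its training corpus, and each "generation" as a fresh corpus sampled from the previous one. Tokens that drop out of a sample can never come back, so diversity can only shrink:

```python
import random

def retrain(corpus, size):
    # Toy "model": memorise the empirical token distribution of the
    # corpus, then "generate" the next corpus by sampling from it.
    return random.choices(corpus, k=size)

random.seed(1)
vocab = list(range(100))               # 100 distinct "tokens"
corpus = random.choices(vocab, k=500)  # generation 0: the original data
distinct = [len(set(corpus))]
for _ in range(100):                   # 100 rounds of training on own output
    corpus = retrain(corpus, 500)
    distinct.append(len(set(corpus)))

print(f"distinct tokens, gen 0:   {distinct[0]}")
print(f"distinct tokens, gen 100: {distinct[-1]}")
```

Each resampling round is the analogue of one lossy re-save: a token absent from generation k has zero probability of appearing in generation k+1, so the distinct-token count is monotonically non-increasing, and random drift steadily kills off the rarer tokens.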


Maybe we've just found a necessary condition of AGI: that you can apply it many times to a piece of data without degrading it.


LLMs trained on LLM generated material trained on ... until just gray goo is left.



