Part of the issue here is that this is a LessWrong post. There is some good in there, but much of that site is like a Flat Earth conspiracy theory for neural networks.
Neural network training [edit: on a fixed-point task, as is often the case {such as image -> label}] is necessarily biphasic, always, so there is no "eventual recovery from overfitting". In my experience it is just people newer to the field, or people noodling around, fundamentally misunderstanding what is happening as their network goes through a very delayed phase change. Unfortunately these kinds of posts get significant amplification, because people like chasing the new shiny of some fad-or-another-that-does-not-actually-exist instead of the much more 'boring' (which I find fascinating) math underneath it all.
To me, as someone who specializes in optimizing network training speeds, it just indicates poor engineering of the experimental setup on the part of the person running the experiments. It is not a new or strange phenomenon; it is a direct consequence of the information theory underlying neural network training.
> Part of the issue here is that this is a LessWrong post
I mean, this whole line of analysis comes from the LessWrong community. You may disagree with them on whether AI is an existential threat, but the fact that people take that threat seriously is what gave us this whole "memorize-or-generalize" analysis, and glitch tokens before that, and RLHF before that.
I think you may be missing the extensive lines of research covering those topics. Memorization vs Generalization has been a debate since before LW even existed in the public eye, and inputs that networks have unusual sensitivity to have been well studied as well (re: chaotic vs linear regimes in neural networks). Especially the memorization vs generalization bit -- that has been around for...decades. It's considered a fundamental part of the field, and has had a ton of research dedicated to it.
I don't know much either way about the direct lineage of RLHF, but I highly doubt that is what actually happened, since DeepMind is responsible for the bulk of the historical research supporting those methods.
It's possible, a la the broken-clock hypothesis -- and LessWrong is obviously not a "primate at a typewriter" situation, so there's a chance of some people making meaningful contributions -- but the signal-to-noise ratio is awful. I want to get something out of some of the posts I've tried to read there, but there are so many bad takes written in bombastic language that it's really quite hard indeed.
Right now it's an active drag on the field, because it pulls attention away from things that are much more deserving of energy and time. I honestly wish the vibe was back to people just making variations of Char-RNN repos based on Karpathy's blog posts. That was a much more innocent time.
> I think you may be missing the extensive lines of research covering those topics. Memorization vs Generalization
I meant this specific analysis, that neural networks that are over-parameterized will at first memorize but, if they keep training on the same dataset with weight decay, will eventually generalize.
Then again, maybe there have been analyses done on this subject I wasn't aware of.
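For concreteness, the kind of experiment I'm describing looks roughly like the sketch below. This is my own rough approximation of the published setups, not anyone's actual code; the hyperparameters (model size, weight decay, step count) are guesses, and the delayed jump in test accuracy typically only shows up after far more steps than I log here.

    # Rough sketch of a "memorize then generalize" run: an over-parameterized
    # MLP trained with AdamW (i.e. with weight decay) on modular addition,
    # using only half of the answer table for training. Train accuracy
    # saturates early (memorization); the claim under discussion is that test
    # accuracy eventually jumps much later if you keep training.
    import torch
    import torch.nn as nn

    P = 97  # modulus for the (a + b) mod P task
    pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
    labels = (pairs[:, 0] + pairs[:, 1]) % P

    perm = torch.randperm(len(pairs))
    split = len(pairs) // 2  # train on half the table, hold out the rest
    train_idx, test_idx = perm[:split], perm[split:]

    model = nn.Sequential(
        nn.Embedding(P, 128),  # shared embedding for both operands
        nn.Flatten(),          # (N, 2, 128) -> (N, 256)
        nn.Linear(256, 512),
        nn.ReLU(),
        nn.Linear(512, P),
    )
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
    loss_fn = nn.CrossEntropyLoss()

    def accuracy(idx):
        with torch.no_grad():
            return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

    for step in range(10_000):  # real runs in the papers use far more steps
        opt.zero_grad()
        loss_fn(model(pairs[train_idx]), labels[train_idx]).backward()
        opt.step()
        if step % 500 == 0:
            print(step, f"train={accuracy(train_idx):.3f}", f"test={accuracy(test_idx):.3f}")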
Gotcha. I'm happy to do the trace as it likely would be fruitful for me.
Do you have a link to a specific post you're thinking of? It's most likely in a Tishby-like lineage (the classic 2015 paper: https://arxiv.org/abs/1503.02406, with much more work going back into the early aughts, just outside of the NN regime IIRC), but I'm happy to look and see if it's novel.
I originally thought the PAIR article was another presentation by the same authors, but upon closer reading, I think they just independently discovered similar results. That said, the PAIR article does cite "Progress measures for grokking via mechanistic interpretability", the arXiv paper by the authors of the Alignment Forum article.
(In researching this I found another paper about grokking finding similar results a few months earlier; again, I suspect these are all parallel discoveries.)
You could say that all of these avenues of research are restatements of well-known properties, e.g. deep double descent, but I think that's a stretch. Double descent feels related, but I don't think a 2018 AI researcher who knew about double descent would spontaneously predict "if you train your model past the point it starts overfitting, it will start generalizing again if you train it for long enough with weight decay".
But anyway, in retrospect, I agree that saying "the LessWrong community is where this line of analysis comes from" is false; it's more like they were among the people working on it and reaching similar conclusions.
That's true, and I probably should have done a better job of backing up, sorting out, and clarifying my claim. I remember when that paper came out, it rubbed me the wrong way then too, because it is people rediscovering double descent from a different perspective and not recognizing it as such.
A better description would be "a sudden change in phase state after a long period of metastability". Even then, it ignores that those sharp inflections indicate a poor KL divergence between some of the inductive priors and the data at hand.
You can think about it as the loss signal from the support of two Gaussians that are extremely far apart, each with a narrow standard deviation. Sure, they technically have support everywhere, but in a noisy regime you're going to see nothing.... nothing.... nothing.... and then suddenly something as you hit that region of support.
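To make that analogy concrete, here's a toy numerical sketch (the numbers are purely illustrative and aren't tied to any model):

    # Two narrow Gaussian components far apart: the mixture technically has
    # support everywhere, but the density (and any gradient-like signal you
    # would derive from it) is numerically nothing until you land within a
    # few standard deviations of a mode, and then it appears all at once.
    import numpy as np
    from scipy.stats import norm

    sigma = 0.05
    modes = [0.0, 10.0]  # two far-apart, narrow components

    def mixture_density(x):
        return 0.5 * sum(norm.pdf(x, loc=m, scale=sigma) for m in modes)

    for x in [5.0, 8.0, 9.5, 9.8, 9.9, 9.95, 10.0]:
        print(f"x={x:5.2f}  density={mixture_density(x):.3e}")
    # The printout sits at ~0 for a long stretch and then spikes near x=10,
    # which is the same shape as a long metastable plateau followed by a
    # sudden drop in the loss curve.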
Little of the literature, or of the definitions around the term, really takes this into account, which leads to the widespread illusion that this is not a double descent phenomenon, when in fact it is.
Hopefully this is a more appropriate elaboration, I appreciate your comment pointing out my mistake.
Singular learning theory explains the sudden phase changes of generalization in terms of the resolution of singularities. Alas, it's still associated with the LW crowd.
If it's any consolation, that post is...hot word-salad garbage. It's like they learned the words on Wikipedia and then tried to write a post that used as many of them as possible. It's a good litmus test for experience vs armchair observation -- to someone scanning the article without decoding the phrasing to see how silly the argument is, it would seem impressive, because "oooooh, fancy math". It's part of why LW is more popular: it is basically white-collar flat-earthery, and many of the relevant topics have already been discussed ad infinitum in the academic world and are accepted as general fact. We're generally not dwelling on silly arguments like that.
One of the most common things I see is people assuming that something that came from LW is novel and "was discovered through research published there", and that's because the incentives over there heavily reward making a lot of noise and sounding plausible. Whereas arXiv papers, while there is some battle for popularity, are inherently more "boring" and formal.
The LW post, as I understand it, completely ignores existing work and just... doesn't cite things that were rigorously reviewed and prepared. How about this paper from five years ago, for example, part of a long string of research about loss basins and generalization? https://papers.nips.cc/paper_files/paper/2018/hash/be3087e74...
If someone earnestly tried to share the post you linked at a workshop at a conference, they would not be laughed out of the room; they would instead have to deal with the long, draining, muffling silence of walking to the back of the room without any applause when it was over. It's not going to fly with academics or academia-adjacent professionals.
This whole thing is not terribly complicated, either, I feel -- a little information theory, the basics, and some time studying and working on it, and someone is 50% of the way there. I feel frustrated that this kind of low-quality content is parasitically supplanting actual research with meaning and a well-documented history. This is flashy nonsense that goes nowhere, and while I hesitate to call it drivel, it is nigh-worthless. It barely passes muster for a college essay on the subject, if even that. If I were their professor, I would pull them aside to see if there is a more productive way for them to channel their interests in the Deep Learning space, and how we could better accomplish that.
I appreciate the thoughts. In such a fast moving field, it's difficult for the layman to navigate without a heavy math background. There's some more academic research I should have pointed to like https://arxiv.org/abs/2010.11560
> Part of the issue here is that this is a LessWrong post. There is some good in there, but much of that site is like a Flat Earth conspiracy theory for neural networks.
Indeed! It’s very frustrating that so many people here are such staunch defenders of LessWrong. Some/much of the behavior there is honestly concerning.
100% agreed. I think today was the first time I learned that the site was founded by Yudkowsky, which honestly explains quite a bit (polite 'lol' added here for lightheartedness).
To further clarify: the reason there is no mystical "eventual recovery from overfitting" is that overfitting is a stable bound that is approached. Attaching this false label to it implies a non-biphasic nature to neural network training, and adds false information that wasn't there before.
Thankfully, things are pretty stable in the over/underfitting regime. I feel sad when I see ML misinformation propagated on a forum that requires little experience but has high leverage, due to the rampant misuse of existing terms and the wholesale invention of an in-group language that has little contact with the mathematical foundations of what's happening behind the scenes. I've done this for 7-8 years at this point at a pretty deep level and have a strong pocket of expertise, so I'm not swinging at this one blindly.
As for memorization of individual examples -> generalization: I can't speak to what determines the switch, as that is (partially, to some degree) work I'm currently doing, and I have a personal rule not to share work in progress until it's completed (and then to be very open and explicit about it). My apologies on that front.
However, I can point you to a comment I made earlier in this comment section about the MDL and how it relates to the L2 norm. Obviously this is not the only thing that induces a phase change, but it is one of the more blatant ones, and it has been covered a little more publicly by different people.
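For reference, the textbook version of that MDL <-> L2 connection is just the negative log of a Gaussian coding prior on the weights. Here's a tiny sketch of that correspondence (this is the standard identity, not anything from the work in progress I mentioned; the prior width sigma is an assumed value):

    # Under a zero-mean Gaussian "coding" prior on the weights, the description
    # length of the weights (negative log prior, in nats) equals an L2 penalty
    # plus a constant, with the weight-decay coefficient playing the role of
    # 1 / (2 * sigma^2).
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)   # stand-in for a weight vector
    sigma = 2.0                 # assumed prior width

    # codelength = -sum_i log N(w_i; 0, sigma^2)
    codelength = 0.5 * np.sum(w**2) / sigma**2 + len(w) * np.log(sigma * np.sqrt(2 * np.pi))

    lam = 1.0 / (2 * sigma**2)  # equivalent L2 / weight-decay coefficient
    l2_penalty = lam * np.sum(w**2)
    const = len(w) * np.log(sigma * np.sqrt(2 * np.pi))

    print(np.isclose(codelength, l2_penalty + const))  # True: same objective up to a constant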