
If we are taking Wikipedia as ground truth, the next line is:

An overfitted model is a statistical model that contains more parameters than can be justified by the data.

Another definition from https://www.ibm.com/cloud/learn/overfitting#:~:text=Overfitt...:

Overfitting is a concept in data science, which occurs when a statistical model fits exactly against its training data.

We can argue over the precise definition of overfitting, but fitting a high-capacity model exactly to the training data is a procedural question, and I would argue it falls under the overfitting umbrella.
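
A rough sketch of what I mean by "fits exactly against its training data" (my own toy example, using scikit-learn; the unrestricted decision tree is just a stand-in for any high-capacity model):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-1, 1, size=(30, 1))
    y_train = np.sin(3 * X_train[:, 0]) + rng.normal(0, 0.3, 30)
    X_test = rng.uniform(-1, 1, size=(200, 1))
    y_test = np.sin(3 * X_test[:, 0]) + rng.normal(0, 0.3, 200)

    # Unrestricted depth: the tree grows more leaves than 30 noisy points
    # justify, so it reproduces the training data exactly.
    tree = DecisionTreeRegressor(max_depth=None).fit(X_train, y_train)
    print("train R^2:", tree.score(X_train, y_train))  # ~1.0 (exact fit)
    print("test  R^2:", tree.score(X_test, y_test))    # noticeably lower

The near-perfect training score sitting next to the weaker held-out score is the symptom both definitions above are pointing at.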



I'm going to agree this isn't overfitting. And it's dangerous to imply overfitting is ever a good thing.

If your objective is to ID one type of Batman, you just have a very specific objective function, which could also suffer from overfitting (i.e., being unable to perform well against out-of-sample Batman of the same type).

The reason I push back is that many healthcare models are micro-models by your definition: they may only seek to perform well on images from one machine at a single provider location on a single outcome, but they also have overfitting issues from time to time, since the training data isn't diverse enough for the hyper-specific objective.


It is important to note that these micro-models are only supposed to be used in the annotation process. During annotation there is a separate QA process with some form of human supervision. Micro-models are NOT supposed to be used in production environments.

100% agree on the healthcare front, which actually perfectly underscores the point here. These models are overfit to one specific modality but often used for generic purposes. One reason it is important to define micro-models is to call them out when they are deployed in a live production environment, which I agree is very dangerous. Many healthcare models are truly not ready for live diagnostic settings. On the other hand, these same models often do perform well at assisting the actual annotation of new data when applied to the right domain and with appropriate human supervision. This is a perfect encapsulation of the distinction we are trying to make.


You're missing my point. You can make a good micro-model that is very, very targeted and does not overfit. Sure, it won't generalize outside your target, but you know you can use it at clinic X with machine Y to predict Z.

This is why I'm saying you aren't describing overfitting, but rather a very, very specific objective function.
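
A rough sketch of the distinction (the data files are hypothetical placeholders, scikit-learn purely for illustration): a narrow objective is fine as long as the model still generalizes to held-out samples from that same narrow domain.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical features/labels from one machine at one clinic
    # (file names are placeholders, not a real dataset).
    X = np.load("clinic_x_machine_y_features.npy")
    y = np.load("clinic_x_machine_y_labels.npy")

    # The target is narrow, but overfitting is still measured the usual way:
    # performance on samples the model never saw, drawn from that same
    # narrow domain.
    X_train, X_held, y_train, y_held = train_test_split(
        X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy at clinic X / machine Y:",
          clf.score(X_held, y_held))

If that held-out score collapses, the micro-model is overfit; if it holds up, it's just narrowly scoped.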



