
This ensures that the model is not overtrained.

They also showed that when they fed in simulated sparse measurements derived from real, complete images of ordinary objects, they got back fuzzy versions of those images. [1] So if you put in a sparsely sampled elephant (if, for instance, there were one at the center of the galaxy), you'd get an image of the elephant out, not this black hole.
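That validation procedure can be sketched in a few lines. This is a toy stand-in, not the actual EHT pipeline: take a known "full" image, keep only a sparse random subset of pixels (simulating sparse coverage), reconstruct by repeatedly averaging neighbours while pinning the measured pixels, and check that the result is a fuzzy version of the original. All names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 32x32 "full image": a bright disc on a dark background.
y, x = np.mgrid[:32, :32]
full = ((x - 16.0) ** 2 + (y - 16.0) ** 2 < 64).astype(float)

# Sparse sampling: observe only ~10% of the pixels.
mask = rng.random(full.shape) < 0.10

# Crude reconstruction: start unknown pixels at gray, then repeatedly
# replace them with the 3x3 neighbourhood mean, keeping measured pixels
# fixed. This diffuses the sparse measurements into the gaps.
recon = np.where(mask, full, 0.5)
for _ in range(200):
    pad = np.pad(recon, 1, mode="edge")
    smoothed = sum(pad[i:i + 32, j:j + 32]
                   for i in range(3) for j in range(3)) / 9.0
    recon = np.where(mask, full, smoothed)

# recon is now a blurry but recognizable version of the disc:
# the measured pixels are exact, and the filled-in pixels are close.
```

The point of the check is the same as in the video: if you feed in sparse samples of an elephant, a reconstruction procedure like this hands you back a fuzzy elephant, not whatever object it was tuned on.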

To complete the sketch-artist analogy, imagine that the suspect each artist is drawing is a stereotypical American. The description given to the artists doesn't say that; it just describes how the person looks. One of the three sketch artists is American, and the others are Chinese and Ethiopian.

If the American draws a stereotypical American, how can you be sure the drawing is accurate, and not just what he assumed the person would look like because everyone he has ever seen looks like that?

You look at what the other two draw. If they both draw the same stereotypical American, even though they have no knowledge of what a stereotypical American looks like, you can be fairly confident they arrived at it from the description provided to them: the actual data.

They did still likely use some of their knowledge of what humans in general look like, though. This is analogous to how the model uses its training on what a generic image looks like. For instance, several sparse pixels of the same value are likely to have pixels of that same value between them. The model puts regularities like this together and produces a picture of what we think a black hole looks like, even though it has never seen a black hole before.
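The specific prior mentioned above, that unmeasured pixels lying between two same-valued measurements probably share that value, is exactly what linear interpolation encodes. A minimal illustration (the names and numbers are made up for this example, not taken from the EHT code):

```python
import numpy as np

# One row of an image with only two sparse measurements, both 0.8.
row = np.full(10, np.nan)
row[2] = 0.8
row[7] = 0.8

# Fill the gaps by interpolating between the measured positions.
measured = ~np.isnan(row)
filled = np.interp(np.arange(10), np.flatnonzero(measured), row[measured])

# The pixels between index 2 and 7 come out as 0.8, matching the
# generic-image prior: same value at both ends, same value in between.
```

A learned model's prior is of course far richer than this, but the flavor is the same: generic structural regularities fill in what the sparse data leaves out.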

[1] https://youtu.be/BIvezCVcsYs?t=685
