> The filtering and interpolation mechanisms of the brain are error-prone, and the brain often does not know when it is wrong. You may see something and never know it wasn't real, or fail to see something and never know it was there.
But it's hard to say that this is suboptimal -- that is, that a digital version could necessarily do better. We are biased by our priors, just as a digital pattern matcher is biased by its training set. And in low light, we try to find patterns we recognize in the noise, reconstruct the missing parts, etc., just as a computer would.
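To make the "biased by priors" point concrete, here is a minimal sketch (not anyone's actual perceptual model, just a toy Bayesian estimator with made-up numbers): when observations are noisy, the estimate gets pulled toward whatever the prior expects, and the system has no internal signal telling it the prior was misleading.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "prior" learned from past experience / training data: the estimator
# expects values near 0. All numbers here are hypothetical.
prior_mean, prior_var = 0.0, 1.0

# The true signal happens to sit far from what the prior expects,
# and the observations are very noisy (think: low light).
true_signal = 3.0
noise_var = 4.0
observations = true_signal + rng.normal(0.0, np.sqrt(noise_var), size=5)

# Posterior mean under a Gaussian prior: a precision-weighted average of
# the prior mean and the data. The noisier the data, the harder the
# estimate is pulled toward the prior -- a systematic, invisible bias.
n = len(observations)
posterior_mean = (prior_mean / prior_var + observations.sum() / noise_var) / (
    1.0 / prior_var + n / noise_var
)

print(f"true signal       : {true_signal:.2f}")
print(f"raw data average  : {observations.mean():.2f}")
print(f"prior-biased guess: {posterior_mean:.2f}")  # shrunk toward 0
```

The same arithmetic applies whether the prior came from a lifetime of seeing things in daylight or from a training set: either way, the reconstruction is confidently wrong in the direction of past experience.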
A digital system might eventually have better resolution and outperform us on some measure of precision, but I suspect that most attempts to improve performance by introducing "supersampling", pattern matching, and other forms of inference will produce errors much like the brain's. Perhaps different in character due to differences in the training set and algorithms, but of a similar nature.