If this is a reply to me, I think you missed the point I'm making here. I don't care if we can prove other people are sentient or not.
My point is that it may well not matter whether a thing is sentient if a well-trained algorithm can achieve results as good as, or better than, something we believe is sentient.
We seem to operate on the assumption that sentience is "better," but I'm not sure that's something we can demonstrate anyway.
At some point, given sufficient training data, it's entirely possible that a model which "doesn't know what it's saying" and is "stringing words together using an expansive statistical model" will outperform a human at the vast, vast majority of tasks we need done. An AI that is better at 95% of the work done today but struggles with the 5% that perhaps truly does require "sentience" is still a terrifying new reality.
In fact, it's approximately how humans use animals today. We're really great at a lot of things, but dogs can certainly smell better than we can. Turns out, we don't need to have the best nose on the planet to be the dominant species here.