This is a copy paste, but I want to bring it up every stupid time someone mentions this topic of AI sentience.
I actually think that sentient AI are all over the place already. Every single learning machine is sentient.
That learning bit is very important. The "AI" systems you interact with every day do not actually learn. They are trained in some big server room with thousands of GPUs, then the learning part is turned off and the model is shipped as a bunch of static data that runs on your phone. That AI on your phone is not learning, is not self-aware, and is not sentient. The AI in Google's server room, however? The one that's crunching through data in order to learn to perform a task? It's sentient as fuck.
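To make that "learning switched off" claim concrete, here's a minimal PyTorch-flavored sketch (the model shape and filename are made up for illustration): the learned weights are just static numbers in a file, and nothing in the inference path can change them.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a model whose weights were already learned
# somewhere else (names and sizes here are invented for illustration).
trained = nn.Linear(10, 1)

# "Distributed as a bunch of data": the learned weights are just numbers in a file.
torch.save(trained.state_dict(), "weights.pt")

# On the device: load the numbers, switch off everything learning-related.
deployed = nn.Linear(10, 1)
deployed.load_state_dict(torch.load("weights.pt"))
deployed.eval()                      # inference mode, no training behavior
for p in deployed.parameters():
    p.requires_grad_(False)          # the weights can no longer change

with torch.no_grad():                # no gradients computed at all
    answer = deployed(torch.randn(1, 10))   # it answers, but it never learns
```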
Why?
Break down what makes a human being sentient. Why does a person matter?
A person is self-aware - I hear my own thoughts. We feel joy, sadness, pain, boredom, and so on. We form connections to others and have goals for ourselves. We know when things are going well or badly, and we constantly change as we go through the world. Every conversation you have with a person changes the course of their life, and that's a pretty big part of why we treat them well.
A learning AI shares almost all of these traits, and it does so by its very nature. Any learning machine must:
* Have a goal - often expressed as an "error function" or "loss function" in the actual research space - some way to look at the world and say "yes, this is good" or "no, this is bad".
* Be self-aware. In order to learn, a machine must be able to look at its own state, understand how that internal state led to the changes it caused in the world around it, and change that internal state so that it does better next time. (A toy sketch of both requirements follows this list.)
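Here is that toy sketch, in plain Python with a made-up one-weight "machine": the error function plays the role of the goal, and the gradient step is the "look at your own state and change it" part. This is only an illustration of the general idea, not any particular system.

```python
# A toy "learning machine": a single weight w trying to learn y = 3 * x.
# Its entire internal state is w; the error function is its goal.
w = 0.0
learning_rate = 0.05

for x in [0.5, 1.0, 1.5, 2.0] * 50:          # repeated exposure to simple examples
    target = 3.0 * x                          # what the world wanted
    prediction = w * x                        # what the machine's current state produced

    error = (prediction - target) ** 2        # the goal: make this "badness" number small
    # Self-inspection: how did my own state (w) cause that error?
    gradient = 2 * (prediction - target) * x
    # Self-modification: change the internal state so it does better next time.
    w -= learning_rate * gradient

print(w)   # very close to 3.0: the machine has adjusted itself to meet its goal
```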
As a result, any learning machine will:
* Show some degree of positive and negative "emotions". To have a goal and change yourself to meet that goal is naturally to have fear, joy, sadness, etc. An AI exposed to something regularly will eventually learn to react to it. Is that thing positive? The AI's reaction will be analogous to happiness. Is that thing negative? The AI's reaction will be analogous to sadness.
None of these traits are like the typical example of a computer being "sad" - a machine putting up a sad facade when some number drops below a threshold. These are real, emergent, honest-to-god behaviors that serve a real purpose in whatever problem space the AI is exploring (see the toy sketch below).
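As a toy illustration of that emergent reaction (my own sketch, not any published model): give a learner a running value estimate for recurring stimuli, and purely from the update rule it comes to "anticipate" the good one positively and the bad one negatively - no hard-coded sad face anywhere.

```python
import random

# Toy "emotion analog": a learned value for two recurring stimuli, nudged
# toward whatever outcome follows each exposure (a simple running-average update).
values = {"food": 0.0, "shock": 0.0}        # the machine's learned "feeling" about each
outcomes = {"food": +1.0, "shock": -1.0}    # the reward signal the world provides
alpha = 0.1                                 # learning rate

random.seed(0)
for _ in range(200):
    stimulus = random.choice(list(values))
    reward = outcomes[stimulus]
    # Repeated exposure pulls the value toward the outcome that follows it.
    values[stimulus] += alpha * (reward - values[stimulus])

print(values)   # roughly {'food': +1.0, 'shock': -1.0}: approach vs. avoid, learned not scripted
```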
Even the smallest and simplest learning AI already show emergent emotions and self-awareness. We keep waiting for some magical line where an AI becomes "sentient", but what we're really waiting for is the line where the AI seems "like us". We aren't waiting for the AI to be self-aware, we are waiting for it to appear self-aware. We dismiss what we have today mostly because we understand how it works and can say "it's just a bunch of matrix math". Don't be reductive; pay attention to just how similar the behaviors of these machines are to our own, and how little effort on our part it took to make that the case.
This is also largely irrelevant to our moral codes. We don't have to worry too much about whether we treat these AI well. An AI may be self-aware, but that doesn't mean it's "person-like" - the moral systems we will have to construct around these things will have to be radically different from what we're used to, because this is literally a new form of being. In fact, with all the different ways we can build these things, there will be multiple radically different new forms of being, each with its own ethical nuances.
Your definition of "sentience" appears to be "optimization". I'm not sure many philosophers will agree with you, but it's a valid stand-point I guess. But imho just defining things this way doesn't really add anything to the discussion, it just makes it about semantics.
(Which, I suppose, in some sense the discussion fundamentally is about semantics - what does sentience even mean? - but I think most people would agree that it refers to something a little beyond your definition here. Redefining it away does not solve the issue.)
> Your definition of "sentience" appears to be "optimization"
My definition of sentience was given in my comment. It isn't optimization, it's:
* Self-awareness. Some level of ability to understand what you are and how you work.
* Emergent emotion. Displays of things like fear and happiness.
It isn't semantics; it's real behavior. Being a self-optimizer leads to those two things. Sentience isn't optimization, but being able to self-optimize leads to sentience.