All these terms, “sensory input, emotional responses, social interactions, and the unique context of each moment,” are words we’ve developed and yet have no full understanding of. In any philosophy paper they’d be challenged in a second.
> Sensory input refers to the information received by the body's senses, like sight, hearing, touch, taste, and smell, through sensory organs like the eyes, ears, skin, tongue, and nose, which is then transmitted to the brain as electrical signals for processing and interpretation; essentially, it's anything you perceive using your senses.
Even if we are talking about the best cameras in the world, they pale in comparison to our eyes. To say nothing of touch, taste, and smell. Advances here look to be far-off.
At the end of the day, a brain also processes information completely differently than an LLM does. Anyone who says otherwise is medically uneducated and thinks laughably little of themselves.
Let's say we have an AI which, through peripheral devices, can attain human-level sensory processing. Is it human yet? Can it understand mortality? How about morality? Does it experience pain? Is that something we want to build?
You’re looking at the world from a very anthropocentric pov. Sight, sound, touch, smell, taste are all human senses, but they’re all just one thing: ingesting information. An AI can ingest information… that’s just a fact… so… what are we talking about here?
Also, we have absolutely no idea how the brain works. Current AI was developed off of modern theories on how the brain works. Saying that AI doesn’t represent how the brain works is ridiculous because the whole story of AI was that we developed a theory of how the brain worked, modeled it through tech, and it worked way better than we thought it would. Shit there was a whole article here about how AI resembles Kant’s theory of the mind. Like I just don’t know how you can be so confident here.
While there are some similarities, yes, our brains definitely don't have anything akin to backpropagation, which is the critical mechanism for how current AI models learn.
Hinton has some research on a forward-forward learning paradigm [1], which might be closer to how our brains learn (but the artificial implementations are not great yet). He also posits that the purpose of human dreams may be to generate negative data for such a contrastive forward-forward learning mechanism.
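Very roughly, the idea looks something like this (a minimal sketch of the forward-forward concept, not Hinton's actual code; the layer sizes, threshold, learning rate, and random stand-in data are all illustrative placeholders):

```python
# Sketch: each layer is trained *locally* to give high "goodness" (sum of
# squared activations) on positive (real) data and low goodness on negative
# (corrupted/"dreamed") data. No gradient flows between layers, unlike backprop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so the next layer can't trivially read off this layer's goodness.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-6)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)   # goodness on real data
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)   # goodness on negative data
        # Push positive goodness above the threshold and negative goodness below it.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()        # gradient stays local to this layer
        self.opt.step()
        # Detach outputs so the next layer also learns purely locally.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Usage: feed positive and negative data through the stack, layer by layer.
layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = torch.rand(32, 784)   # stand-in for real samples
x_neg = torch.rand(32, 784)   # stand-in for generated negative samples
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```

The point of the sketch is just the contrast with backprop: every layer has its own objective and optimizer, and nothing resembling a global backward pass through the network is required.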
Kant didn't say that the senses were ingesting information. Rather, he said the categories of the mind made sense of the sensory manifold. The categories give structure to the chaos; they give rise to the phenomenal world we experience. They are not the thing-in-itself, whatever the world is.
You're assuming the sensory organs passively take in information instead of creating it from all the noise in the world, that the world feeds us information the way we feed generative models. But humans have already taken the trouble to create the information in language, images, and video.
> You’re looking at the world from a very anthropocentric pov. Sight, sound, touch, smell, taste are all human senses, but they’re all just one thing: ingesting information. An AI can ingest information… that’s just a fact… so… what are we talking about here?
I think my comment was misunderstood, so let me try to break it down a little. Let's remember that this was in the context of "there's nothing about AI in general that limits it to learning only from prior data":
- Senses are used to ingest information, and processors process that information into usable data. The density of the information ingested, the speed at which it is processed, and the nature of how that processing occurs are vastly different. To further break it down: I'm stating that we don't yet have sensors anywhere near as capable as humans', and that even if we did, without a human brain to process the data, you will receive a different output. Again, see photography for more on this. And we have not even begun to scratch the surface of touch or taste. I understand the touch issue is one small part of why general-purpose personal robots are not yet viable. I argue that we are a LONG way off from computers being able to interpret the world in a similar fashion to humans.
I believe our sensory capacity is a large (but not complete) part of what it means to be a living animal.
- Emotion still appears to be exclusive to living things, not machines. It's unclear what would be necessary for this to change. This is a limiting factor on computers being able to understand the world, "social interactions, and the unique context of each moment," which was the claim in question.
- As far as I'm aware, no LLM today exhibits true reasoning or morality. While LLMs are certainly impressive in their ability to recall information from compressed data, and even generate streams of text that look like reasoning, they are still simply decompressing stored data. Morality today is implemented as content filters and fine-tuning of this statistical model.
> Also, we have absolutely no idea how the brain works. Current AI was developed off of modern theories on how the brain works. Saying that AI doesn’t represent how the brain works is ridiculous because the whole story of AI was that we developed a theory of how the brain worked, modeled it through tech, and it worked way better than we thought it would.
It makes me really sad when people say this, because it's incredibly disingenuous. There are certainly more questions than answers when it comes to the brain, but we _do_ understand quite a lot. It's not surprising to me that people who are focused on technology and AI would anthropomorphize machines, and then claim that (because they aren't aware of how the brain works) "we don't know how the brain works." I had similar beliefs, as a software engineer. But, after watching my partner attend medical school and residency, it's become clear that my own knowledge is far from the sum of humanity's knowledge in this area.
You're absolutely right that LLMs borrow concepts from neuroscience, but they are still a VERY long way from "recreating the brain." I genuinely find it sad that people think they are no smarter / better than an LLM. Keep in mind no LLM has even passed a Turing test yet. (No, I'm not talking about the Facebook comments section - I'm talking about a test where someone knowingly communicates with a machine and a human through text, and through targeted questions and analysis of the answers, is unable to accurately determine which is which.)
Here's some more food for thought: Can LLMs sleep? Can they dream? What does that look like? Can they form opinions? Can they form meaningful, fulfilling (to themselves) relationships?
> Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
So there's a lot here that I disagree with. You start out by pointing out how much more information humans ingest, but there's no reason why the amount of information ingested leads to a fundamentally different organism. In the exact same way, I eat a lot more food than an amoeba, but we're both still living organisms. Scale doesn't make a difference.
The idea that human emotions are somehow distinct from thoughts needs to be proven to me. IMO emotions are just thoughts that happen too quickly for language. This discretization of the human experience is unnecessary, and like I said before, it would be immediately challenged in a philosophy setting. So would your claim that humans exhibit some kind of reasoning or morality that's distinct and unique. Modern philosophy is quite clear that this is bullshit. I was just reading Nietzsche today and I can feel him rolling over in his grave right now.
Also, the base of machine learning centers on simulating emotions: if the AI does something good, it's rewarded. If it does something bad, it's punished. We created the whole algorithm by simulating Freud's pleasure principle, and who are we to say that the simulation is any different from the real thing?
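To be concrete, the reward/punishment loop I'm describing is reinforcement learning. Here's a toy sketch (tabular Q-learning; the tiny 5-state environment and the reward values are made up purely for illustration):

```python
# Toy sketch of a reward/punishment learning loop (tabular Q-learning).
# The "environment" below is hypothetical: moving forward from the last
# state earns a reward, everything else carries a small penalty.
import random

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1          # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    if state == n_states - 1 and action == 1:
        return 0, 1.0                           # reward ("pleasure"), restart
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return next_state, -0.1                     # small penalty ("pain")

state = 0
for _ in range(1000):
    if random.random() < epsilon:
        action = random.randrange(n_actions)    # explore
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])  # exploit
    next_state, reward = step(state, action)
    # The reward signal nudges the value estimate up or down -- that's the
    # entire reward/punishment mechanism at work.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state
```

Whether you want to call that "simulating emotions" is exactly the philosophical question at issue, but the mechanism itself really is just a scalar signal pushing numbers up and down.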
> It's not surprising to me that people who are focused on technology and AI would anthropomorphize machines, and then claim that (because they aren't aware of how the brain works) "we don't know how the brain works." I had similar beliefs, as a software engineer.
Well, I'm actually much more knowledgeable about the humanities than I am about tech, and IMO the tech world is at the forefront of making our abstract philosophical understanding of the brain concrete. Neural networks and LLMs are the most successful method of creating cognition. I'm sure we'll find that there's a lot more to do, but this could very well be the fundamental algorithm of the brain, and I don't see any reason to discount that by saying what you've been saying in this comment thread.
I don't remember stating that having 5 senses was necessary to be human. This reads like a very uncharitable dismissal of what's really a very interesting topic.
Helen Keller, despite lacking sight and hearing, was still able to perceive the world through sensory input, including taste, touch, and smell - and although she could not hear, she could still feel warmth and the touch of another human, and experienced emotions. (She may not be the best example for your argument, either, as she was born with sight and hearing.)
You asked 'at what point will it be considered human with added inputs', so I asked the reverse question. It is no more or less charitable to ask 'when does one stop being human with fewer inputs' than to ask 'when does one become human as inputs are added'.
I see - that's not quite what I was asking. Rather, I asked if the parent believed AI would get a physical body, with all that implies.
> Do you think AI will soon get a physical body, and experience "sensory input, emotional responses, social interactions, and the unique context of each moment"?
In fact, my point was that it's not clear that all of these features are simply "added inputs." (Hence my questions around emotions, pain, mortality, and morality.)
Absolutely! It certainly depends on the metrics you care about. If you want to freeze fast motion, a camera is your best bet. But if you want to see high-contrast scenes (e.g. looking out a window from a dark room), your eyes have a HUGE leg up on cameras. For example, high-end cameras tend to offer maybe 15 stops of dynamic range, while human eyes can manage up to 24 stops (a "stop" is a doubling or halving of light values).
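To put those stop counts in perspective, here's a back-of-the-envelope comparison using the figures above (each stop doubles the representable brightness ratio):

```python
# Rough contrast-ratio comparison implied by the stop counts above.
camera_stops, eye_stops = 15, 24
print(f"camera: ~{2**camera_stops:,}:1")   # camera: ~32,768:1
print(f"eye:    ~{2**eye_stops:,}:1")      # eye:    ~16,777,216:1
```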
Additionally, the human eye has a resolution of approximately 576 MP. This is one reason why we can often see details in the distance that disappear in a photo.
Finally, while it's arguably not "better," the brain processes images very differently. This is another reason why the image you take often looks "worse" than what you saw in person, or why you can't get the colors to look "quite right", etc. If you get into photography, you start to "see" the things your eye was previously rewriting for you (like a green color cast on skin when you're in the forest) - but it's not the natural way your brain processes information.
You can also look up estimates of the processing rate of the human sensory system - it's quite impressive.
Philosophy of mind papers use that kind of language all the time. It's agreed that humans have sensory input and social interaction; those are facts of biology, psychology, and sociology. It's also agreed that human bodies and brains are different in significant ways from modern computers and robots.