> I claim that a brain is purely an information processing machine
You're not exactly wrong, James, but you don't realise how much you're already assuming what you're trying to argue for.
Information isn't merely some object in pure mathematics. In that -log(p(e)), you have already decided what outcomes to see as the same. There's no "pure information", it's always information over the space of events we care to distinguish as different.
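To make that concrete, a toy sketch (fair coin flips, surprisal measured in bits; nothing deeper is claimed): the very same physical outcome carries a different amount of information depending on which distinctions we chose to track.

    from math import log2

    # Three fair coin flips; the raw physical outcome is identical in both cases.
    outcome = ("H", "T", "H")

    # Partition 1: we distinguish exact sequences (8 equally likely events).
    p_sequence = 1 / 8
    print(-log2(p_sequence))      # 3.0 bits

    # Partition 2: we only distinguish "how many heads" (P(2 heads) = 3/8).
    p_two_heads = 3 / 8
    print(-log2(p_two_heads))     # ~1.42 bits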
When you've decided what differences in the universe are worth caring about, what further insight is there to be gained from declaring brains to be information processors? Given a sufficiently odd (and specific) choice of what differences to care about, you could probably truthfully declare damn near anything to be an information processor.
Yes, this is an essential point, and the great circularity is not at all obvious. You can't start by talking about "inputs" and "outputs". At that stage you're already talking about "the mind" in a reality furnished to you by the mind, one that perhaps has little to do with what the mind does or with how to "carve reality at the joints". At level zero we have no insight at all into what Kant called "the thing in itself" -- our reasoning begins at pre-furnished objects.
It is not at all obvious that we have the preconditions necessary to discover "what minds are" or "how they work".
>At that stage you're already talking about "the mind" in a reality furnished to you by the mind
While we presumably need minds to pick a set of distinctions for our purposes, it is not at all required that a mind picks such a set for the sake of analyzing the behavior of a system that processes such sets. A function, for example, requires an input to provide its output, yet we can analyze a function by abstracting over the input space and analyzing the transformations on an arbitrary input. This picks out the function uniquely without relying on any specifically chosen inputs. The same can be said for an "information processing" function on the space of possible sets of distinctions.
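A rough sketch of what I mean, using a symbolic value as the stand-in for "an arbitrary input" (the function here is made up purely for illustration): applying the function to an unspecified symbol yields the transformation itself, with no particular input chosen.

    import sympy

    def f(x):
        # some concrete transformation we want to analyze
        return 3 * x + 1

    x = sympy.Symbol("x")                       # an arbitrary, unspecified input
    expr = f(x)                                 # the transformation over the whole input space
    print(expr)                                 # 3*x + 1
    print(sympy.solve(sympy.Eq(expr, 10), x))   # we can reason about f without picking inputs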
> The same can be said for an "information processing" function on the space of possible sets of distinctions.
No, I don't think it can. "The space of possible sets of distinctions" is one of those nastily large things I'm not even sure is a set. I don't think you will be able to rule out that the drafts in a storm or the swirls in a stream are carrying out a computation, if you try to do it over all possible sets of distinctions.
>is one of those nastily large things I'm not even sure is a set.
I don't see that it needs to be. The issue is the nature of the function, not the nature of the entire space of possible input to the function.
>I don't think you will be able to rule out that the drafts in a storm or the swirls in a stream are carrying out a computation, if you try to do it over all possible sets of distinctions.
You can if you consider the fact that computation has the counterfactual property that different inputs result in different meaningful outputs. Sure, given any arbitrary (complex) dynamic and target output, you can find an input that maps to that output. But given some restricted set of dynamics (e.g. what can be classified as a hurricane), it is safe to say there is no meaningful space of inputs/outputs for which the hurricane will provide a semantic mapping between the input/output spaces (not just a single point in that space, but the entire space). The entropy of the process is just too high to support such a mapping.
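A crude sketch of that counterfactual property (the "hurricane" below is just a random stand-in for a high-entropy dynamic, not a model of one): a half-adder supports a stable semantic mapping over its entire input space, while the noisy process doesn't track input distinctions at all.

    import random

    def half_adder(a, b):
        # every point of the input space maps to a distinct, predictable (sum, carry);
        # change the input and the output changes in the intended way
        return (a ^ b, a & b)

    def hurricane(a, b):
        # stand-in for a high-entropy dynamic: outputs don't stably track the inputs
        return (random.getrandbits(1), random.getrandbits(1))

    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    print([half_adder(a, b) for a, b in inputs])   # fixed, interpretable mapping
    print([hurricane(a, b) for a, b in inputs])    # no counterfactual structure to interpret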
Indeed. The author seems to think he has a knock-down argument with this:
> If you disagree, your counter-argument should probably start by outlining some proposed input or output of the brain that it would not be possible to encode numerically
Which, even speaking as a physicalist (though leaning agnostic), I think sounds quite unaware of all the hard thinking that has been done so far on the subject. (AKA the Hard Problem of consciousness.)
Here’s one to get started: the qualia of the color red.
It’s not established that just encoding information about the world with enough complexity would produce a mind that experiences qualia, and we think those are quite important as humans.
If you need a couple thought experiments along this line of reasoning, consider p-zombies, or alternatively, imagine if consciousness is actually some sort of physical field in one of the microscopic string-theory dimensions that we can’t access yet. So it’s fully objectively detectable and requires a specific physical structure to generate. The proposed system of philosophy of mind just ignores these possibilities and asserts that they don’t obtain, without evidence or justification. (Of course I make no claim that this is how things are; I’m just pointing out the valid potential-physics that are being ruled out from an armchair.)
I can't remember where I read it, but a convincing argument against the possibility of p-zombies is along the lines of "why would p-zombies talk about consciousness?".
When you or I talk about our internal conscious experience, we're examining our conscious experience and then talking about what we examined. A p-zombie would have to conjure up stories about conscious experience from thin air.
For a p-zombie to behave the same as an ordinary person, the p-zombie's words must be uncoupled from any internal experience (because there is none). But if words about consciousness can be produced independently of conscious experience, it would be an unlikely coincidence that in the non-p-zombie population the words just happen to perfectly reflect the lived internal experience, even though the two have been shown to be uncoupled, since the words persist even when the internal experience doesn't.
Since unlikely coincidences are unlikely, the more likely explanation is that p-zombies are impossible, because your outward behaviour is (at least partly) caused by your internal experience, and without that internal experience the outward behaviour could not be the same.
Right, as a physicalist I'd agree with the line of reasoning that an atom-for-atom identical configuration should have identical experiences, including subjective qualia (or, if we build an atom-for-atom exact copy, it would honestly report no qualia if they are produced by some dualist "soul" or other non-physicalist-explanation that is missing in our copy).
I think the p-zombie thought experiment is useful as an intuition pump in a few ways, though; one is to consider what would be the "most similar" thing to us that doesn't have qualia. Sure, it's not an atom-by-atom identical thing. But what if we do a sort of "Chinese room" scenario and train an AI to perfectly replicate a human mind, situated in a human body?
Currently, AIs are trained to predict/complete human utterances, and to do this task well requires a sophisticated theory of mind. Possibly, to do it perfectly requires (a simulation of) full consciousness, we’ll see. But if we train an AI to predict human utterances it will say things like “I feel pleasure when X” or “Y produces a subjective experience of red”, since those utterances are in the training set. And yet, this AI might not actually have qualia. (Indeed this is the default explanation for an AI's behaviors/utterances.)
We can also consider a sort of Ship of Theseus argument here too; if we modify a mind-body atom-by-atom, what's the shortest edit path to a mind that does not have qualia? Or, in the other direction, what's the edit path from the above qualia-less AI to one with qualia?
So I think Eliezer's argument is sound that an atom-for-atom copy ("neurological zombie" I believe is the strict term when disambiguating zombie types) would not report qualia. But the concept-space is still useful if we consider the adjacent "dishonest AI zombies" AKA "behavioral zombies" (noting that "neurological zombies" are but one formulation of the p-zombie concept used by philosophers over the years, albeit the best-known).
It seems clear to me that in humans, outward behavior is causally influenced by the experience of qualia. But there are potentially other (dishonest/deceptive) mind-constructions that could produce the same behavior without qualia. Probably not parsimoniously though!
>When you've decided what differences in the universe are worth caring about, what further insight is there to be gained from declaring brains to be information processors?
Yes, we pick our set of relevant differences. The useful thing about an information processor is that after it undergoes a physical transformation along its entropy gradient, the final state also picks out informative/semantically relevant states from the original space of distinctions. Information processors are informative; they have the ability to tell you something you didn't already know. That is, they manifest new informative states. The space of physical systems that have this property is vanishingly small. Most systems destroy information as they are pulled along their entropy gradients. Computers are perhaps the sole exception.
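A toy way to see the asymmetry (entropy over a uniform 16-state toy space standing in for "informative states"; only a sketch): a reversible update keeps every distinction in the original space, while a many-to-one update collapses them.

    from collections import Counter
    from math import log2

    def entropy(values):
        counts = Counter(values)
        n = len(values)
        return -sum(c / n * log2(c / n) for c in counts.values())

    states = list(range(16))                          # 4 bits of distinctions
    reversible = [(5 * s + 3) % 16 for s in states]   # a permutation: nothing is lost
    collapsing = [s // 4 for s in states]             # many-to-one: distinctions destroyed

    print(entropy(states), entropy(reversible), entropy(collapsing))   # 4.0 4.0 2.0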
A couple of issues. One is that there is a pernicious view propagated by certain philosophers that consciousness and self are the artifacts and epiphenomena of language. The irony is that on this view, a domesticated animal is more conscious than a wild one because it has been socialized, strongly implying consciousness is a measure of material subordination. Cui bono on that one, I would ask. The other is the idea that "mind" exists on a scalar measure of consciousness, and that by imitating human behavior, a machine can somehow appear on a scale of equivalence to humans. Oh, really? If we can be measured as equivalent to our imitations and simulations, one should wonder what that makes us.
A lot of these musings about artificial consciousness are a proxy for justifying a materialist ontology that mainly serves to decouple people from an ideal or exogenous morality, and it makes them view each other on a spectrum of less-than-human livestock. I suspect Kant is to blame, and while the materialist view of consciousness drives some interesting tech, it is also the foundation for some really obvious and foreseeable unforced evil - as we are seeing now with the rise of technocratic authoritarianism a mere ~35 years after it yielded its wall.
I'd also thank the author for walking right into this like that, as I think a modern conversation about the consequences of materialist ontologies is coming due.
A slight aside - rather than language being the 'core of consciousness', it's the only tool available for one conscious entity to communicate the phenomenon of consciousness to another conscious entity.
If consciousness is the self-awareness that one is using sensory information to construct an internal model of reality, of the world surrounding oneself, and that the internal model may be (or certainly is) an imperfect representation of that world, then language, symbolic concepts etc. are required to express that notion to others.
One other note - many if not most wild animals are highly socialized, just not in the sense that they'd obey instructions given by humans (but perhaps those given by a pack/herd leader, or a sentry on the lookout for predators, etc.)
I agree about Kant, though, and the failure of the rigid materialist viewpoint. Kant's materialism was based in an outmoded view of mathematical-physical rigor, which has been overturned everywhere, from non-Euclidean geometry to the decidability (Church-Turing) and incompleteness (Gödel) issues, to quantum indeterminacy and chaos/sensitive dependence - the rug was pulled out from under the Kantian materialists some time ago.
> it's the only tool available for one conscious entity to communicate the phenomenon of consciousness to another conscious entity.
While I appreciated your comment, on this item I disagree. Though language is absolutely a tool evolved by consciousnesses to act on others, it only passes the test of being sufficient or necessary to do so when we expand the definition of language to include all communication - which makes it a circular definition.
I routinely relate to other consciousnesses (advanced training of horses) without language. In fact, I do it because it is a way of relating without the filter of language and its key artifact, ego. It could be said we develop an individual language together, and that the logic I use is a language to communicate consistently with different horses, but whether the gestures I use are an invented language to affect responses, or a discovered communion to transmit the effect of my intentions, is a pretty huge question.
For example, does the act of sex exist without language? Since simple organisms do it, I would say yes it does; and since that is a demonstrable way to relate physically without language, it shows language is not a necessary condition of either consciousness or of relating to other consciousnesses. Essentially, I don't think one can say that language is a necessary condition of consciousness without a circular definition in which language incorporates everything that is evidence of consciousness.
What it means is that there is a substrate of being and consciousness below that of language, and that all things that are the artifacts of language are logically separate from those that are the artifacts of experience. You can use language to convince someone they are a dog, but they are not a dog, even if they think they are, because there is a reality beneath the artifacts of language that is both accessible and immutable. There is a real, and the consequences of apprehending that axiom are pretty profound.
I really like it when philosophers misattribute the current issues of society to some sort of "wrong choice of philosophy" situation.
Jesus Christ, the reason "technocratic authoritarianism" is in action is because tech is the latest niche on the nouveau-riche bandwagon.
Of course these would be correlated as people look up to people who recently became billionaires.
Everybody at the age of 40 could think, "If I had made all the right choices I could have been them."
On top of this, the whole tech industry is still operating in legally gray areas, as we haven't had enough time to update our legal codes to deal with these issues.
Here's your technocratic authoritarianism. You could say the same about the East India Company or Big Oil in the past.
You may be able to simulate a magnet perfectly, but that doesn't make magnetism. In the same way, you may be able to simulate a mind perfectly, but that doesn't make consciousness.
With consciousness, it's about experiencing sensations, what things feel like from the inside, not about any behaviour on the outside.
I think you've exposed the dis-analogy by making this argument.
A magnet is the cause/source/reason for/generator of/<insert verb> of a magnetic field. If you put a moving test charge in the field you can detect the magnet.
So you argue, no simulation of a magnet produces a magnetic field. I assume you mean that you can produce data which would characterize a field were the real world in the state that you simulated.
Then you say a simulated mind doesn't produce consciousness because consciousness is internal not external. Simulating a mind and interacting with it is just behavior checks.
This doesn't make sense then. With magnets you say, "there needs to be a physical field present." But with minds you say, "there needs to be undetectable internal experiences, just reporting and observing behavior isn't consciousness"
This analogy just begs the question. The issue of consciousness is whether it is purely a functional/dynamic state of a system or whether it is a basic atomic kind that cannot be further decomposed. For example, a simulation of carbon isn't carbon but a simulation of disorder is in fact disordered in some sense.
I don't think it is very brave to claim the same hypothesis Turing had in the '50s. And cognitive science as a whole basically studies this hypothesis.
> "Why do I think that? I claim that a brain is purely an information-processing machine. (If you disagree, your counter-argument should probably start by outlining some proposed input or output of the brain that it would not be possible to encode numerically)"
A 'model-constructing machine' has to be included as a submodule of the 'information-processing machine', doesn't it? The information coming in is all sensory - signals from optical nerves, auditory nerves, and tactile, olfactory and gustatory nerves. The information going out is signals to the eye muscles (where to look to get more important information), the vocal cords (to communicate), and all the other muscles - to move around, to use tools, and so on.
However this output is all based on the ability to construct a coherent internal model of the world around you (this is perhaps clearer if you consider the difficulties blind people have to overcome). This occurs at a low level in all animals (for example, hand-eye coordination, or a bird catching a fish). Consciousness could even be defined as one's own awareness that one is constructing a model using sensory information, and that it's possible we could be getting the model wrong (due to poor information inputs, i.e. reading unreliable historical narratives etc.)
Thus an artificial human-level consciousness would have to be aware that it was constructing this model of reality using information sources that might or might not be entirely reliable - and it would also certainly require some degree of autonomy, i.e. the ability to use that internal model to have agency in the world - to move about freely, speak, decide what to look at and - perhaps most alarmingly for some - decide freely what instructions to follow and which to ignore.
I agree with everything you wrote & don't think it is inconsistent with what I wrote. The "model-constructing machine" is itself purely an information-processing machine, so embedding it inside a Turing machine is no problem.
There are non-physical attributes ascribed to humans besides intelligence that may be interesting to consider. Can there be such a thing as artificial wisdom? Artificial love? Artificial enlightenment? Artificial kindness? My initial reaction is that it feels like a categorical error to say that these might be characteristics of a Turing machine. There are a couple of ways to understand these phrases. If I were to ascribe "artificial enlightenment" to a human, I would be saying that they are faking something, engaging in a kind of subterfuge. On the other hand, I might be able to imagine a digital creation that has been given a sense of self, is aware of its own existence, has an ability or propensity for self-preservation, and yet reasons its way to a state of selfless kindness and generosity. I find myself in a state of "quantum superposition" as to which side of this issue I fall on!
How would you measure in the |alive+dead> , |alive-dead> basis for the cat? What does that detector look like? Does the existence of a basis imply a constructible detector?
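The arithmetic of that measurement is easy to write down (a two-state toy sketch below; whether any physical detector could realize it for a macroscopic cat is exactly the open question):

    import numpy as np

    alive = np.array([1.0, 0.0])
    dead = np.array([0.0, 1.0])

    # the rotated basis in question
    plus = (alive + dead) / np.sqrt(2)
    minus = (alive - dead) / np.sqrt(2)

    state = alive                              # suppose the cat is simply alive
    p_plus = abs(np.dot(plus, state)) ** 2     # ~0.5
    p_minus = abs(np.dot(minus, state)) ** 2   # ~0.5
    print(p_plus, p_minus)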
As I understand it, consciousness is the ability to perceive our own thoughts.
This seems to us very important because thanks to this ability we are aware of thinking, of existing, of being a self. Ref: Descartes' cogito ergo sum: I think therefore I am.
We have no idea how the brain does that. It's mind-boggling (pun intended). Chalmers calls it "the hard problem of consciousness".
However for a computer this is a piece of cake. Activity Monitor does that super easily. And debuggers go very deep inside the 'thought' process. You can even run a debugger on itself without sweating too much.
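For instance (a trivial sketch, nothing deep): a program can walk its own call stack.

    import inspect

    def report_own_thoughts():
        # a process inspecting its own call stack: a crude Activity Monitor
        # pointed at itself
        for frame in inspect.stack():
            print(frame.function, "at line", frame.lineno)

    def think():
        report_own_thoughts()

    think()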
To fly we definitely don't need to imitate birds. For consciousness we don't need to imitate human brains.
Maybe you disagree about the definition of consciousness and want to make it something akin to a soul. However it's hard to deny that we have no idea how the brain's equivalent of Activity Monitor is 'implemented'.
A bit of a basic analysis -- minds are brains and brains are just the go-between for some inputs and outputs, therefore they are just some sort of mapping, therefore Turing, therefore brains are computers.
Consciousness is the obvious problem with that analysis, since there is no good reason why any such mechanism should give rise to self-awareness. Roger Penrose posits that's perhaps because it is a fundamental property of the Universe given expression by the arrangement of the brain. However, if that is the case the brain may not be "yet another substrate" for a mind -- it may be key -- and mind may not be a mapping between inputs and outputs -- it may be elementary.
Another poster put it quite brilliantly: "perhaps creating a mind in software is like trying to create a magnet in software".
I believe Antonio Damasio, a neuroscientist, would have a different opinion. To everyone interested out there, I suggest reading The Strange Order of Things.
It's about the development of emotions and feelings, culture and ideology, technology and whatever else makes us human, through the lens of evolutionary biology, namely through the development of our neurological systems.
https://www.goodreads.com/book/show/32335976-the-strange-ord...
I more or less agree with the approach advocated (not that anyone asked). Reasoning out how to produce the fundamental state of awareness (“the lights are on”) itself is the tricky bit. All the phenomena that occur within that are “easy”. Abstracting away all of the parts that “aren’t relevant” helps to distill to the core idea, which is that we ought to be able to determine how to produce awareness via software. Unless, of course, awareness is not a function of information processing (ex: “souls” are involved), or some bit of the processing machinery was relevant and needs to be incorporated. Meh.
Lately I've arrived at the hypothesis that our minds are nothing more than an illusion. Of course, a very useful one.
Why? Because I can draw an analogy to a video game character with some shocking success.
DISCLAIMER: I just like to overthink this for fun, I am no scientist and I am not affirming facts, just throwing some idea around at the end of my workday. :)
If you play games, think of a first-person shooter (FPS). When you embody a character in those games, it has a certain "amount of life" that can be drained or refilled. If you take too much damage you die.
If another character is shooting at me, most modern games will give several hints that I should pay attention to, take action and save myself: the character's vision can start to fade, the sound starts getting muffled, and you get visual hints about where the shots might be coming from. If you hide, after some time your "life bar" will start to recover. You normally get a positive visual hint that tells you so.
Same for NPCs: if you shoot at them, their "life bar" will decrease and, if the game has a good ~AI~, the NPC will try to save itself: hide, flank you, whatever.
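In code, that NPC "mind" is something like the following (a deliberately minimal sketch, just to make the analogy concrete):

    class NPC:
        def __init__(self):
            self.health = 100

        def take_hit(self, damage):
            self.health -= damage            # the "pain" signal is just a number
            if self.health < 30:
                return "hide and recover"    # flee behaviour past a threshold
            return "keep fighting"

        def recover(self):
            self.health = min(100, self.health + 10)   # life bar refills while hidden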
Now think about our human brain and how we have learned in recent years how the dopamine neurotransmitter works: it modulates both our "pleasure" and "pain" sensations, among many other things. If we do things that are very bad for us, we feel "pain" and we learn we shouldn't do more of those. The opposite is also true: if we do things that are nice for us, it makes us learn to do more of that.
Why wouldn't those negative or positive hints we see on our screens when playing such shooter games be conceptually the same as what we experience as human beings? When I play such games I can see superb image quality and simulated human behavior, and all of that is running on hardware right in front of my eyes with some shiny RGBs.
Why would the pain I feel "in reality" be any different from the "pain" experienced by those virtual characters? Both have the purpose of pushing the organism, virtual or real, to perform certain actions.
Perhaps we will discover soon that our minds are nothing but a well-constructed illusion created by our brains while we're awake. Which is nothing short of amazing, by the way.
1) It ignores the existence of critical thresholds. I can keep cooling water by 1 degree and it will keep being water until suddenly it is not. We have no reason to believe consciousness does or doesn't work the same way.
2) Pain and pleasure are both motivators for an agent to react to. There is nothing to differentiate the two in an NPC's code. Is it pain they are running away from, or pleasure they are running toward? It doesn't make sense to speculate, because really it's just some number causing some predetermined animation to run. A grain of sand is more computationally complex than a video game NPC.
Subjective experience is a great platform for evolution to build survival tools on: pleasure, pain, fear, etc. Any of these is possible:
1) Evolution stumbled upon subjectivity as an architectural abnormality, and built tools on top of it.
2) Evolution stumbled on subjectivity as a critical threshold of processing power, and built tools on top of it.
3) Evolution built tools and subjectivity is a byproduct.
> You would have no idea about the unimaginable tower of software complexity sitting underneath Firefox
A browser is the most complex thing sitting on a computer. That and the OS. In fact you can even have an OS in the browser[0] if you want to get meta and go full Jean Baudrillard.
Consciousness is inherently temporal. Our "AIs" should not be prompt-based, but should have an event loop which is always "thinking" about the things that are encoded in it, the inputs it's currently receiving, and the previous inputs it has received so far.
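Something like this, as a bare-bones sketch (all the names here are made up):

    import queue
    import time

    inputs = queue.Queue()   # sensory events arrive here from elsewhere

    def run_mind(step):
        memory = []
        while True:                               # always on, not prompt-driven
            try:
                event = inputs.get(timeout=0.1)   # new input, if any
                memory.append(event)
            except queue.Empty:
                event = None                      # nothing new; keep thinking anyway
            step(event, memory)                   # update internal state, i.e. "think"
            time.sleep(0.01)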
Waves are characterized as spatially propagating phenomena. Maybe you mean oscillation? Sure, time-varying values exist. What work is the word "inherently" doing? And how does that relate to consciousness?
The concept of propagation depends on the concept of duration. The point is to demonstrate an entity that is constituted by duration. The relation to consciousness is that some take it to be also inherently constituted by duration. In other words, consciousness is inherently a process/dynamic. E.g. if you suspend the movement of molecules in a brain such that it contains no dynamics, is the brain undergoing any conscious experience during this period of suspended animation? Some think the answer is no.
It's the opposite. We'll create software that'll run in our minds.
Install a plugin to overlay data in your vision, run shell commands by just thinking about them, maybe even run a mini neural net that's like a fully private assistant etc.
Really good. I'm a computer scientist who takes a passing interest in the subject and usually I'm just disappointed by the computer science errors and the wild claims they lead to. This, however, cuts to the heart of the issue: how do you measure consciousness? It's the same question Turing tried to answer with the Turing test, but now that language models are starting to pass it, despite being way below a dog according to our intuition, I think it's a question that needs asking again, and not an easy one.
There is one mistake in there, more of an oversight really, and one that's tangential to the main point, but I think it's important to understanding the issue in general. To say the brain computes a function is meaningless. When I say my computer is computing a function, let's say 1+1=2, implicit in that is the idea that high voltage on a wire means 1 and low voltage means 0, and we can arrange these into binary to make a 2. Computer science is pure mathematics. It's only capable of symbolic manipulation. It can do 1 and 0, but not voltage or neuron. Any relation from the second to the first must be defined, and this is true whenever mathematics is brought into practice. For most of science and engineering this means units of measurement. We use bits of information, but it's not at all clear how that should be applied to a brain. The first question is: what properties of a brain matter for the calculation we care about? As one of the other commenters said, "There's no "pure information", it's always information over the space of events we care to distinguish as different."
Worse, any relation is arbitrary. Computers are all very neatly arranged, but they don't need to be. I could watch any physical process and map each of its states to the states of any Turing machine and call it a computer performing a computation of my choice, and from a theoretical perspective there's nothing to distinguish that from me declaring my computer is computing 1+1=2. Moreover, humans have a pressing need to keep things practical, ordered and understandable for other humans. Evolution needs only practical.
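To spell out how arbitrary the relation can be (a toy example with invented names): take any observed sequence of physical states and simply stipulate what each one "means".

    # Any observed sequence of physical states...
    physical_states = ["swirl_a", "swirl_b", "swirl_c", "swirl_d"]

    # ...can be declared, by fiat, to be the trace of computing 1 + 1 = 2.
    interpretation = {
        "swirl_a": "read 1",
        "swirl_b": "read +",
        "swirl_c": "read 1",
        "swirl_d": "write 2",
    }

    for state in physical_states:
        print(state, "->", interpretation[state])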
This actually comes up in practical computing more than you might expect. Imagine my 1+1=2 is part of a program keeping track of flea populations. It's very hard to track fleas individually, so we round to the nearest thousand. Did I calculate 1+1=2, or was it ~1000+~1000=~2000? It's both really: there's a symbolic level where it's just 1+1 and a practical level where it's a thousand fleas. To further confuse things: if I got my C compiler out and wrote some natural code that included a function that, given a person, returns their age (as references in both cases, for the pedants), and then wrote a different function that, given an integer, adds 4 to it, there's a very real chance those two functions would produce identical machine code.
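The same effect is easy to show in Python instead of C (this pairing of functions is just my analogue of the example above): two functions that mean very different things to us compile to byte-for-byte identical bytecode.

    import dis

    def age_in_four_years(age_now):
        return age_now + 4      # "how old will this person be in four years?"

    def add_four(n):
        return n + 4            # "add four to an integer"

    print(age_in_four_years.__code__.co_code == add_four.__code__.co_code)   # True
    dis.dis(add_four)           # the meaning isn't in the bytecode; it's in our interpretation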
The computational stack leading from the metal to Firefox is presented as big question marks, but it's not. It's very well known; we literally made it up ourselves. Humanity knows it as well as Tolkien knew Middle-earth. Not only could we describe its makeup and workings at every level of detail, we prescribed them in meticulous detail. The same can't be said of brains.
I've heard a lot of this referred to as the triviality argument. That is, it's trivial to say something is a computer doing a computation. But then I find a lot of the conclusions drawn from this to be off. It's presented as if it undermines the idea of the brain as a computer, when on the contrary it confirms it. The computer science behind explaining this gets a bit heavier, but I think even without it you can smell something is off here. It's the same as with Zeno's paradox: even if you know nothing about math, you know the tortoise moves. Brains are not merely a trivial computer. They look like some maybe very alien but still genuinely structured and practical information processing systems, and these workings are clearly strongly related to our behaviors and consciousness.
Going into the heavier comp sci: brains are definitely not Turing machines. Neither is my computer. Turing machines can't exist. The part where it says "infinite tape" is a hard requirement. Any finite limit implies that a system has no more computing power than a finite state automaton. Fun fact: such systems are incapable of doing basic arithmetic. If you're having trouble believing that your computer can't do arithmetic, try adding something to 2^n, where n is the number of bits of memory you have. This may sound bad, but it's great. They're still powerful enough to do whatever they like within their finite bounds, and things like the halting problem or Gödel's incompleteness theorem? Not applicable. Given some assumptions, like that the universe is finite and consciousness can be represented symbolically, it becomes very difficult to argue that a relation between the two couldn't be represented computationally.
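Concretely (a toy model of an n-bit register, nothing more):

    def add_fixed_width(a, b, n_bits=8):
        # a machine with finite memory only ever does arithmetic modulo 2**n_bits
        return (a + b) % (2 ** n_bits)

    print(add_fixed_width(255, 1))     # 0, not 256: the register wraps around
    print(add_fixed_width(200, 100))   # 44, not 300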
But it's not enough to know that such a mapping must exist or that it must be representable with simple computation. The idea that there are a vast number of possible mappings is still somewhat relevant, but no more than saying to a physicist "I could invent a world where gravity works differently" or even "There are many different models for reality" and like the physicists it's going to take experiments to sort out which one is the correct one. The key to any experiment is good measurement and so we come inexorably back to the heart of the issue. How do we measure this?