
Indeed. The author seems to think he has a knock-down argument with this:

> If you disagree, your counter-argument should probably start by outlining some proposed input or output of the brain that it would not be possible to encode numerically

Which, even speaking as a physicalist (though leaning agnostic), I think sounds quite unaware of all the hard thinking that has been done so far on the subject (AKA the Hard Problem of consciousness).

Here’s one to get started: the qualia of the color red.

It’s not established that merely encoding information about the world with enough complexity would produce a mind that experiences qualia, and as humans we think those are quite important.

If you need a couple of thought experiments along this line of reasoning, consider p-zombies, or alternatively, imagine that consciousness is actually some sort of physical field in one of the microscopic string-theory dimensions that we can’t access yet, so it’s fully objectively detectable and requires a specific physical structure to generate. The proposed system of philosophy of mind simply ignores these possibilities and asserts that they don’t obtain, without evidence or justification. (Of course I make no claim that this is how things are; I’m just pointing out the potentially valid physics that is being ruled out from an armchair.)



Specifically on the topic of p-zombies:

I can't remember where I read it, but a convincing argument against the possibility of p-zombies runs along the lines of "why would p-zombies talk about consciousness?".

When you or I talk about our internal conscious experience, we're examining our conscious experience and then talking about what we examined. A p-zombie would have to conjure up stories about conscious experience from thin air.

For a p-zombie to behave the same as an ordinary person, the p-zombie's words must be uncoupled from its internal experience (because there is none). But if the words are produced independently of conscious experience, it would be an unlikely coincidence that, in the non-p-zombie population, the words just happen to perfectly reflect the lived internal experience, even though the two have been shown to be uncoupled: the words persist even when the internal experience doesn't.

Since unlikely coincidences are unlikely, the more likely explanation is that p-zombies are impossible, because your outward behaviour is (at least partly) caused by your internal experience, and without that internal experience the outward behaviour could not be the same.

EDIT: I think it's from here: https://www.lesswrong.com/posts/kYAuNJX2ecH2uFqZ9/the-genera...


Right, as a physicalist I'd agree with the line of reasoning that an atom-for-atom identical configuration should have identical experiences, including subjective qualia (or, if we build an atom-for-atom exact copy, it would honestly report no qualia if qualia are produced by some dualist "soul" or other non-physicalist mechanism that is missing in our copy).

I think the p-zombie thought experiment is still useful as an intuition pump in a few ways though; one is to consider what would be the "most similar" thing to us that doesn't have qualia. Sure, it's not an atom-by-atom identical thing. But what if we do a sort of "Chinese room" scenario and train an AI to perfectly replicate a human mind, situated in a human body?

Currently, AIs are trained to predict/complete human utterances, and doing this task well requires a sophisticated theory of mind. Possibly, doing it perfectly requires (a simulation of) full consciousness; we’ll see. But if we train an AI to predict human utterances, it will say things like “I feel pleasure when X” or “Y produces a subjective experience of red”, since those utterances are in the training set. And yet, this AI might not actually have qualia. (Indeed, this is the default explanation for an AI's behaviors/utterances.)
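To make that concrete, here's a minimal sketch: a toy bigram next-word predictor, with an invented four-sentence corpus. Real LLMs are vastly more sophisticated, but the principle is the same: a model trained on text containing first-person experience reports will generate similar reports, with no inner experience anywhere in the system.

    import random
    from collections import defaultdict

    # Tiny invented corpus: first-person experience reports of the kind
    # that appear all over any human-text training set.
    corpus = [
        "I feel pleasure when I eat chocolate .",
        "I feel pleasure when I hear music .",
        "the tomato produces a subjective experience of red .",
        "the sunset produces a subjective experience of red .",
    ]

    # "Train": count which word follows which (a bigram model).
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)

    def generate(start, max_len=12):
        # Repeatedly predict a plausible next word; stop at a dead end
        # (nothing ever follows ".") or at the length cap.
        out = [start]
        while out[-1] in follows and len(out) < max_len:
            out.append(random.choice(follows[out[-1]]))
        return " ".join(out)

    # The model emits claims about inner experience purely because such
    # claims were in its training data -- nothing is being experienced.
    print(generate("I"))    # e.g. "I feel pleasure when I hear music ."
    print(generate("the"))  # e.g. "the sunset produces a subjective experience of red ."

Scale this up by many orders of magnitude and you get fluent first-person reports whose presence is fully explained by the training distribution, with qualia nowhere required.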

We can also consider a sort of Ship of Theseus argument here: if we modify a mind-body atom-by-atom, what's the shortest edit path to a mind that does not have qualia? Or, in the other direction, what's the edit path from the above qualia-less AI to one with qualia?

So I think Eliezer's argument is sound that an atom-for-atom copy ("neurological zombie" is, I believe, the strict term when disambiguating zombie types) would not report qualia. But the concept-space is still useful if we consider the adjacent "dishonest AI zombies", AKA "behavioral zombies" (noting that "neurological zombies" are but one formulation of the p-zombie concept used by philosophers over the years, albeit the best-known).

It seems clear to me that in humans, outward behavior is causally influenced by the experience of qualia. But there are potentially other (dishonest/deceptive) mind-constructions that could produce the same behavior without qualia. Probably not parsimoniously though!



