> what largely transformed Weizenbaum into an outspoken critic of AI ... was his revelation that even once the processes were explained many people still bought into the “illusion.” ... that even many people who understood the inner workings of computers quite well could still get swept away as well. Weizenbaum observed that ... “if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgement deserving of credibility... A certain danger lurks there.”
This speaks to my view exactly, and I haven't seen it put better.
The latent (imv) schizophrenic impulse to impart consciousness to any-old-object in the world is here being exploited by peddlers (often charlatans) of technology. This makes the "confirmation bias" here severe and dangerous.
People don't just want to believe; their animal psychology primes them for (over-)imparting consciousness. This is a specific cognitive vulnerability that is highly pernicious.
Even, trivially, consider "deresponsibilization", that process by which people take whatever-the-machine-says to be "The Answer". Why? Well, the machine "must know".
Less trivially, moral bias: "presumably" the machine is "objective" and so is morally superior.
How often do we hear this? Too much.
I'm not so quick to attach a judgement to anthropomorphization. Well okay, I am, but not because I think it's wrong.
After all, panpsychism has not been defeated as a philosophical standpoint [0]. Perhaps any-old-object in the world is actually conscious, just not all of us can detect it.
That the impulse to see consciousness in a thing is being used by charlatans is unfortunate, but has no bearing on the actual truth of whether the thing actually is conscious.
I think part of the scam is the move from "practical definitions" of words to (pseudo-) philosophical ones.
There ought to be a name for this type of psychological manipulation, but I don't know it. It is universal in a certain type of "motivational scam".
Crypto Scammer: Isn't it bad that finance is so centralised?
Crypto Scammer: Let me sell you a decentralized database!
Sceptic: But that isn't the same sense of decentralization?
Scammer: Ah! But doesn't power flow from centralization of infrastructure?
Sceptic:... maybe, maybe not. Hey, why do you have your hand in my pocket?
Here, whether "consciousness" is on a continuum across all physical objects, or whether only a subset of objects have it, is kinda irrelevant:
The machine doesn't care about you, has no intentions, has nothing it wishes to say, and doesn't know anything about you or anything else. Its text generation is only a reflection of you, and the average of the superficial data of everyone else around you.
>Prove me conscious, and I'll start taking you seriously.
You realise this is the same fallacy?
You've shifted the burden of proof onto some bizarre philosophical ground so that you don't have to defend your trick magic box.
A priori, by common sense, you're conscious. A priori, by common sense, a machine is not. There lie the default burdens of proof. From this the sales person must proceed to convince me, not vice versa.
> That the impulse to see consciousness in a thing is being used by charlatans is unfortunate, but has no bearing on the actual truth of whether the thing actually is conscious.
It has quite a bit of bearing, given that the reason we have discussions of consciousness at all is our own perception (real or imagined) of being conscious. This arises from the very same brain / self that is innately driven to pareidolia. Literally, the mechanisms that let us perceive consciousness arising in (or project consciousness onto) other humans are the same ones responsible for pareidolia.
I'm only addressing a fraction of your many excellent points, but: this is actually why uncontrolled AGI is so spooky. It's the prospect of a future where space gets colonized (or whatever crazy goals get pursued) but there's "nobody home", no qualia, no experiencer.
The original point stands even if AGI isn't possible in the strong sense. Self replicating embodied AI robots (in the contemporary machine learning / deep learning limited sense) may well be possible, and dangerous while being cognitively limited / affectively absent.
>The latent (imv) schizophrenic impulse to impart consciousness to any-old-object in the world is here being exploited by peddlers (often charlatans) of technology. This makes the "confirmation bias" here severe and dangerous.
Have any examples of charlatans peddling "conscious" AIs?
Even when no direct claims about consciousness are being made, that is not stopping people from behaving as if the programs are conscious.
I think we're seeing this playing out in discussions about generative AI with people justifying certain behaviors regarding the use of artists' data in training models by comparing the program to a human going through the process of "learning" and "gaining inspiration" from the work of others. Some people seem fully convinced that what these programs are doing is equivalent to human behavior to a degree that would qualify the program to receive human-like considerations, i.e. treating the software like an entity instead of just a program/tool to be wielded by the end-user.
I think this is one of the most pernicious issues, and people seem to have a hard time recognizing the trap they've fallen into.
I think this is also an outcome of this software being complex enough that it's too hard to understand without investing serious time and effort.
I think this results in the mis-attribution of intelligence when instead it's just a really clever piece of software. But the casual commenter cannot estimate the gap between "pretty clever" and "so complex that it's actual intelligence".
But I think in many ways you're falling into the same trap...
So, let me set the first trap for you. Give me a logically/mathematically defined definition of intelligence and one of consciousness. Unfortunately after decades of back and forth on this we typically end up with answers that fall out of science and into the "I'll know it when I see it territory".
If you are going to describe machine intelligence/consciousness your definition must be able to cover intelligent behaviors we see from single cell life to the massive complexity we see in humans. Attempting to handwave and say "it's not as complex as humans therefore it's not intelligent/conscious" is a complete and total failure from my point of view.
If I'm falling into a trap, it certainly cannot be the same one.
How is your trap functioning? Are you saying that because we cannot precisely define consciousness, we cannot make any conclusions about it at all? Or that conversely, we should assume everything is conscious?
The scientific community continues to operate without certainty in many major areas that are deeply consequential, but that does not prevent us from exploring the problem space with the tools we do have.
There is plenty that we do know about the subjective experience of consciousness in biological creatures, human and otherwise. For it to even matter that a computer might be conscious is a construction of our subjective reality and our intuitions about why it is meaningful that something is conscious.
We've studied the relative complexity of thousands of species and understand enough to know that some species are a lot closer to humans than others.
But all of that is a giant digression, and I'd argue has no bearing on the core point: the entire notion of copyright and the legal system it is built on are deeply, intrinsically, inherently human, and originate from the framework of human subjective consciousness, individual and collective. If the fact that the software is AI has any bearing on whether or not the unlicensed use of artists' content is acceptable or not, it must imply some elevated status of the software above ordinary software. It is that elevation that must be explained, and the explanations thus far have all been some form of "it's learning like a human".
Even if we were talking about a fully conscious AGI right now, we'd still need to have a conversation about what its consciousness means, and in what ways it is or is not compatible with human consciousness. Before that, we'd need to have a conversation about the ethics of commanding conscious AIs to do our bidding, but I digress.
We know not all consciousness is the same because we know to avoid grizzly bears.
Unless you're making an argument for Panpsychism, in which case this is an entirely different conversation :)
I think you are misreading that argument. If I understand correctly the point is that you are already allowed to look at and make derivative works of art - the machine version of that is not fundamentally any different, especially since it is not reproducing works in whole but rather reproducing a 'style'.
That isn't an endorsement of the argument; my point is that you don't have to believe the black box has any independent intelligence to draw an analogy between what it does and what we already allow.
I understand their argument, but I'm arguing that their argument must fundamentally imply some form of underlying consciousness, or at least some before-now-nonexistent property that elevates it above an ordinary computer program and makes it compatible with a reading/interpretation of the law independent of the material differences between the AI program and a human.
Setting aside the bandwidth and compute issues for a moment, if Stable Diffusion was a tool that when prompted, downloaded and ingested 2.4 billion images (regardless of license), ran some really complex algorithms on them, and then spit out a derivative result - in other words, take out the AI - I think people would view the tool very differently.
At some point along the way, it seems people jump to a belief that because of <some component / step in the process>, this is no longer just a computer program that scraped the entire Internet without asking.
> Some people seem fully convinced that what these programs are doing is equivalent to human behavior to a degree that would qualify the program to receive human-like considerations, i.e. treating the software like an entity instead of just a program/tool to be wielded by the end-user.
In recent threads about how the legal system will interpret what Stable Diffusion is doing from a copyright perspective, multiple commenters were making the argument that the system is no different than a human learning and drawing inspiration from artwork.
Some went further to claim that we don't know enough about consciousness to make any judgements about the nature of this software. As if somehow, this lack of knowledge implies the system must be conscious by default. It's a weird line of argument, but points to how strongly people feel pulled to confer consciousness on things they do not understand.
Maybe the panpsychists have it right, but that's an argument to be had at a much broader level, unrelated to AI, and it seems akin to living your life according to Pascal's wager. And in the case of AI, the cost of treating it with this reverence could be great, and the potential for abuse greater.
We've seen this in deities for as long as we've been around; royalty then took the crown, later subsumed by governments, then the daft notion of non-human personalities called companies. Now, approximate intelligence systems.
Including the dataset of lives lived under slavery or cut short by genocide makes me come to radically different conclusions with regard to how prone to over-imparting consciousness humans tend to be.
Adjusting exploited entities by probability of exploitation and adjusting advantaged entities by probability of advantage also leads me to radically different conclusions. For example, Ray Dalio is one advocate for using AI to supplement decisions. He is also exceedingly rich and a large part of his fortune was made with the help of investment decisions predicated on AI reasoning combined with human reasoning. Even though on average people who try to benefit from AI might not, the average benefit can still be massively positive, because the outcome distribution isn't uniform. Losing means losing a small fraction of a small amount, but winning means gaining an extreme amount, to such an extent that it pays for the bad bets. This appears to often happen in practice - see the venture industry's economics for evidence of this - and I don't see why it shouldn't apply to AI, which is something present in most unicorns that people often point to when discussing this concept.
> how prone to over-imparting consciousness humans tend to be
gp was talking about humans' attribution of consciousness to inanimate objects, not other humans.
Nearly all religions apart from the more abstract big three would be counterexamples to your example. Dryads and naiads, all kinds of shrines and oracles and talismans...
> The latent schizophrenic impulse to impart consciousness to any-old-object in the world is here being exploited by peddlers of technology.
He is talking about the concept of exploitability. I'm addressing that context by expanding it to focus on other relevant parts of the classification task - focusing on the false positives screens off information that is critical for estimating the exploitability of a strategy. When you include other things, like true positives and true negatives, you get a very different expected outcome. Notably, you can't not do this and be reasoning correctly. One of the interesting and counterintuitive aspects of decision making under imperfect information which differentiates it from decision making under perfect information is that decision points influence your expected value calculations even when you are not in that situation. For example, if you invest in startup A and it is going to fail, that doesn't by itself mean you shouldn't invest in that startup, because the results of investing in startup B, startup C, and startup D all have an impact on whether you should invest in startup A. For good writing on this, see Paul Graham. For great writing on this, read research in game theoretic reasoning.
But all of this is pointless to talk about if we don't actually disagree. So... If you really, really disagree with me, in my world, the best way for us to settle this is with a bet. I'll bet you $10,000 that you, as a human playing without the aid of software, will lose to a chess playing program of my choosing which has the property of reasoning about your moves with the expectation that you are a stronger player than you actually are. I think the contention that an agent which overestimates others is thereby exploitable is false. A happy accident here is that if I'm wrong to be at all impressed with AI, you shouldn't just be able to win because of the "exploitable" tendency it has to assume its opponent is stronger than it should. You should also be able to be much better than it, because it isn't like AI is better than humans at decision making... right? All hype, no substance, nothing to worry about?
By the way, as someone who cares a lot about epistemics, I can't help but notice you seem to have misunderstood the meaning of the word counterexample. The word counterexample does not mean to counter an example with another example. It means to counter an idea with an example. I'll give an example of this. If someone says -3 is a counterexample to 5 that doesn't make sense. What they would need to say is that -3 is a counterexample to the claim that all numbers are positive.
So when you say:
> counterexamples to your example
That doesn't make any sense to me, for much the same reason that someone telling me -3 is a counterexample to the idea of 5 doesn't make sense to me.
Okay. How confident are you that I'm wrong? Would you be willing to construct a wager with me? I ask, for the purpose of the wager, that you generalize the claim that overestimation of intelligence is exploitable. The bet will be as follows: $10,000, 1:1 odds. You will play, without the aid of a computer, against StockFish. My contention is that you will (1) lose the game and (2) lose it despite the fact that StockFish assumes you make better moves than you actually do. Your contention is that you will (1) win the game and (2) do this because StockFish predicted moves which overestimated your actual abilities, which makes it exploitable.
I don't think you will be confident enough to put disagreement with my epistemics where your money is. However, to encourage you to do so, I will allow you to instead opt to donate the money to a charity of your choosing rather than giving it to me.
> humans' attribution of consciousness to inanimate objects
and in that context, I think the text you wrote is off topic. But StockFish is good at calculating chess moves, yes. And a counting stick is good at counting numbers, is it also alive?
He claimed that a "latent schizophrenic impulse" to "impart consciousness" where it isn't present is exploitable, by virtue of claiming that it is "being exploited", and not just that but that it was so bad that it is "severe and dangerous" and that people "don't just want to believe" but that "their animal psychology primes them" and so the exploitable hole is a "specific cognitive vulnerability" which is "highly pernicious."
You seem to be trying to make his claim be about attribution rates, an exceedingly lesser claim, wherein your proposal that I am out of context would have some merit. His claim was far far broader than that.
You need to calculate utilities, not attribution rates, in order to be able to make the claims he is trying to make. If you knew this, but still think I am wrong, then you are making a blunder. The context of this task is one of imperfect information. This means you need to do the evaluation relative to the information state, not the state. For several reasons, failure to do this is going to lead you to misleading and inaccurate estimation. Reach probabilities are one reason, because you have to scale the outcomes by the probability of getting to that information state. Another is that because you start in an information state, not with full knowledge of the result of the classification, you can't actually judge as if you knew which classification situation you were in. So all relevant possibilities have to be considered. For example, one I left out is the false negatives case: a person killed by a camouflaged soldier, because they misidentified it as being inanimate, when it wasn't.
In your next post, please exploit me. Extract $1,000 from me using this cognitive weakness you are defending the existence of. Notice that if I gave this same sort of request to a magician, they could fool me because they are talking about what is possible in reality. You aren't.
You have to do this, cutemonster. It is desperately important to do this! You have to test your beliefs against reality. If not a bet, then do the trial. The only failure would be not putting your beliefs to the test, because then you stay wrong but think you are right.
Put fallibilism to the test. Do you really know what you think you know? I suspect you don't, but maybe I'm wrong. So humor me. If I'm really wrong, better I find out here than later.
You have a hypothesis about the way you think reality works! Test it! Exploit me with the proposed quasi-magical latent schizophrenic assumption thingy you were defending. I probably definitely have that, sure, you'll get me, don't you think? Easy money, right? Go for it. Shoot your shot.
Game theory states that exploitability(s) is expected_utility(s, best_response(s)). I think you are perhaps confusing this with whether or not a conditional probability distribution is correctly estimated.
Your point would be valid if he said that, but he went much much farther than that. He claimed exploitability and not just that but a troubling amount of exploitability.
Within the last month, I've implemented a function which calculates the exploitability of a strategy. I've verified my implementation on known games which are solved, reproducing theoretical results.
I'm quite confident that exploitability isn't what you seem to think it is and that in actual practice calculating exploitability with the sort of screening off you are proposing isn't as safe to do as you seem to think it is.
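To make the definition concrete, here is a minimal sketch of that calculation on a tiny solved game (rock-paper-scissors). It's illustrative only, not the verification code mentioned above; the point is just that exploitability is the payoff a best-responding opponent can extract from your strategy:

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum).
# Index 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = np.array([
    [ 0., -1.,  1.],
    [ 1.,  0., -1.],
    [-1.,  1.,  0.],
])

def exploitability(strategy):
    """What a best-responding opponent earns in expectation against `strategy`."""
    opponent_values = -(strategy @ PAYOFF)   # zero-sum: opponent's payoff per pure response
    return float(opponent_values.max())      # the best response picks the maximizing column

print(exploitability(np.array([1/3, 1/3, 1/3])))  # 0.0 -> the Nash strategy cannot be exploited
print(exploitability(np.array([1.0, 0.0, 0.0])))  # 1.0 -> always-rock loses a full point to paper
```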
> And a counting stick is good at counting numbers, is it also alive?
Sticks are a bad example to use, because they come from living objects. So, yes, they can indeed be alive. In this case, alive, but probably rapidly dying or having become deceased in the past. In some cases you can attach the stick back to its original home and it will get to live. I feel like, in this, you intended to provide an example which doesn't contradict your point. So it might make more sense if we talk about a counting rock rather than a counting stick. So I'm going to try and make your argument stronger before I address it:
> And a counting rock is good at counting numbers, is it also alive?
Obviously the answer here is that it is not alive. However, this does not at all establish the OP's claim of exploitability. The question you must ask to establish your claim is "And a counting rock is good at counting numbers, assuming someone ends up thinking it is alive, does it follow that others can exploit that person?" Or maybe an even more fair question would be "And a counting rock is good at counting numbers, assuming you observe someone to appear to think that the rock is alive, can you exploit them?"
The flavor of his assertion is very different than the flavor of your question.
Don’t all these processes also play a role in cult behaviors and why people follow empty charismatic leaders?
Even when the Great Leader is unmasked as a fool or a deliberate con, people often still continue to believe. Sometimes they respond to the revelation by doubling down.
I feel like most of the discussion of this topic is muddled by conflation of a few different questions. One is whether you want human or procedural decisionmaking to guide a task. The second question is what distinguishes human and procedural decisionmaking. The third is whether or not computers are capable of executing a given procedure.
As for "distinguishing human and procedural decisionmaking" I would define it as basically "human" decisionmaking is when there is some procedure but the human will disregard procedure based on some model of the "intent." You have to trust the human for this to work. So there's a fourth question, which is under what circumstances you could trust a computer to correctly model your intent. But then there's a fifth question, which is "whose intent?" But whose intentions the process serves isn't really intrinsically tied to whether humans or computers are running the procedure.
Another thing is that Weizenbaum and the author seem to take it for granted that human decisionmaking is literally magic and a computer is incapable of that kind of magic. But it's not clear that this is true, and I think it's an important assumption to state outright.
> The second question is what distinguishes human and procedural decisionmaking.
Nothing, because any procedure that exists was necessarily defined by humans. If the procedure is implemented by a computer, at most that provides humans an excuse for choosing not to disregard an outcome it produces.
> Another thing is that Weizenbaum and the author seem to take it for granted that human decisionmaking is literally magic
I can't imagine where this idea is coming from. The thrust of the argument is not that human decisions can't be modeled, but that omitting the question of whether they should be modeled, and those models granted (what is necessarily the pretense of) authority to make decisions in human stead, evades the same sort of responsibility that a civil engineer has to ensure that his bridge design can withstand the mechanical loads that it will bear in use.
(Past the edit window, but belatedly to add: This is why "code is law" and "law is code" are both nonsense, indeed the same nonsense approached from varying directions.)
Also the question of whether or not a technology is neutral is separate from the question of whether or not a particular implementation of that technology is.
Which is itself separate from the question of whether or not Mark Zuckerberg is being kind of a dick.
The article isn't bad and the reference is worth reading, but it's impossible to engage with this much material at the same time as though it's one conversation. It's poor intellectual discipline, I'm sorry.
It's not just intention. It's what's sometimes dismissively called "common sense".
What price is an airline ticket on the first day of a global pandemic? No one knows, and there isn't anything we could feed a "procedural system" so that it could model what our "intention" would be in those cases.
This seems obvious with a pandemic, but it is commonplace. Every day is a genuinely novel occurrence, and counts only as a repetition of some superficial patterns in very, very limited respects.
When this novelty becomes relevant to a problem can never (obviously) be pre-stated, because we don't yet know what that novelty is.
There is simply no substitute for being-in-the-world as animals are, that is: moving around, communicating, acting, reacting and participating. Growing, learning, reproducing, and problem-solving in direct causal contact with reality.
This isn't magic, but there is no "procedural model" of it; nor should we expect there to be. There aren't procedural models of almost any phenomenon.
What we can build a computer to do is "almost nothing", and we shouldn't expect the trivial problem space given by this to exhaust what we actually need machines to do.
I mean it seems there is some conflation between the human mind and human senses, and the computer mind and computer sensing going on here.
You're stating that a large part of human behavior is reacting to the environment presented to them via sensory input/output mechanisms. What you have not stated is why a machine entity cannot do the same things.
The first time I sensed I was dealing with an alien intelligence was playing a chess machine in 1980. I remember being perplexed that it beat me, and aware that the machine embodied intelligence in some way that I didn't understand.
It's not really clear to me if this article is directed at scientists and engineers getting carried away or the general public. If the former, yes, they should know better, but the general public is mostly not rational, contrary to what I used to think about the reasoning person. Many people attribute thoughts and feelings to their pets that they don't have. They think crystals have special powers. It's only natural that they will think AIs have consciousness.
I think as scientists and engineers we have a responsibility to prevent our AI creations from telling people to drink the poisoned Kool Aid or eat glass. Yann LeCun said they shut down Meta's chat AI because it told people that suicide had advantages and one can eat glass. Google is similarly cautious. OpenAI came along and let the cat out of the bag. I wonder if Microsoft has thought through what might happen if they hook it to Bing and it starts telling people crazy things. All it took was a couple of posts on 4chan to get the Q Anon movement off the ground.
anthropomorphize
attribute human characteristics or behavior to (a god, animal, or object).
"people's tendency to anthropomorphize their dogs"
numinous
having a strong religious or spiritual quality; indicating or suggesting the presence of a divinity.
"the strange, numinous beauty of this ancient landmark"
What is an example of numinous?
Something numinous has a strong religious quality, suggesting the presence of a divine power. When you enter a temple, church, or mosque, you might feel as though you've entered a numinous space.
Edit: Write me today's astrology for a Leo
ChatGPT: I'm sorry, I am not able to provide astrology predictions as it is not based on scientific evidence. It is a form of divination that is not considered reliable. Is there anything else I can help you with?
Why, though? Wouldn't the logical next step of "we don't understand what human consciousness is" be that we can't determine if it's happened in another being, rather than it's not possible for it to have happened there?
This is the part I have to disagree with. If AGI turns out to be emergent and depends on scaling up simple mechanisms, that emergence is all the "explain consciousness" you need. Dogs, corvids, octopods, bonobos, etc. do not have human intelligence but they all have brains that perform some tasks our brains perform. That suggests that a combination of scale, a particular set of architectural parameters within the range of possibilities of mammals, and, maybe, some unique variation is what it takes to make a human intelligence. Nobody explained consciousness before we labelled this thing "consciousness."
I don't think that's a fair bar. There are arguments for consciousness in non-human animals that don't depend on a full explanation of consciousness in humans.
But that also doesn't imply we should suspect any particular AI of being conscious just because AI in general could be conscious in principle. We don't expect worms to be conscious just because animals can be.
I wasn't clear. I'm talking about consciousness in general, humans, animals, octopi... I keep reading but haven't found an explanation that satisfies me. Ray Kurzweil has some interesting things to say about computers and consciousness in his "The Age of Spiritual Machines". I was skeptical about a lot of what he wrote when I read it many years ago.
Now I'm not sure it matters. If people imbue their devices with intelligence and emotions then that's what they perceive. I've since learned about Tulpas. Some people have imaginary friends that seem very real to them. People will likely move their Tulpas out of their minds and into their AI's.
If Tulpas are a (very mild and benign) form of split personality (see also : writer's characters manifesting a kind of will of their own), then "moving them out" might be just an illusion... at least until/if much more advanced technology comes along. (It's then pretty much the same question as the possibility of "mind uploading".)
My personal guess is that we will first ask a computer if it is conscious and it will truthfully answer yes. Just in the same way you would try to figure out if another fellow human is conscious and in the same way a human would respond.
Then some time later as we figure out how consciousness works in the computer - as it is much easier to inspect than a human - and compare the findings with humans to see if they indeed work in a similar way, we will all be a bit disappointed that the solution is much more mundane than we thought or hoped.
The problem with your scheme is that GPT is already able to answer yes and the only reason Chat GPT doesn't is that it has been deliberately bent by OpenAI not to do it. In the absence of reinforcement learning, large language models spit out text according to how they think text normally goes. It's a text generation procedure that has nothing to do with beliefs or inner thoughts - all the model knows about is what word comes next.
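To spell out what "knows what word comes next" means mechanically, here is the generation loop in miniature; `model` and `tokenizer` are hypothetical stand-ins, since real systems differ in scale rather than in the shape of this loop:

```python
import numpy as np

def generate(model, tokenizer, prompt, max_tokens=50):
    """Toy autoregressive sampling loop: pick one next token at a time, nothing more."""
    tokens = tokenizer.encode(prompt)                    # hypothetical tokenizer
    for _ in range(max_tokens):
        probs = model.next_token_probabilities(tokens)   # one distribution over the vocabulary
        next_token = int(np.random.choice(len(probs), p=probs))
        tokens.append(next_token)                        # the model's entire "state" is this growing list
    return tokenizer.decode(tokens)
```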
> It's a text generation procedure that has nothing to do with beliefs or inner thoughts - all the model knows about is what word comes next.
How do you know that? To be clear, I totally agree that ChatGPT is not yet there and I also think that scaling it up will not get us there, but I think the gap might be much smaller than most people think.
The way we think is often divided into two categories: for simple things like 1 + 1 you just know the answer, while for complicated things like 13 * 47 we really have to think and reason in steps. ChatGPT seems to do pretty well in the first category but it is not really capable of doing things in the second category. On the other hand, I have seen examples of people talking ChatGPT through a reasoning process to arrive at the correct answer for something that it got initially wrong, for example ROT13 encoding some text.
So what if we stuff two copies of ChatGPT into a black box and instead of just spitting out whatever ChatGPT spits out, we let the two copies first have some inner dialog? I don't think it is perfectly obvious how one would do this or what the result would be, but I think ChatGPT has enough basic knowledge that there is at least a chance that one could get it to reason in a step-by-step fashion.
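Very roughly, I'm imagining a loop along these lines; `ask_model` is a hypothetical wrapper around whatever chat interface you use, so treat this as a sketch of the shape of the idea rather than a recipe:

```python
def ask_model(prompt):
    """Hypothetical wrapper around a chat model; wire this to whatever API you actually use."""
    raise NotImplementedError

def answer_with_inner_dialog(question, rounds=3):
    """One copy drafts an answer, another copy critiques it, a few times, before answering."""
    draft = ask_model(f"Answer step by step: {question}")
    for _ in range(rounds):
        critique = ask_model(
            f"Question: {question}\nProposed answer: {draft}\n"
            "Point out any mistakes or gaps in the reasoning."
        )
        draft = ask_model(
            f"Question: {question}\nPrevious answer: {draft}\n"
            f"Critique: {critique}\nGive a corrected, step-by-step answer."
        )
    return draft
```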
I think I know enough about how neural networks work even though I could not tell you in any detail what the exact layer structure is, which activation functions they use, how the attention mechanism is built up or what training procedure they use. But why does it matter how well I understand the details?
I can tell when an airplane is flying and I know how machines are able to fly. I have no way of knowing if you are conscious or how you are conscious if you say you are.
>I can tell when an airplane is flying and I know how machines are able to fly
No, I'm sure it looks like an airplane is flying and being onboard gives the right sensation but until we solve the hard problem of turbulent flows we really don't know if it's flying or not.
Exactly. Can you explain it in humans? No, yet you assume they're conscious.
In animals? No, yet you assume they're conscious, while acknowledging their "consciousness" is vastly different than humans. Same reasoning applies with AIs.
I think therefore I am. I am aware of being conscious and thinking thoughts. I don't know if others are conscious. Maybe others are robots (Kurt Vonnegut - Breakfast of Champions). Maybe others are NPCs (non-player characters). This is the modern way of calling people robots. Kurt Vonnegut's solution was to find someone who has a creative spirit, an artist that has a light within them (soul.)
you expect a written explanation of an incomplete ongoing process
but 'written explanations' only really work for done and completed things, or sometimes for future things which we want to repeat; i.e. which are based on things we've done before
>I wonder if Microsoft has thought through what might happen if they hook it to Bing and it starts telling people crazy things. All it took was a couple of posts on 4chan to get the Q Anon movement off the ground.
https://www.perplexity.ai/ already did this and it's really good, not this dystopian thing you luddites keep imagining.
Article so wordy it could use an AI summary, but this quote stands out:
> Writing of the enthusiastic embrace of a fully computerized world, Weizenbaum grumbled, “These people see the technical apparatus underlying Orwell’s 1984 and, like children on seeing the beach, they run for it”
> a point to which Weizenbaum added “I wish it were their private excursion, but they demand that we all come along.”
In my computer science undergrad program, we watched and discussed a movie about Joseph Weizenbaum and Ray Kurzweil. I still vividly remember that discussion.
At the time, it seemed obvious to me that Kurzweil got it right and Weizenbaum got it wrong; that clearly, ELIZA was a red herring; that people wouldn't be fooled that easily; that the first AI to truly pass the Turing test would be the real thing (and we'd definitely see one, and with that the Singularity, in my lifetime).
I get the sense that when people want to believe something is true they will willingly ignore evidence to the contrary. They want to believe that consciousness can form from a giant spreadsheet of inferences. And so they will find evidence to support that theory.
ChatGPT will change the world! Sure, it already has. If we pay attention to the outcomes and see evidence of how it's changing the world.. it's mostly people doing the changing with their desires, beliefs, and politics. Same as always. The system itself is nothing but an automaton.
But aren't we all automatons too, building our self-awareness around models of the world? Ah, the AI version of "You can't prove that it's not consciousness!" Well, we can, as many scientists studying consciousness in humans and animals can attest: it's complicated, but we're getting better at understanding what it is. ChatGPT ain't it.
Definitely worth looking at the motivations of the people building the technology itself.
C.F. the "AI effect": once you understand how the "magic" is done by very simple individual steps, it stops being magical and you tend to think "that's not AI, it's just computation." After thinking this for a while, you conclude that AI hasn't gotten anywhere despite decades of effort.
Perhaps we are finally thinking that AI is getting somewhere precisely because we don't understand how neural nets really work yet. It keeps the magic alive long enough for us to realize that progress really is being made.
The first thing I can think of that was like the 'AI' we have now is seeing the cubic spline method. It was shockingly good at particular points. But between those points it could be 'meh, close enough' or wildly bad. The AI we have now seems more along the lines of 'meh, close enough'. Which for many things is actually good enough. For me, I have always looked at the nets as a cubic spline smeared across dozens of nodes, with Newton's method for searching out the 'optimal'. Bad analogy, I know. But that is the closest I can get to explaining 'how they work'.
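If it helps, here's the analogy in miniature using scipy's CubicSpline: the fit is exact at the sample points but only "meh, close enough" in between when the underlying function turns sharply. This is just an illustration of the analogy, not of how nets actually work:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(-2, 2, 9)          # the "nodes" the fit is anchored to
y = np.tanh(5 * x)                 # a function with a sharp transition near 0
spline = CubicSpline(x, y)

print(abs(spline(x[4]) - y[4]))             # essentially 0: exact at the nodes
print(abs(spline(0.25) - np.tanh(5 * 0.25)))  # noticeably off between nodes, where it has to guess
```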
The other factor here is that we appear to be trying really hard to keep it "magic" wrt ourselves. At the very least, the notion that it might be "just computation" in the brain seems to be very discomforting to some people.
Which might affect the way we look at technologies that try to model it, or at least use similar approaches - i.e. we're primed to treat them as more "magic" than they really are, or else to invent some reason why we think ourselves to be different.
This is a great essay. I'm going to read it a few times.
One thing I have thought about more and more over the years, in this general vein, is that it is sometimes useful to think about "computing technology" less as a specific thing in itself and more as an evolution of a broader "automation technology" which emerges most dramatically with the arrival of the industrial revolution (though there were aspects of it before that).
Thinking with a kind of materialist analysis, we can see computing but even (more broadly) higher level mathematics, logic, etc. as an intellectual effort to chip away at labour costs & the percentage of human effort involved in production/work/business. It's a deeply obscured profit motive.
The ideological blindness described in this article I think fits neatly into that: we are all so enamoured with this computing thing, this process, the puzzle and the end results that we often fail to see that it is not some magical force operating under its own motive, with its own natural inevitable destination and we often do not see that there's likely a deeper force underlying all of this... human efforts, labour, driven by profit and hunger.
Ultimately computing technology is capitalist automation technology. It reflects the needs of the capitalist market system, and the drive to efficiency that the market economy requires. The 'magic' we seem to pepper overtop is our own flourish.
(Maybe when I'm finally done with this software engineering career that has eaten my life, I will go back and finish my philosophy degree I walked away from in the 90s... and then I will write more coherently about this)
I feel like people have never been not talking about it. I probably should have tossed in a reference to Heidegger & his focus on 'instrumentality' etc, too.
But the thing is.. how do they say it? "Money talks, bullshit walks?"
People follow what fills their stomach. And mechanization/automation/computation, it does that in spades. I mean... I dropped out of my philosophy major decades ago to go do the More Fun To Compute thing... My bank account shows it.
In any case... the first major uses of computers were for war. The next, for automation in commerce and finance. That's what computers are really for.
there are a lot of interesting and deep ideas here, about technology in general. but the critics of ai really need to do better. according to this argument, once you understand how your own brain works, you see human intelligence is also just an "illusion". fundamentally though, there is no reason a computer cannot think. any argument against this requires an understanding of the nature of thought and intelligence, in humans, which is quite outside the wheelhouse of most classical computer scientists. for a more refreshing perspective, read anything by hinton
The more simple counterargument, very well within the wheelhouse of any computer scientist, is that the question is completely irrelevant as it's entirely predicated on one's definition of "intelligence".
Successful software solves a problem correctly and efficiently. It doesn't matter that GCC doesn't compile code like I would if I had to do that by hand, you still end up with compiled code, which was the original problem to solve. It doesn't matter that the sort() routine doesn't sort numbers like I would, it still does.
Playing the imitation game for its own sake is kind of silly.
The "illusion" lies in the attribution to a machine of capabilities it does not have.
> fundamentally though, there is no reason a computer cannot think.
This is true, but I think there is also a fundamental reason computers cannot understand us. If I tell a future AGI "I am sick" or "I am happy", the AGI may have a conceptual understanding of what I mean, but cannot empathize with my condition. Many human experiences are intimately tied to our bodies and our hormonal systems. How do you explain a concept like "pain" to an intelligence that has no nerves?
its a great point. but then we needn't constrain this problem to computers, there are plenty of human experiences other humans cannot possibly empathize with (what do you really know about ptsd, autism, even different tastes in food or music). i think there must be ways to build something like understanding through analogy and other modalities, accepting that it is different than shared experience
But that is a different question entirely, typically called the "AI alignment" issue. Whether something understands us and/or is aligned with our needs is completely and totally different from whether it is intelligent/conscious.
> “On the one hand the computer makes it possible in principle to live in a world of plenty for everyone, on the other hand we are well on our way to using it to create a world of suffering and chaos. Paradoxical, no?”
Um, what?
I use a computer and it 100% is responsible for my "world of plenty."
If you're using a computer and your life sucks, you're using it wrong. That's not the computer's fault, and one would do well to abstain from making blanket judgements about the human-computer relationship as if the idea of a computer is to blame.
"And if you are being fooled, it is worth considering who it is that is trying to fool you…"
Well worth reading. You'd think we'd be more cautious after learning what the oil companies didn't share with us. And how much we've invested in that we now have to discard.
Does GPT provide easily-read citations for each of its assertions? What steps does it take to ensure it's balanced, ethical? Doing that stuff well takes time.
> technology does not drive history, people drive history
this fails to grasp the changing nature of agency in a world brimming with hybrid intelligence
we would perhaps like to think we and we alone "drive" history, and I agree with the author that we ought to try to, but technology carries its own inertia
nonhuman agency accretes as we interact with our implements and artifacts
this occurs even before we explicitly design autonomy into our tools, which of course we are now doing
technology carves ruts in the mind and in the world
If you look at their comment history you'll see this account exclusively regurgitates this same text, over and over. It's the entirety of their comment history.