Even when no direct claims about consciousness are being made, that doesn't stop people from behaving as if these are conscious programs.
I think we're seeing this play out in discussions about generative AI, with people justifying certain behaviors regarding the use of artists' data in training models by comparing the program to a human going through the process of "learning" and "gaining inspiration" from the work of others. Some people seem fully convinced that what these programs are doing is equivalent to human behavior, to a degree that would qualify the program to receive human-like considerations, i.e. treating the software like an entity instead of just a program/tool to be wielded by the end-user.
I think this is one of the most pernicious issues, and people seem to have a hard time recognizing the trap they've fallen into.
I think this is also an outcome of the software being complex enough that it's hard to understand without investing real time and effort.
I think this results in the misattribution of intelligence, when it's really just a very clever piece of software. But the casual commenter can't estimate the gap between "pretty clever" and "so complex that it's actual intelligence".
But I think in many ways you're falling into the same trap...
So, let me set the first trap for you: give me a logically/mathematically rigorous definition of intelligence and one of consciousness. Unfortunately, after decades of back and forth on this, we typically end up with answers that fall out of science and into "I'll know it when I see it" territory.
If you are going to describe machine intelligence/consciousness, your definition must be able to cover the intelligent behaviors we see everywhere from single-cell life up to the massive complexity we see in humans. Attempting to handwave and say "it's not as complex as humans, therefore it's not intelligent/conscious" is a complete and total failure from my point of view.
If I'm falling into a trap, it certainly cannot be the same one.
How is your trap supposed to work? Are you saying that because we cannot precisely define consciousness, we cannot draw any conclusions about it at all? Or, conversely, that we should assume everything is conscious?
The scientific community continues to operate without certainty in many major areas that are deeply consequential, but that does not prevent us from exploring the problem space with the tools we do have.
There is plenty that we do know about the subjective experience of consciousness in biological creatures, human and otherwise. The very idea that it would matter whether a computer is conscious is itself a construction of our subjective reality and of our intuitions about why consciousness is meaningful.
We've studied the relative complexity of thousands of species and understand enough to know that some species are a lot closer to humans than others.
But all of that is a giant digression, and I'd argue it has no bearing on the core point: the entire notion of copyright, and the legal system it is built on, is deeply, intrinsically, inherently human, and originates from the framework of human subjective consciousness, individual and collective. If the fact that the software is AI has any bearing on whether the unlicensed use of artists' content is acceptable, it must imply some elevated status of the software above ordinary software. It is that elevation that must be explained, and the explanations thus far have all been some form of "it's learning like a human".
Even if we were talking about a fully conscious AGI right now, we'd still need to have a conversation about what its consciousness means, and in what ways it is or is not compatible with human consciousness. Before that, we'd need to have a conversation about the ethics of commanding conscious AIs to do our bidding, but I digress.
We know not all consciousness is the same because we know to avoid grizzly bears.
Unless you're making an argument for Panpsychism, in which case this is an entirely different conversation :)
I think you are misreading that argument. If I understand correctly, the point is that you are already allowed to look at and make derivative works of art; the machine version of that is not fundamentally any different, especially since it is not reproducing works in whole but rather reproducing a "style".
That isn't an endorsement of the argument; my point is that you don't have to believe the black box has any independent intelligence to draw an analogy between what it does and what we already allow.
I understand their argument, but I'm arguing that it must fundamentally imply some form of underlying consciousness, or at least some previously nonexistent property that elevates it above an ordinary computer program and makes it compatible with a reading/interpretation of the law that sets aside the material differences between the AI program and a human.
Setting aside the bandwidth and compute issues for a moment: if Stable Diffusion were a tool that, when prompted, downloaded and ingested 2.4 billion images (regardless of license), ran some really complex algorithms on them, and then spit out a derivative result - in other words, take out the AI - I think people would view the tool very differently.
At some point along the way, it seems people jump to a belief that because of <some component / step in the process>, this is no longer just a computer program that scraped the entire Internet without asking.
> Some people seem fully convinced that what these programs are doing is equivalent to human behavior to a degree that would qualify the program to receive human-like considerations, i.e. treating the software like an entity instead of just a program/tool to be wielded by the end-user.
In recent threads about how the legal system will interpret what Stable Diffusion is doing from a copyright perspective, multiple commenters were making the argument that the system is no different than a human learning and drawing inspiration from artwork.
Some went further and claimed that we don't know enough about consciousness to make any judgments about the nature of this software, as if this lack of knowledge somehow implies the system must be conscious by default. It's a weird line of argument, but it points to how strongly people feel pulled to confer consciousness on things they do not understand.
Maybe the panpsychists have it right, but that's an argument to be had at a much broader level, unrelated to AI, and it seems akin to living your life according to Pascal's wager. And in the case of AI, the cost of treating it with that kind of reverence could be great, and the potential for abuse greater.