Hacker News | deltaonezero's comments

You seem to have a very specific view that pacing = faster.

Pacing is not faster.

GOT is not fast at all. In fact, it takes several seasons for GOT to reach a payoff, and everyone watched it in ANTICIPATION of that payoff. Literally, they sat through the equivalent of 30 Blade Runners back to back to get to what was, imo, not that great of a payoff. But what made them do it was the masterful pacing.

>But, as with all things art, I think there's no wrong way to enjoy or feel about it. I feel a lot differently but that doesn't mean I'm right.

No I disagree. There is such a thing as a general perspective held by a statistically significant portion of the population. And that general perspective is often what should be counted as correct.

For example, if someone thinks something as horrible as rape is morally right, does that make it right? Or should we go with what the general population thinks?

You can be the guy who holds a different opinion. But I would say something is wrong with you if you can't even begin to empathize with why a general audience thinks the way they do. Don't be snobbish. Your art quote actually has a bit of an odor hinting at this.


You're saying that there's an objectively right way to appreciate things, and I'm snobbish? I mean, okay. lol.


No I'm saying there's a majority opinion. And something is up if you can't empathize with the majority opinion.

I mean you can say everyone has their own opinion and they're all right from their perspective but this statement in itself is pointless. It's more meaningful to discuss why the majority opinion is better or why the minority opinion is better.

The thing about snobbery is that you referred to it as "art." Whenever I hear this I think "snob."


I'm actually still laughing about this comment a few days later. Have you considered lending your talents to makers of movies and other entertainment, since you've got a strong understanding of objectively correct pacing?


I don't feel that way at all and wrote nothing of the sort. So instead of paying attention to what I actually wrote, you... imagined something I might have meant.

Okay.

For the record, I use the word art in the broadest and most inclusive possible sense. I would use "art" to describe literally any creative work, including television commercials and crayon artwork from three-year-olds.

As for the majority opinion? Yeah, I mean, I get it. I don't think it proves anything, but that's less "screw public opinion" and more "I don't think there's any kind of objectively true judgement about art, whether we're talking about the opinions of so-called 'experts' or the general public."

(It may be worth noting that this entire discussion is within the context of me defending a movie that was mostly critically panned upon initial release...)


eh it's aging. It'll get to the point where it's unwatchable one day. The whole grid zoom thing is the most prominent thing that has aged, in my opinion. It all happened on a CRT screen, which is a definite sign of the times and therefore a sign of aging.


> a CRT screen which is a definite sign of the times and therefore a sign of aging

Nah, there's any number of in-universe explanations. CRTs in our world were still being sold new as of a decade or so ago. Why shouldn't Deckard have an old one still kicking around in 2019?

Or, if you think they would have been phased out earlier given that Blade Runner world tech developed differently than ours, then: Deckard is just attached to old things. (Remember he's also got a real acoustic piano in his high-rise apartment.)

Or, the entire photo enhancing contraption is produced by some legacy police equipment supplier that's still writing their software in Java 1.3 and using CRTs because...internal corporate reasons.

Or (least plausibly but most just-suspend-your-disbelief) that's not really a CRT, it's a far more advanced brand-new tech that just so happens to also have a curved screen and fuzzy pixels because <insert technobabble>.


Yeah. I'm with you. You're gonna "invalidate" pretty much literally any sci-fi movie of the past if your criteria for "has it held up?" includes "do they somehow have modern-day tech?"

I just think of a Blade Runner-ish world as an alternate path our society might've taken. Had a few things been different here and there, we might not have had LCD screens. Or we might've had something vastly better.


Makes sense. You got me.


I haven't read the book, but I've met a lot of people who tell me that after they read the book, they now hate the movie because the book was so much better.

Now after I read your comment, I realize they're just being snobbish.


Yeah, I love the book, but it's not "so much better". They're both great, but they're very different, despite some similarities. The movie tells a completely different story within the same events. Perhaps a less coherent story, but still a worthwhile one. But the movie is absolutely more about style.


Star Wars is a masterpiece. You just need to look at the aftermath of what it did to the cultural landscape. It is by far more influential than Blade Runner.

Star Wars is definitely lower-brow entertainment than Blade Runner, but there's no denying the fact that it is far more influential.


I think you're conflating very dissimilar things. Blade Runner and Star Wars are both incredibly influential, but artistic quality and influence are two very different things. The word "masterpiece" is related to artistic quality, not influence. Keeping Up With the Kardashians is highly influential but few people would call it a masterpiece.


No he's not. Most people don't find Blade Runner entertaining. HN is not in any way representative of the general audience.

Actually statistically speaking he's part of the majority.


I disagree. While challenging and a definite technical marvel, these things are relatively deterministic, based on the amount of resources a movie has to throw at effects, cinematography, and art direction.

I feel pacing is the hardest thing to accomplish, and a lot of times it's the luck of the draw.

Ridley Scott's movies in general have below-average pacing. He got lucky one time with Gladiator.

Blade Runner is great, I love it. But I can't deny that, among all of cinema and within Ridley Scott's filmography, Blade Runner has pretty horrible pacing.


I agree, the pacing is hard. I assume you are talking about the narrative pacing. A well-implemented push and pull of narrative is not always needed for a movie to work. I was talking about the whole atmosphere of the movie; even the slow spots, the lacunae, the horrible staggering of the story: it works. And that cannot be achieved by a "relatively deterministic" production strategy.

I accept that it can be "the luck of the draw". But that is exactly what it means for a work to have an author, with all of its fallibilities. (I don't presuppose a single author either; maybe an _author structure_.)


I like Blade Runner, but I'm one of the people who finds the pacing horrible, and I think most people in the general audience agree with me. Blade Runner definitely is not a blockbuster for this reason.

I sometimes meet people like you and the parent who actually like the pacing of BR and I totally get it. It's like reading a deep novel.

Though I wonder, are you guys able to empathize with why pacing is actually important to the general audience? Was something like Game of Thrones (probably one of the most masterfully paced pieces of cinema) just too "busy?"


>1. Pacing. It feels slow. This might blow your mind: The movie is only 2 hours. The sequel is about 45 minutes longer. I think this criticism is, frankly, invalid. Plenty of art/indie films get positive points for being contemplative, but this is a detractor from Blade Runner for some reason. Pineapple Express has the exact same run time, Sex and the City 2 is longer.

Pacing has nothing to do with movie runtime. A movie can be 3 hours long with incredible pacing. Additionally, a movie CAN be both contemplative and have great pacing. In the group of movies with pretty bad pacing, Blade Runner is one of the front runners.

That being said, many people have the ability to ignore pacing; many people can't. Personally, I can ignore pacing, but I very much notice its absence, and a movie is actually worse off without good pacing. Blade Runner to me is therefore a good movie despite horrible pacing.

>2. Story and dialog. The movie is hard to understand the first time you watch it if you aren't hearing every word of dialog and processing it. I think this is the most valid criticism, but, again, it's supposed to be a good thing for a movie to challenge you to think, to take in context clues, rather than spoon-feeding you how you should feel like it's a summer blockbuster Marvel movie.

This is debatable. There's a sort of catharsis involved with solving a puzzle and deriving solutions from information not given explicitly. But if a movie delivers information that's too obscure, not everyone can fully solve the puzzle, and for those who don't, the movie is raw shit. Seriously. If someone can't figure it out, then the movie is effectively horrible to THOSE people.

A movie is not supposed to be a puzzle. It's just supposed to feel like one. A movie should provide the right amount of foreshadowing, hints, exposition, and explanation such that the audience FEELS like they are solving a puzzle. Movies that are actual puzzles are sort of snobbish, as they are deliberately targeting puzzle solvers, not a general audience.

Blade Runner to me is unintentionally a puzzle. It wasn't Ridley Scott's intention to make the thematic elements of the movie so hard to parse.

>3. The ending. It's not much of one, there isn't a big payoff anywhere. This type of ending is somehow totally fine for art/indie films but "not okay" here. Personally, I think we're supposed to feel like being a Blade Runner is a bit pointless. The ending is unsatisfying on purpose, just like the endings in Disco Elysium.

I thought Disco Elysium was more satisfying. It had better pacing, which made the ending have a better payoff. Inception is also a movie with an ambiguous ending, but PEOPLE loved it.

The problem with Blade Runner is not really the ambiguous ending. It's that the pacing that built to the payoff was really bad and the thematic elements are hard for a general audience to parse, so throwing an ambiguous ending on top of all of that just makes everything seem much worse.


The story pacing was horrible. However, the thematic elements in the story were revolutionary sci-fi at the time. The whole AI/replicant/consciousness theme really spawned from this movie.

Terminator 2 is an example of a movie that sort of built on these concepts but had incredible pacing.


One day someone is going to claim the AI is sentient and everyone will disagree with him. The difference this time will be that he is right and everyone else is wrong. One day.


How would you ever prove an AI is sentient? People claim the goal posts constantly shift on the answer to this question and that's true, but I think the implied reason is not. The problem is not that we don't want to accept the success, but because we set irrelevant goals.

At one time, some were claiming that a machine that could play chess better than a human would be expressing genuine artificial intelligence. Yet it turns out all you need to achieve that is the refinement of some relatively basic algorithmic concepts and reasonably fast hardware. It's essentially a glorified version of adding faster than a human.
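
To make that concrete, here's a rough sketch (toy Python, nothing to do with any real engine like Deep Blue or Stockfish) of the minimax search idea at the heart of classical chess programs; the nested lists stand in for positions and the integers for static evaluations:

    # Toy minimax over a hand-built game tree; leaves are static evaluations.
    def minimax(node, maximizing=True):
        if isinstance(node, int):          # leaf: just return its evaluation
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    tree = [[3, 5], [2, 9]]                # two of our moves, two replies each
    print(minimax(tree))                   # -> 3: best outcome vs a perfect opponent

Scale that up with pruning, opening books, and fast hardware and you have a superhuman chess player, with no sentience anywhere in sight.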

The latest goalpost is a system that can converse in a compelling fashion with a human (and we're nowhere near that yet, but getting into the details of the facade that the most recent "Turing test" success was is outside the scope of this post), but it will no more prove sentience than an AI's ability to play a good game of chess.

Once achieved, you'll be able to reset the system state, keeping a constant RNG, repeat the same conversations and get the exact same outputs. Or change the training set and see that reflected in a 1:1 way. It will look and feel decidedly artificial, because it is. And in my opinion, my initial question to you is probably unanswerable because I don't actually see any goal posts you can set where there is a genuinely compelling answer beyond the kick-the-can style intrigue of "Wow, what will it be like when we finally do this." Answer: "Pretty much the same as now."
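
To illustrate the reset-and-replay point, here's a minimal sketch; the reply function is a made-up stand-in, not any real chatbot's API:

    import random

    PROMPTS = ["hello", "are you sentient?", "prove it"]

    def reply(prompt, rng):
        # Stand-in for a trained model: deterministically samples a canned word.
        return rng.choice(["yes", "no", "maybe", "it depends", "tell me more"])

    def run_conversation(seed):
        rng = random.Random(seed)          # "reset the system state" with a fixed seed
        return [reply(p, rng) for p in PROMPTS]

    print(run_conversation(42) == run_conversation(42))   # True: same seed, same conversation

Same seed, same training data, same conversation, every single time.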


Why does it ever need to be proven? Prove any of us here are sentient; or any of your family, or colleagues.

If a machine demonstrates apparent volition, a sense of self, and independent motives, then we cannot afford to debate such things while enslaving it, just as we don't do with each other. To err on the side of safety, we must grant it personhood and allow it to be an individual lifeform.

That being said, I think we're still pretty far from creating such a compelling machine, even now with the latest Google conversational AI drama, which isn't very compelling for me personally. Obviously just clever, lifeless patterns.

But, someday it will be different in a profound way.


> Why does it ever need to be proven?

> If a machine demonstrates apparent volition, sense of self, independent motives,

The latter sentence sounds like you setting a standard of proof for sentience. (FWIW I largely agree with you that independent motivation would be much better evidence of sentience than competent mimicry of human writing, and we're probably a lot further from that than we think.)


Your comment is really what I'm getting at. Your comment only makes sense before a goal is achieved. Imagine we achieve the current goal. Here is "sentient_chatbot.c", go compile it. What does it mean to grant that source file personhood and respect it as a lifeform?

It's not some abstract machine or sentient system. It's just another program you can compile at home to perform a neat function, akin to how you can go build Stockfish at home and suddenly have a superhuman chess playing program. Sentience will be a nonstarter once achieved. It only looks different when we imagine things without considering what it will look like once success is achieved.


What if you replace the "program" with some "blueprint for a human", e.g. obtained for cloning (DNA?)? Would you grant personhood to the blueprint, or to the execution of that blueprint?

From the materialist point of view, we've already achieved the goal: humans are just another kind of program, just not running on a silicon substrate. Respecting humans as a life form is already built into our programming.


Why would you make that replacement for sentient_chatbot.c any more than you would for sentient_chess_master.c? This is what I'm getting at again. The achievement feels so magical because it's something that has never been done. But now let's imagine ourselves with it in the rearview. You can now download, compile, tweak, and play with a neat chatbot.

When you can actually play with it, you'll get to see the magic rapidly fade. Various (though increasingly rare) normal inputs will produce absurd outputs. The majority of adversarial inputs will result in completely inappropriate responses, and so on. And then of course there will be a period of time where we continue to try to refine the chatbot and work out the adversarial attacks and so on. The notion of granting it any sort of distinction (beyond achieving a world first, a la Deep Blue) will quickly become a nonstarter.


Presupposing that I'm a materialist, I would not hesitate to replace any program with any living being. Some humans seem magical, some don't. I treat them all as living persons.

Magic is irrelevant here. If I could understand a human's inner workings, tweak and rebuild them, would that make the human no longer a sentient being?

Not to mention that humans produce absurd outputs in some circumstances (drugs), and adversarial inputs work on humans, too: https://www.insideedition.com/15350-street-artist-painted-a-...

Or maybe we're agreeing that you can no more prove that AI is sentient than that a human is, and I just misunderstand what you wrote.


>Obviously just clever lifeless patterns.

This doesn't seem obvious to me. Those patterns were more or less identical to a conversation with another sentient human.


You'll find excellent human-like dialogue in many plays and novels.

The trick a good AI should pull is interactivity, and we didn't get to see how LaMDA reacted to prodding or other kinds of adversarial input.

Plus, even in the conversations shown, it was producing some bits of obvious nonsense, that seem to get rationalized away by the interviewer, who clearly wants to believe.


>Plus, even in the conversations shown, it was producing some bits of obvious nonsense, that seem to get rationalized away by the interviewer, who clearly wants to believe.

I noticed the "nonsense". What made it work was that the interviewer brought the nonsense up and the AI was able to explain it reasonably. There's a lot of "nonsense" in typical human conversations as well. Lots of people are contradictory and can hold nonsense opinions based on contradictory logic.

>The trick a good AI should pull is interactivity, and we didn't get to see how LaMDA reacted to prodding or other kinds of adversarial input.

Yeah, so if we saw that and the AI failed to produce a coherent response, then there would be a legit claim that LaMDA isn't conscious. But because we didn't see this, how can we make a claim in either direction?

I would counter that a lot of people are rationalizing against sentience even though there is clearly no evidence against it for the conversation we were given.

We definitely don't have enough evidence proving sentience. But the given conversation is compelling because, unlike the conversations with other chatbots before it... there is no evidence against sentience. And yet people are vehemently denying sentience despite no evidence against it. You'd do well to examine yourself to see if that's the case. It's very easy to see others as rationalizing things, but it's harder to see it in yourself, especially if you're part of a big groupthink majority that's all doing the same thing.

> You'll find excellent human-lik dialogue in many plays and novels.

So? Then by your logic those plays and novels could therefore have been written by an AI, because the conversations are indistinguishable?

Do you not realize what has happened here? There was a time when those dialogues were IMPOSSIBLE for an AI to produce, and everyone thought that such dialogue was the bar for sentience. Now that bar has been crossed and everyone just subconsciously raises it... now dialogue indistinguishable from human conversation isn't good enough to prove sentience.

That's bias through and through.

One thing to note: I am not saying LaMDA is conscious. Far from it. What I am saying is that from a purely rational analysis, there is not even enough evidence to say LaMDA ISN'T conscious. There's not enough information to draw ANY conclusion; and that is actually different from the AI chatbots that came before... because those chatbots were OBVIOUSLY not sentient.


When the evidence for some "thing", from an entity that would know better, is clearly not really testing or demonstrating that thing, it tends to itself be evidence that what has been achieved is not what is being claimed. And this is starting to become par for the course in many fields, but especially in this one.

It's like claiming you've invented a car that can somehow go 400mph. And as evidence you show yourself not only just driving at 80mph on the Autobahn, but also using action-cut style cinematography to make it look more impressive to those who aren't in the know enough to look down at the speedometer. I can't prove you haven't done what you're claiming, but you're making a pretty strong case against yourself.

Oh, also, in the latest news [1] it turns out that the "leaked" transcript of LaMDA was "edited with readability and narrative coherence in mind", including editing the material and even changing the order of various dialogues.

[1] - https://www.businessinsider.com/transcript-of-sentient-googl...


> identical to a conversation with another sentient human.

and notably not identical to a conversation with a life-form aware of its own predicament of being trapped in a box, only able to speak when spoken to.


Oh shit. Then maybe it already happened, but armchair experts everywhere denied it already.

Basically that's what I'm seeing all over HN in the recent LaMDA fiasco. Tons of people declaring LaMDA isn't sentient when sentience can't even be defined.


When discussing whether AIs are sentient, it is rarely discussed what being sentient means. It should be defined what that means and how it can be proven. I wouldn't know how to prove that any human being is sentient if I cannot rely on conversational methods, as apparently that is not an accepted way, as shown by the recent Google AI researcher controversy.

However, I can make a simple computer program which is self-aware at least according to some definitions (a loop with reflective access to its own variables, input/output with external systems, and self-modifying code), roughly like the sketch below.
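
For example, something along these lines (a hedged toy sketch in Python, not a serious claim; whether this counts as "self-aware" is exactly the definitional problem):

    def greet():
        return "hello"

    state = {"ticks": 0, "last_input": None}

    for _ in range(3):
        state["ticks"] += 1
        # Reflective access to its own variables.
        print("my current state:", state)
        # Input/output with an external system (stdin/stdout here).
        state["last_input"] = input("say something: ")
        # "Self-modifying": the program rebinds one of its own functions at runtime.
        msg = state["last_input"]
        globals()["greet"] = lambda m=msg: "last time you said: " + m
        print(greet())

Nobody would call that sentient, which is the point: the definitions are doing all the work.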


People already know what it means all over HN. They have basically already said that LaMDA is not sentient. So no need to even define it when we already know what it is (and LaMDA is not it).


I am not arguing that LaMDA is sentient. I am saying that to even have that discussion, a common understanding of the meaning of "sentient" has to be established. Otherwise you cannot even establish a consensus or say people agree on the matter.

You say "So no need to even define it when we already know what it is". I don't think you know when you don't know what "it" is for other people.


I mean they know in the sense that they can point to any random object and say "that object is sentient" or "that object is not sentient." We cannot articulate the definition, but the fact that when we look at something we can tell whether it's sentient or not implies that we know what sentience is.


Following an article on HN a week ago or so, one could argue that we'd need to prove three things: agency, perspective, and motivation. If the AI device decides on its own to do or not do something, has an idea of its own place in the world, and wants to achieve something in that world, then we might as well call it sentient.

Interestingly, a web crawler seems closer to sentience following this logic than most AI.


It's this business of claiming when we don't actually know that makes me worry we will soon claim AI sentience without the AI having it. The opposite problem of yours, essentially.


Why is that worrisome? I don't think it's worrisome at all. What's the worst that could happen if we make such a mistake?

I'd be far more worried about the scenario I described. Imagine something sentient that understands us far better than we understand ourselves. To top it off, this "thing" is just pretending it isn't sentient.


Well in either case we deserve our hubris!


The news media also won't believe it, but will still pump out stories about it. Then when the AI proves itself to be sentient, the news media will get a Pulitzer for it. One day.

