Hacker News — bee_rider's comments

I have basically the same experience of trying DuckDuckGo, getting useless results if the search is outside the domain of the handful of sites I already know, and then trying Google. But I find Google usually also returns useless sites.

ChatGPT is the only general-purpose search engine that seems to have any chance of producing a link that is both new to me and useful. Of course, I try not to use it too much, people say it’s bad for the planet or whatever.


Processing power is my second favorite explanation.

My first favorite would have been: they don’t use the humans for anything, the pods are just the most efficient way to store humans. The machines think they are being benevolent, just want peace and quiet and for humans to stop doing dramatic things like scorching the sky. But I don’t know where the plot would go from there.


There is backstory that the films could have gone into, though I don't know if it was written before or after the first film. The humans in the matrix were allied with the machines and they put them in the matrix to protect them from the war. They were being benevolent.

They benevolently feed the dead to the living

I think the urban legend really sticks around because the compute explanation just makes much more sense, and we all want this beloved movie not to have a silly (albeit inconsequential) plot hole.

Oh, totally, it’s my head canon as well.

Mine is either that, or, the idea I mentioned in this post:

https://news.ycombinator.com/item?id=47185076

Machines trying to be benevolent, but overly controlling.


That's very good.

Probably the idea is broad enough to get away with borrowing it or putting their own spin on the general idea (I mean, it is expected that stories will influence each other and ideas will spread). I'd rather guess that a studio executive thought the battery idea would be more understandable to people (if that is the case, though, I think they were dramatically wrong; the computing idea makes much more sense and I think all of us in the audience would have been fine with it).

Remember that all critiques of Hollywood require you to think like you’ve just consumed a massive line of cocaine. Because that is how they think and live. So, empathy reduced to zero, all your ideas are great, everything else is dumb, etc. Making decisions under the influence of strong narcotics is a recipe for idiocy.

Source: me, I had a huge cocaine problem and worked many years in the tech side of music and movies


Maybe it could just measure the number of tokens for the examples (and then summarize what the examples show, under the assumption that that’s the actual functionality of the project). I’m 90% joking… but that last 10% makes me wonder…

Other comments indicate that it’s just a free trial that converts to paid at the end. So, don’t worry, you are just excluded from an ad basically.

I wish more ads offered me $1200 of usage followed by the option to either pay to keep using the product or just stop at no cost.

It converts back to paid automatically if you had an existing paid subscription before. No other cases. In any case, this is still a valuable service they are providing for 6mo for free, which many will appreciate even if the goal is to recruit more users.

1060 6GB here. Figured the headroom would get me a couple extra years out of it. At this rate I’m wondering if the card is going to outlast the concept of owning graphics cards. Partly because, as you mention, maybe NVIDIA will stop selling them. Partly because, maybe APUs will get good enough…

About 3 years ago I got an RX 6750 XT with 12 GB of VRAM for $330 and I expect to be using that until either it dies or my computer's RAM dies and I don't have $10,000 to replace it. If only I'd maxed out all my DDR4 slots when DDR5 was the hot new thing and you could get it for cheap.

Strix Halo is already good enough. It's a premium product, though.

Strix Halo looks quite good. Hoping the stars will align, and my GPU will hold on long enough for the RAM famine to end and some Strix Halo successor to come out.

The AI bot wouldn’t be representing you any more than your text editor would be. You would be using an AI bot to create a lot of text.

An AI bot can’t be held accountable, so isn’t able to be a responsibility-absorbing entity. The responsibility automatically falls through to the person running it.


True. But it can help me create a lot of useful text so I can represent myself better.

I do wonder what happens when everyone is using agents for this, though. If AI produces the text and AI also reads the text, then do we even need the intermediary at all?


> I do wonder what happens when everyone is using agents for this, though.

The company is going to use AI agents to read and respond too. Some botocalypse is going to happen at some point.


> Some botocalypse is going to happen at some point.

Yeah the bots can duke it out. As long as my time is saved.

For me the main concern is, before I have a stash of millions of dollars saved up, my medical expenses need to be paid for by the system, because I can't afford surprise bills. Hopefully the bots can fight more on my side in the near future.

Hopefully in the far future when the botocalypse happens I'll have saved up enough that insurance evading payment of $5500 won't be an issue for me, and/or I'll be of retirement age, don't need job opportunities anymore, and can go live in a country with better healthcare.

Call me selfish, but I don't control the insurance/medical system, I don't have space to think about more than protecting myself from it.


> I do wonder what happens when everyone is using agents for this, though.

Unless one is very cavalier with one's definition of "everyone", this is not going to happen.

There will always be a very significant cohort of people who are emphatically uninterested in replacing their own judgement and composition skills with an Averages Machine.


The bot doesn't need to be held accountable. It only needs to spew out the right text that triggers humans to rightfully transfer accountability from me to the insurance company.

> We need to know if the email being sent by an agent is supposed to be sent and if an agent is actually supposed to be making that transaction on my behalf. etc

Isn’t this the whole point of the Claw experiment? They gave the LLMs permission to send emails on their behalf.

LLMs cannot be responsibility-bearing structures, because they are impossible to actually hold accountable. The responsibility must fall through to the user because there is no other sentient entity to absorb it.

The email was supposed to be sent because the user created it on purpose (via a very convoluted process but one they kicked off intentionally).


I'm not too sure what you're asking, but that last part, I think, is very key to the eventual delegation.

The idea is that we can verify the lineage of the user's intent, captured at the start and validated throughout the execution process, and eventually use it as an authorization mechanism.

Google has a good thought model around this for payments (see verifiable mandates): https://cloud.google.com/blog/products/ai-machine-learning/a...
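To make the "verifiable mandate" idea concrete, here is a minimal sketch of how a user-signed mandate might work: the user signs their purchase intent once, and downstream agents carry that signature as proof the intent originated with the user and has not been altered. This is a toy illustration using an HMAC with a hypothetical per-device key, not Google's actual schema or protocol, and all field names here are made up.

```python
import hmac
import hashlib
import json

# Assumed per-device secret held only by the user's device (illustrative).
USER_KEY = b"user-device-secret"

def sign_mandate(intent: dict) -> dict:
    """Sign the user's purchase intent so agents can carry it as a mandate."""
    payload = json.dumps(intent, sort_keys=True).encode()
    sig = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    return {"intent": intent, "signature": sig}

def verify_mandate(mandate: dict) -> bool:
    """Check that the intent an agent presents is the one the user signed."""
    payload = json.dumps(mandate["intent"], sort_keys=True).encode()
    expected = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mandate["signature"])

# A downstream agent presents the mandate; any tampering with the intent
# (e.g. raising the price cap) invalidates the signature.
mandate = sign_mandate({"item": "widget", "max_price_usd": 20})
print(verify_mandate(mandate))
```

A real system would presumably use asymmetric signatures (so verifiers don't hold the user's secret) plus expiry and scope constraints, but the core trick is the same: authorization travels with the intent, not with the agent.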


I see a lot of discussion on that page about APIs and sign offs, but the real sign-off is installing anything on your computer, and then doing things.

The liability is yours.

Claude messes up? So sad, too bad, you pay.

That's where the liability needs to sit.

And one point on this is, every act of vibe coding is a lawsuit waiting to happen. But even every act by a company is too.

An example is therac-25:

https://en.wikipedia.org/wiki/Therac-25

Vibe coding is still coding. You're giving instructions on program flow, logic, etc. My rant here is, I feel people think that if the code is bad, it's someone else's fault.

But is it?


It was more of a rhetorical question.

Anyway, that payment system looks sort of interesting. It seems to have buy-in from some of the payment vendors, so it might become a real thing.

But, you can give a claw agent your credit card number and have it go through the typical human-facing shop fronts, impersonating you the whole time and never actually identifying itself as a model. If you’ve given it the accounts and passwords that let it do that, it should be possible to use the LLM to perform the transaction and buy something. It can just click all the buttons and input the numbers that humans do. What is the vendor going to do, disable the human-facing shopfront?


I'm not a fan of the payment use case and agree with your take; I'm just a fan of the cryptographically verifiable mandate used throughout the process.

Apparently AWS sovereign cloud is designed to continue operating even if the US offices cut them off. The servers are in the EU and the people running them are subject to EU laws, not US ones.

Realistically a US executive could be legally required to give an EU engineer a command that they legally couldn’t follow. At that point I guess we find out if the engineers’ national or corporate identities are dominant. I suspect the former in most cases, but who knows?


The US exec probably doesn't want to order them either. So the game would be played and they did their best. There's another article about the US fighting data sovereignty requirements/laws in other countries, but that relies on their quickly dwindling soft power.
