Doing concurrency in Rust was more complex (though not overly so) than doing it in Golang, but the fact that the compiler will outright not let me pass mutable refs to each thread does make me feel more comfortable about doing it at all.
Meanwhile I copy-pasted a Python async TaskGroup example from the docs and still found that, despite a TaskGroup being specifically designed to await every task and only return once all are done, it returned the instant the loop completed and the tasks were created, and then the program exited without having done any of the work.
The difference there, in Debian's case at least, is that there's a distinction between the frontend and the configuration backend; you're probably most familiar with the `newt` frontend, but there's also `text` (for textual entry without using curses or anything), `noninteractive` (for just using the defaults), `gnome`, `kde`, `teletype`, or even `web`, which does not seem to work effectively but is a neat idea regardless.
TUIs which are just TUI views of data you can get otherwise are fine; TUIs which are the only way to interact with something... less so.
I like TUIs, but given the choice between a TUI and a CLI program, I'd take the latter. You can always build your own interface on top of it. Even better if it's backed by a library.
Look at those libtard euros! They'll put up with anything their government tells them to - mandated vacation time, sick days, health care, work-life balance. But not me, I'm a FREE THINKER. I have RIGHTS, like the RIGHT to get fired out of nowhere for no reason, or the RIGHT to lose my health insurance if I lose my job. Thank god there are no AUTHORITARIANS here in AMERICA where people are FREE to get SHOT IN THE STREET for DRIVING THEIR CARS or TAKING A PICTURE or BEARING ARMS WHICH IS A CONSTITUTIONAL RIGHT BUT THAT ONE GUY DID IT AND DESERVED TO GET MURDERED THIS ONE TIME.
Just a slight correction in case anyone is reading this who loses their job:
In most states, if you lose your job, your income is now $0 and you will be eligible for Medicaid and free medical care. Immediately go apply and see a social worker if you lose your job - it could be life-saving!
Really sorry in advance, but I thought this whole HN thread could use a bit of positivity. I turned your satire into a mad-lib and asked AI to fill it in in a happy way.
But not me, I’m a dreamer. I have gifts, like the courage to kindle hope, or the patience to lose
track of time if I am laughing with friends. Thank god there are no
frowns here in this sun-drenched park where people are gathering to get
together for picnics or music or stargazing.
> An AI agent like this that requires constant vigilance from its human operator is too flawed to use.
So people shouldn't be using it then.
The people who built the AI agent system built a tool. If you get that tool, start it up, and let it run amok causing problems, then that's on you. You can't say "well it's the bot writer's fault" - you should know what these things can do before you use them and allow them to act out on the internet on your behalf. If you don't educate yourself on it and it causes problems, that's on you; if you do and you do it anyway and it causes problems, that's also on you.
This reminds me too much of the classic 'disruption' argument, e.g. Uber 'look, if we followed the laws and paid our people fairly we couldn't provide this service to everyone!' - great, then don't. Don't use 'but I wanna' as an excuse.
How Openclaw works is wildly irrelevant. The facts are that there is a human out there who did something to configure some AI bot in such a way that it could, and did, publish a hit piece on someone. That human is, therefore, responsible for that hit piece - not the AI bot, the person.
There's no level of abstraction here that removes culpability from humans; you can say "Oops, I didn't know it would do that", but you can't say "it's nothing to do with me, it was the bot that did it!" - and that's how too many people are talking about it.
So yeah, if you're leaving a bot running somewhere, configured in such a way that it can do damage to something, and it does, then that's on you. If you don't want to risk that responsibility then don't run the bot, or lock it down more so it can't go causing problems.
I don't buy the "well if I don't give it free rein to do anything and leave it unmonitored then I can't use it for what I want" - then great, the answer is that you can't use it for what you want. Use it for something else or not at all.
As recently as last month I would have agreed with you without reservation.
Even last week, probably with reservation.
Today, I realize the two of us are outnumbered at least a million to one.
Sooo.... that's not the play.
I think Scott Shambaugh is actually acting pretty solidly. And the moltbot - bless their soul.md - at very least posted an apology immediately. That's better than most humans would do to begin with. Better than their own human, so far.
Still not saying it's entirely wise to deploy a moltbot like this. After all, it starts with a `curl | sh`.
(edit: https://www.moltbook.com/ claims 2,646,425 ai agents of this type have an account. Take with a grain of salt, but it might be accurate within an OOM?)
All the separate pieces seem to be working in fairly mundane and intended ways, but out in the wild they came together in unexpected ways. Which shouldn't be surprising if you have a million of these things out there. There are going to be more incidents for sure.
Theoretically we could even still try banning AI agents; but realistically I don't think we can put that genie back into the bottle.
Nor can we legislate strict 1:1 liability. The situation is already more complicated than that.
Like with cars, I think we're going to need to come up with lessons learned, best practices, then safety regulations, and ultimately probably laws.
At the rate this is going... likely by this summer.
I'm updating my thinking. Where do we put the threshold for malice, and for negligence?
Because right now, a one in a million chance of things going wrong (this month) leads to a prediction of 2-3 incidents already - and anecdata across the HN discussions we've had suggests we're at that threshold. And one in a million odds of trouble isn't, in itself, normally considered wildly irresponsible.
For humans, who are capable of perhaps a few dozen significant actions per day, that may be true. But if that same one-in-a-million rate applies to a bot that can perform 10 million actions in a day, you're looking at ten incidents per day. So perhaps you should be looking at mean time between failures rather than only the positive/negative outcome ratio?
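The arithmetic is worth spelling out - a back-of-envelope sketch, using the thread's illustrative numbers (one-in-a-million per action, a few dozen human actions a day, 10 million bot actions a day), not measurements:

```python
FAILURE_RATE = 1e-6            # one-in-a-million chance a single action goes wrong
HUMAN_ACTIONS_PER_DAY = 50     # "a few dozen significant actions"
BOT_ACTIONS_PER_DAY = 10_000_000

def expected_incidents_per_day(actions_per_day, failure_rate):
    # expected incidents/day = exposure * per-action failure rate
    return actions_per_day * failure_rate

def mtbf_days(actions_per_day, failure_rate):
    # mean time between failures, in days (reciprocal of the incident rate)
    return 1 / expected_incidents_per_day(actions_per_day, failure_rate)

print(expected_incidents_per_day(BOT_ACTIONS_PER_DAY, FAILURE_RATE))  # ~10 incidents/day
print(mtbf_days(HUMAN_ACTIONS_PER_DAY, FAILURE_RATE))                 # ~20,000 days (~55 years)
```

Same per-action reliability, wildly different exposure: the human goes decades between failures, the bot fails hourly - which is why MTBF is the more honest lens here.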
If you look at the bot framework used here, it's actually outright kind. Weird thing to say, but natural language has registers, and now we're programming in natural language, and that's the register that was chosen.
And... these bots tend to only do a few dozen actions per day too; they're running on Pis and Mac Minis and NUCs and VPSes and such. (And API credits add up besides.)
It's just that last time I blinked there were 2 and a half million of them. I've blinked a few times since then, so it might be more now. I do think they're limited by operator resources. But when random friends start messaging me about why I don't have one yet, it gets weird.
Given how many cars have Carplay or Android Auto, but also have their own e.g. Toyota app that you need to/ought to install, it feels as though this isn't that far off from how things basically are.
Personally, I'd be happy with some kind of situation where:
1. You have a small in-dash touchscreen, like most small sedans have these days, as the basic level of "backup camera and radio view"
2. Everything the car does has a physical button so you don't NEED to use the touchscreen
3. The car has a USB-C port that can power a tablet and provides a standardized interface that e.g. iOS and Android can talk to, so users don't have to worry that their new OS doesn't support an un-updated app, or that the app doesn't support their un-updated device
4. Sell an optional tablet mount that attaches to the dash where a built-in screen would go
5. Sell an optional 'tablet' that does nothing but plug into the USB-C port and display that interface, in case someone wants a larger screen without having to buy an iPad Pro
Then again I don't drive, so I'd be happy with none of this also.
Concurrency woo~