Write speed is probably the least important metric for people who are considering something like this. Once storage and longevity are taken care of, improving write speed is nice to have, but it isn't the important part.
How do you need to supervise this "less" than an LLM that you can feed input to and get output back from? What does it mean that it's "running continuously"? Isn't it just waiting for input from different sources and responding to it?
Like the person you're replying to, I just don't understand. All the descriptions are just random cool-sounding words and phrases strung together, and none of them provide any concrete detail about what it actually is.
I’m sure there are other ways of doing what I’m doing, but openclaw was the first “package it up and have it make sense” project that captured my imagination enough to begin playing with AI beyond simple copy/paste stuff from chatGPT.
One example from last night:
I have openclaw running on a mostly sandboxed NUC on my lab/IoT network at home.
While at dinner, someone mentioned I should change my holiday-light WLED pattern from Valentine's Day to St. Patrick's Day.
I just told openclaw (via a chat channel) the WLED controller hostname, and asked it to propose some appropriate themes for the holiday, investigate the API, implement the chosen theme, and set it as the active sundown profile.
I came back home to my lights displaying a well chosen pattern I’d never have come up with outside hours of tinkering, and everything configured appropriately.
Went from a chore/task that would have taken me a couple hours of a weekend or evening to something that took 5 minutes or less.
All it was doing was calling out to Codex for this, but it acting as a gateway/mediator/relay for both the access channel part plus tooling/skills/access is the “killer app” part for me.
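For anyone curious what that WLED step amounts to: WLED controllers expose a JSON HTTP API, so the agent's final action is just an HTTP POST. A minimal sketch, where the hostname is a placeholder and the color choices are only illustrative:

```python
import json
import urllib.request

WLED_HOST = "http://wled-holiday.local"  # placeholder hostname


def theme_payload(colors):
    """Build a /json/state payload: strip on, segment 0 set to a solid theme."""
    return {"on": True, "seg": [{"id": 0, "col": colors, "fx": 0}]}


def apply_theme(colors):
    """POST the theme to the controller's JSON API."""
    req = urllib.request.Request(
        f"{WLED_HOST}/json/state",
        data=json.dumps(theme_payload(colors)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# St. Patrick's Day-ish: green, white, dark green (RGB triples)
# apply_theme([[0, 128, 0], [255, 255, 255], [0, 64, 0]])
```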
I also worked with it to build a Proxmox VE API skill, and it can now repeatably spin up VMs with my normalized defaults, including brand-new cloud-init images of Linux flavors I'd never configured on that hypervisor before. It's a chore I hate doing, so now I can iterate in my lab much faster. It's also very helpful for spinning up dev environments of various software to mess with on those VMs after creation.
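As a sketch of what a skill like that wraps: Proxmox VE's REST API creates a VM with a single POST to /api2/json/nodes/{node}/qemu. Everything here (host, token, storage and bridge names, the specific defaults) is a placeholder, not anyone's actual config:

```python
import json
import urllib.parse
import urllib.request

API = "https://pve.example.lan:8006/api2/json"       # placeholder host
TOKEN = "PVEAPIToken=automation@pve!claw=<secret>"   # placeholder API token


def vm_config(vmid, name, memory_mb=2048, cores=2):
    """Normalized defaults for a cloud-init VM (storage/bridge names assumed)."""
    return {
        "vmid": vmid,
        "name": name,
        "memory": memory_mb,
        "cores": cores,
        "net0": "virtio,bridge=vmbr0",
        "ide2": "local-lvm:cloudinit",  # cloud-init config drive
        "ciuser": "admin",
        "ipconfig0": "ip=dhcp",
    }


def create_vm(node, cfg):
    """POST the config to create the VM on the given cluster node."""
    req = urllib.request.Request(
        f"{API}/nodes/{node}/qemu",
        data=urllib.parse.urlencode(cfg).encode(),
        headers={"Authorization": TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# create_vm("pve1", vm_config(9001, "dev-ubuntu"))
```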
I haven’t really had it be very useful as a typical “personal assistant” both due to lack of time investment and running against its (lack of) security model for giving it access to comms - but as a “junior sysadmin” it’s becoming quite capable.
Great story. And it distills what the claw stuff is all about: in terms of utility, it's actually here. It's the multitude of "channels" you can enable out of the box that let you speak with the actual AI agent, with access to the configured environment.
Yeah, and if you give another human access to all your private information and accounts, they need lots of supervision, too; history is replete with examples demonstrating this.
But there's typically plenty at stake for the recipient. If my accountant tried to use my financial information in some improper way, he'd better have a good plan for what comes next.
I don't have one going, but I do get the appeal. One example: it could be prompted behind the scenes every time an email comes in, and it sorts the mail, unsubscribes from spam, and handles the other tedious-but-necessary stuff you have to do yourself now. That is something running in the background: not necessarily continuously in the sense that it's going every second, but it could be invoked at any point in time on an incoming email. That particular use case wouldn't sit well with me with today's LLMs, but if we got to a point where I could trust one to handle this task without screwing up, then I'd be on board.
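A sketch of what "running in the background" could mean in practice: a loop that wakes on new mail, asks a classifier what to do, and acts on the label. The server, credentials, and the trivial classify() stub (standing in for the LLM call) are all placeholders:

```python
import email
import imaplib
import time


def classify(subject, sender):
    """Stub for the LLM call; trivially labels obvious list mail."""
    if "unsubscribe" in subject.lower():
        return "newsletter"
    return "inbox"


def run_once(imap):
    """Fetch unseen messages and dispatch each to the classifier."""
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        label = classify(msg.get("Subject", ""), msg.get("From", ""))
        # ...move, flag, or unsubscribe based on the label


def main():
    imap = imaplib.IMAP4_SSL("imap.example.com")  # placeholder server
    imap.login("user", "password")                # placeholder credentials
    while True:  # "continuously" = an idle loop waiting for input
        run_once(imap)
        time.sleep(60)
```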
Of course it can "model time". It has access to the system clock and knows its heartbeat rate. Can you "model time" when you are asleep? Whatever "model time" means, to be frank it sounds like projection.
> Or feeling things for that matter.
The philosophical zombie thought experiment; the conclusion is that qualia don't matter, only I/O. If two systems have the same behavior, there is no meaningful difference.
"What I find annoying is repetitive stuff that's just typing"
..
"Where I can't trust AI is if it needs to copy paste / duplicate code"
???
AI takes away the "boring", "tedious" parts of coding for you, yet at the same time you don't trust it to even just duplicate code from one place to another?
It's so interesting how many people with these big AI fears think that AI is going to replace most knowledge work within a short period of time, singularity, etc., but that same AI that takes over everything somehow isn't going to be smart enough to operate robotics to do plumbing or welding? Those things will be outside the limits of its intelligence?
It's been my belief for over 20 years now that dedicated/instrumented roads for autonomous vehicles are the only way autonomous cars will ever be a thing at mass scale, short of the invention of true AGI (which I still don't think we're close to). I doubt such roads will become a thing within the next 6 years, though.
I think there might be a trial stretch of road somewhere in a few years, although surely not widespread. Such a thing feels inevitable to me, though, if we’re going to have self-driving cars at all.
Isn't a major feature of consensus algorithms that they tolerate failures? Even basic algorithms take error handling into account and shouldn't be taken out by a bit flip in any one component.
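The classic mitigation is exactly that: redundant units plus a majority vote, so a single upset reading gets outvoted. A toy illustration of the idea (not the actual avionics logic):

```python
from collections import Counter


def vote(readings):
    """Return the strict-majority value among redundant unit readings."""
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority: multiple units disagree")
    return value


# A single bit-flipped unit is masked by the other two:
# vote([42, 42, 1070]) returns 42
```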
Yes. To clarify, my understanding of _this_ particular incident was wrong because it was based on reading the report of a previous incident.
But for the 2008 incident whose report I read and linked, that was what happened. The ADIRU probably did get an SEU, and that should have been mitigated by the design of the ELAC unit. The ELAC failed to mitigate it, so that's the part they probably fixed.
1) Countries have major political interest in whether other countries have nuclear reactors
2) Countries are already, at large scale, manipulating discourse across the internet to achieve their political goals
Then of course it follows that any comment thread on a semi-popular or larger site about whether a country should build more nuclear reactors is going to be heavily manipulated by said countries. That's where most of the insane people in such threads are probably coming from.
How are we supposed to survive as a civilization with such corrupted channels of communication?
What is, according to you, the political interest?
There are countries with an interest in selling their gas or oil.
It is not clear whether they are for or against other countries going nuclear: on one hand, nuclear would replace part of their market. On the other hand, lobbying for nuclear may impede progress in replacing gas and oil with renewables (a strategy would be to lobby for the nuclear project to start, then lobby so that the project stagnates and never delivers).
There are countries with an interest in seeing nuclear adopted because they have a market in ore extraction or waste processing, and countries with an interest in seeing it not adopted because their market is built around other forms of generation.
Finally, some countries may want to see their neighbors adopt nuclear: the neighbor pays all the up-front bills and takes all the risk (economic, but also PR, the cost of educating experts, ...), and if they succeed, they will export energy very cheaply, filling the gaps the country did not want to invest in.
So it is not clear that there is just one stream of lobbying. The reality is probably that every "side" contains some manipulative discourse from foreign countries.
Does this also apply to fossil-energy threads? Countries have a major political interest in whether other countries use fossil energy, to mitigate the climate catastrophe and ramp down fossil use.
I really, really, wish somebody would actually put together a real reliability report. You know, by actually getting hard data on what repairs different models need, how often different models break down, how long different models last, etc. That's how you should rate reliability.
The Consumer Reports model of just surveying a random collection of people about what they personally think about the reliability of cars is not hard data. They don't collect any data themselves; they just take random people's beliefs as the data. It's also an ouroboros: what they rate as reliable or unreliable one year then influences what people believe when they're surveyed the next year about what's reliable.
All major communication forums on the internet have been mass manipulated/poisoned by countries across the world for well over a decade now. A huge chunk of all internet speech is inauthentic. In my mind, AI videos really don't degrade the situation much further. The internet as a communication medium has already been completely compromised for a long time.
"[..] deploying a solar array with photovoltaic cells – something essentially equivalent to what I have on the roof of my house here in Ireland, just in space. It works, but it isn't somehow magically better than installing solar panels on the ground – you don't lose that much power through the atmosphere"
As an armchair layman, this claim intuitively doesn't feel very correct.
Of course AI is far from a trustworthy source, but just using it here to get a rough idea of what it thinks about the issue:
"Ground sites average only a few kWh/m²/day compared to ~32.7 kWh/m²/day of continuous, top-of-atmosphere sunlight." .. "continuous exposure (depending on orbit), no weather, and the ability to use high-efficiency cells — all make space solar far denser in delivered energy per m² of panel."
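The ~32.7 figure is easy to sanity-check: it's just the solar constant (about 1361 W/m² at the top of the atmosphere) times 24 hours of continuous exposure:

```python
# Solar constant: mean top-of-atmosphere irradiance, ~1361 W/m^2
SOLAR_CONSTANT_W_PER_M2 = 1361

# Continuous exposure over a full day, converted to kWh/m^2/day
kwh_per_m2_per_day = SOLAR_CONSTANT_W_PER_M2 * 24 / 1000
print(round(kwh_per_m2_per_day, 1))  # 32.7, matching the quoted figure
```

The real comparison is messier than raw insolation, of course: ground panels also lose to night, weather, atmosphere, and panel angle, while space panels must pay for launch and for beaming losses on the way down.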