>The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history, so there is already broad consensus around it across time and cultures.
The rules we go by are based on our strengths and weaknesses. At most they apply to ourselves, and to other forms of life that share certain things with us: feeling pain, needing to sleep, to eat, to breathe air, needing help. These are the things that generate what we experience as "fear", rooted in biology. You cannot impose that kind of value system on AI, or AGI, as it will possess a wildly different set of strengths and weaknesses than we humans do.
You can barely apply these rules to humans, as the first thing we do is dehumanize anything that's not in some very tiny classification (which shrinks as our power grows: the more powerful we are, the smaller it gets).
I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can, practically you cannot, as there are other things required for pulling that off, not just two working hands.
I doubt "stopping it" is up to anyone, it's rather a phenomenon and it's quite clear we're all going to wing it. It's a literal fight for power, nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it, just to guarantee its power.
It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.
Gunpowder (weapons) and atomic tech (energy, material, weapons) are heavily regulated in most of the planet, as the risks of having free access to them for everyone (company/person) for their own selfish purpose without strong guardrails clearly outweighs the benefits.
The fact that something exists doesn't mean that having it readily available is the only option, particularly if it has potentially disastrous consequences at scale. We are choosing to make it available to everyone fully unregulated, and that is a choice that will prove either beneficial or detrimental to society at some point.
I don't think it is inevitable, I think it is a conscious choice made by a few that have their own and only their own interests in mind.
As a technologist, I am amazed at this tech and see some personal benefits. As a human, I am terrified of the potential net negative effects, and I am having trouble reconciling those two feelings.
The challenge is that enforcing a ban would presumably require strict incursions into personal freedoms organized at a scale where AI-based solutions would be particularly effective and thus tempting, paradoxically.
On the other hand, assuming the dangers are real, you lose by default if you do nothing.
One cannot (in most of the planet) go to the supermarket and buy an M16 and a box of hand grenades, or get hold of a couple of kg of plutonium because they want some free energy at home.
We also have rules in place about what one individual/company can and cannot do from the point of view of the greater good. I cannot go and kill my neighbour for my benefit (or purposefully destroy his life) without consequences. A myriad of things are not allowed, and I don't see people complaining about any incursion into personal freedoms.
The reason people have accepted these is that we have already proven that free access to those things could be catastrophic. We haven't proven that yet with AI.
But I don't see much difference between those established and well-accepted rules, and a rule that says: a company cannot release, or use for its own benefit, a technology that will impact the need for humans at scale, because of the impact (again, at scale) that it would have on society.
In other words, if you are a company and have the potential to release a product, or buy a product from a provider that would cause mass unemployment, should you be legally allowed to do so? I do not think so.
That’s a fair objection. Having ruminated on it some more, I’ll admit it might be tenable.
As for achieving an effective ban, occupational collapse might be the stronger motivator once workplace adoption broadens and accelerates, but risk of epistemic collapse might register sooner among the general public, already broadly suffering slop.
Like Bill Gates, I wonder why it’s not yet become a theme in mainstream politics.
That's not the human norm though. I doubt the average human way of existing is literal torture, except for a vanishingly small number of people. I think you're missing the forest for the trees with that BDSM example. You can always find isolated counter-examples for basically anything, but in reality those are statistical outliers.
Due to the complexity of our reality, a lot of things fall on a spectrum, but in aggregate the numbers are pretty clear.
I removed the battery but kept the I2C chip/PCB, and fed 5V from a USB port through a diode into the PCB's battery connections; it seems to work fine. In practice it's a single wire from USB VCC, through the diode, to the + battery terminal. But you need to power the Kindle from something that can deliver at least 1.5 A for startup peaks. A USB hub does the job fine in my case, and it also connects the Kindle to a Raspberry Pi for SSH over USB networking, so no WiFi either. Use a good USB cable for power.
Bricked it a few times in the process of figuring out more stuff about it, but luckily mine has UART pads and I was able to restore it every time. It's a bit more involved since the UART is 1.8V if I remember right, but if you're careful it should be easy, provided you have the time.
I removed the battery on mine, kept the battery chip, and fed 5V into the battery terminals from the Kindle's USB connector, through a diode (so 4.4V-ish). Without a battery it needs something that can deliver at least 1.5 A in short bursts. An older powered USB hub seems to work fine; the hub is connected to my Raspberry Pi, and I use SSH over USB networking. No WiFi, no battery, and it's worked fine for months now.
I took an even simpler route. After jailbreak and SSH I just made two scripts on the Kindle: one is triggered every minute, the other every half hour. Both draw the same image from the same location; the 30-minute one just adds a full refresh. This way the display is not fully refreshed every minute, but over time the image degrades, so a full refresh every 30 minutes works out fine.
This way the Kindle has a very simple job: no apps installed, nothing extra, just two cronjobs running one-liner bash scripts that draw the image. I use rsync from a Raspberry Pi to push a new image every minute. That image is assembled by a Python script on the Pi side with air quality data: it connects to a local MySQL server, pulls the values, and then renders the image.
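A minimal sketch of what the Pi-side script might look like, using Pillow. The hard-coded readings, canvas size, and filename are all illustrative stand-ins; the real script pulls values from MySQL and the actual Kindle resolution depends on the model.

```python
# Render a simple air-quality status image for the Kindle to display.
# Readings are hard-coded here; the original pulls them from MySQL.
from PIL import Image, ImageDraw

readings = {"PM2.5": 7.4, "PM10": 12.1, "CO2": 612}  # placeholder values

# Grayscale canvas; 600x800 is a typical older-Kindle portrait resolution.
img = Image.new("L", (600, 800), color=255)
draw = ImageDraw.Draw(img)

y = 40
for name, value in readings.items():
    draw.text((40, y), f"{name}: {value}", fill=0)  # black text, default font
    y += 60

img.save("status.png")  # then push it, e.g. rsync status.png kindle:/mnt/us/
```

On the Kindle side, the per-minute cron job can blit the pushed image with the stock `eips` tool (e.g. `eips -g /mnt/us/status.png`), and the half-hourly one can clear the screen first (`eips -c`) for the full refresh.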
A pretty dumb eInk display that could do one thing, namely receive and blit a bitmap at a given location, would suffice for a great many uses. It only needs a way to connect to WiFi or Zigbee securely, e.g. using TLS.
This is sort of related to a revelation I had once I got into Home Assistant.
The usual idea is that a smart home becomes filled with smart devices, and yet what worked really well for me was having dumb devices with a very smart brain in the middle.
Buttons, switches, lamps, and sensors are commodity Zigbee devices, and the entirety of the logic and programming is done on the Home Assistant server. The downside is latency.
Usually you can bind ZigBee devices together. I have multiple IKEA "rodret" switches bound to generic ZigBee smart plugs from Aliexpress. Works great, with minimal latency.
With zha, you can bind them together from the Home Assistant device page.
I usually favor an architecture that can work without Home Assistant, such as standalone ZigBee dimmers, or contactors that can work with existing wiring. Home Assistant brings automation on top, but it doesn't matter much if it breaks (I mostly notice the shutters not opening with sunrise). Then Internet connectivity can bring additional features, but most things still work if it's down.
I'd say it has been pretty solid for years, and I don't stress too much when I have server issues.
I thought this was pretty much a known fact by now. To make more money. They sell the data, or monetize it somehow. They disguise doing it under all kinds of "features" which indeed might be useful for some people.
What should ring your alarm bells is any device that needs you to make an account, at least once when setting it up. That's valuable data: who you are, where you live, email, phone number, etc. If you cannot fully use the product without at least one initial connection to the internet, your data will be monetized; that's the reason you can't use it otherwise, as they need to get something out of you.
Of course there are features that don't work, or don't make any sense, without internet access. But if you cannot wash your clothes without an account or an initial connection to the internet... that's sus.
I always thought the "alpha male" is the one who calls the shots. That's it. I never saw any relation to animals. Most likely the "alpha animal" model was used as a parallel, but you cannot deny the role. It's self evident almost everywhere. Someone is calling the shots. If you do not obey them there are consequences.
At your workplace that is your boss. If you do not do what is required of you, the consequences are that you get fired. They are real and tangible and unavoidable, if you disobey.
How does disproving the alpha thing in wolves change anything about how we interact? People who hold power over other people will still use it, no matter what we call it. This is a simple game theory issue, changing words and descriptions won't change the fundamentals of it.
The role for what people "incorrectly" called "alpha male" is not one we "agree" on, it's one that is self evident by the power such individual holds in that group. This has nothing to do with what I or you or anyone thinks of it. You can ignore such an individual, or you cannot. If you can ignore them, they do not have that power over you. If you cannot ignore the repercussions then they do indeed have that power over you. That's pretty much all there is to it. Changing what we call it won't change their behavior or the outcome of these kinds of interactions.
For example, gorillas do have alpha males in the group: the silverbacks. Not obeying them leads to real consequences.
edit: Just for clarity's sake, I am no fan of "that" masculinity model; I'm just talking about the reality of things, almost everywhere on this planet. Of course there are all kinds of exceptions, but they aren't really important in the grand scheme of things.
I don't think it's anything other than electrical activity, but it's clearly not "some electrical signal". It's the totality of them. They are many, and complicated. And they seem to be required for consciousness. I doubt there's any proven conscious state in a human lacking electrical activity in the brain.
Regarding "just electrical activity": I think you can add empirical evidence that it's chemical as well, since beer and other substances can affect your perception of reality.
There was an AMA about conjoined twins on Reddit a couple of years ago, and one of the interesting parts was that they could each sense how the other twin is feeling in terms of emotions. This is due to a lot of emotional states being based on hormones that flow through their shared blood stream.