Everyone cites some niche "human-only" jobs to argue AI won't replace labor. But most of the economy runs on things like document processing, logistics, retail, and factory work: high-volume, repeatable, rule-driven tasks. In those areas, we're already on the brink of full automation. Autonomous retail stores, delivery fleets, and smart factories are either here or imminent. It's not about AI scratching backs; it's about replacing jobs that move trillions of dollars.
Sure, top-tier researchers, system engineers, and other highly skilled knowledge workers will still be in demand. But for mass labor disruption, AI doesn't need to beat them; it only needs to outperform the average human.
You're describing valueless automation. We can build an assembly line and mass-produce cars, but that only has value if society is restructured around it. The food delivery industry only moves a trillion dollars (globally) because it's incredibly wasteful, not because it creates value. Most of the value in going to a restaurant was in the experience and culture; the food itself is just a blend of fats, carbs, and protein, yet you pay more for the "luxury" of eating at home or at work.
You'll have cities built to serve cars and food made to serve delivery and worker drones. In the pursuit of optimization you'll end up back where you started, when there was only one cafeteria within walking distance.
Anyway, we aren't "on the brink of full automation". That's ridiculous; people always think this because they have no idea how brittle automated systems are. To get a generally intelligent robot that operates in the real world, you have to go WAY beyond replacing knowledge workers. The brain uses only about 1W more when it's working at full tilt, roughly 5% above its ~20W baseline. For any physical job, it's the body doing the work: the full body at rest uses about 100W, walking takes about 300W, manual labor about 600W, and a full sprint can peak at 2,000W. That's an absurd range, made possible only by trillions of cells packed with ATP and billions of microscopic capillaries full of glucose that get pulled into your muscles the second you use them. Automation only works in closed systems. Give it 2,000 years and maybe someone makes AGSI, at which point the robotics problem becomes approachable; but if it were smart, it'd just declare the problem impossible without biotech.
Energy is just one component of cost per unit of work, and in most modern economies it's a very small one.
Industry has always traded energy efficiency for productivity. Excavators burn huge amounts of fuel compared to humans with shovels, data centers consume megawatts compared to the brain's ~20W, and forklifts are nowhere near biologically efficient. They still win because the economic cost per unit of work is lower.
In the case of AI, the key difference is how the systems scale.
Human labor scales linearly: if you want 10x more output, you hire 10x more workers, and the cost scales roughly the same. A human brain might run on ~20W, but each worker is still €20–€50+ per hour and can only do one task at a time.
AI systems scale more like software infrastructure. Once the model and servers exist, the marginal cost of additional tasks is mostly compute and electricity. A data center might burn far more energy than a human brain per task, but it can handle thousands or millions of tasks in parallel and run 24/7. The cost per task can end up being cents even if the system is much less energy efficient "per brain".
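The scaling argument above can be sketched with a toy model. All of the numbers here (wage, tasks per hour, fixed infrastructure cost, marginal cost per task) are illustrative assumptions, not data; the point is only the shape of the two curves:

```python
# Hypothetical comparison of cost scaling: human labor vs. amortized AI inference.
# Every number below is an illustrative assumption, not a measurement.

def human_cost(tasks: int, tasks_per_hour: float = 10.0, wage_eur: float = 35.0) -> float:
    """Labor scales linearly: more tasks means proportionally more paid hours."""
    return tasks / tasks_per_hour * wage_eur

def ai_cost(tasks: int, fixed_eur: float = 1_000_000.0, marginal_eur: float = 0.01) -> float:
    """AI scales like infrastructure: a large fixed cost, then a tiny marginal cost per task."""
    return fixed_eur + tasks * marginal_eur

for n in (10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} tasks: human €{human_cost(n):>12,.0f}   ai €{ai_cost(n):>12,.0f}")
```

At small volumes the human is far cheaper; past the crossover, the fixed cost is amortized away and the AI's cost per task collapses toward its marginal cost, regardless of how energy-inefficient each individual task is "per brain".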
Labor is expensive because you're not just paying for the task; you're paying for the worker's entire life infrastructure. Wages have to cover housing, food, healthcare, transportation, retirement, taxes, etc., so the price of labor largely reflects the cost of sustaining a human being in society, not just the marginal cost of performing the work.
Could it be that the only large safety-first companies are the ones forced by law (either proactively, or due to reliable legal accountability if things go wrong) to be safety-first?
> There are only so many safety-first companies and products
There are only so many companies that think of themselves as safety-first. In practice, basically all companies work on things that should be safety-first.
Does your software store user data? Congrats, you are now on the hook for GDPR and a bunch of similar data handling regulations.
Does your software include a messaging component? You are now responsible for moderating abusive actors in your chat.
Does your software allow users to upload images? Now you are a potential distribution vector for CSAM.
And so on... safety isn't just for things which can cause immediate death and dismemberment.
There's a difference between "safety matters" and "safety is the primary constraint".
Most companies manage risk to an acceptable level while optimizing for speed and cost. Aerospace companies optimize for minimizing catastrophic failure, even at extreme expense.
Treating a potential GDPR fine as equivalent to a flight-control failure ignores that society, regulators, and markets treat those risks very differently.
The inconvenience and economic cost of your Discord messages leaking is not the same category of harm as your pacemaker controller failing.
And because the majority of economic activity sits in that lower-criticality category, it would not be surprising if highly specialized, safety-critical human software engineering becomes more of a niche, while much of routine software development becomes increasingly automated or commoditized.
> Treating a potential GDPR fine as equivalent to a flight-control failure ignores that society, regulators, and markets treat those risks very differently
Agreed, though I think that if GDPR fines were actually being levied at the recommended 4% of global revenue, we'd start treating them more similarly to a 737 crash.
> The inconvenience and economic cost of your Discord messages leaking is not the same category of harm as your pacemaker controller failing
Sort of depends who they leak to. Your teen classmates who bully you to suicide? Your abusive ex who is trying to track you down to kill you? The 3-letter agency who is trying to rendition your family to an internment camp?
There are a lot of seemingly benign failure modes that become extremely lethal given the right circumstances. And because we acknowledge the potential lethality of something like a pacemaker failure, we have massive infrastructure dedicated to its mitigation (EMT teams, emergency external pacemakers, surgical teams who can rapidly place new leads, etc.). For things society judges less important, mitigations are often few and far between.
That's a very anthropocentric view. Technology isn't a series of deliberate inventions by us, but an autonomous, self-organizing process. The development of a spear, a bow, or a computer is an evolutionary step in a chain of technological solutions that use humans as their temporary biological medium.
The human brain is not the starting point or center of this process. It is itself a product of biological evolution, a temporary information-processing system. Its limitations, such as imperfect memory, are simply constraints of its biological origin. The tools we develop, from writing to digital storage, are not just supplements to human ability but the next stage in a system that is moving beyond its biological origins to find more efficient, non-biological forms of information storage and processing.
Human pride in creation is a misinterpretation. We are not the masters of technology; we're just its vehicle, part of a larger process of technological self-improvement that is now moving towards an era where it might no longer require us.
I think your understanding of the words "autonomous" and "self-organizing" is somewhat lacking. If there were no humans, those things would not happen.
Further, if technology were simply a byproduct of the presence of humans, the same path of invention would have been repeated multiple times, spread out across human history. But despite saltpeter, sulfur, and charcoal, magnetite, wood, and ink being present across the planet, the compass, gunpowder, papermaking, and printing were essentially exclusively invented in China and only spread to Europe through trade.
The absence of the Four Great Inventions of China in the Americas heavily implies that technology is not a self-organizing process but rather a consequence of human need and opportunity intersecting.
For instance, the Americas had the wheel, but no draft animals, so the idea was relegated to toys, despite wheelbarrows being a potentially useful application of the wheel.