Hacker News | cmiles8's comments

As written this would be the end of Anthropic. AWS, Microsoft et al are all suppliers of the DoW and as written they must immediately stop doing business with Anthropic. Will be interesting to see how this unfolds.

TACO

This is more complicated than just hand-wavy spending expectation resets. Other companies were taking these “commitments” and gearing up for capital investments to meet all that demand, which is now vaporizing. That creates a big mess as the AI hype machine starts to unravel.

This looks very much like a careful move to deflate the bubble without popping it, but we’ve likely passed that point.


The markets are skeptical at the moment. A bunch of tech IPOs in the last few years have tanked 70+% since the IPO and that can be devastating to a company.

Also, there’s a ton of overhead associated with being public that nobody really wants, so companies now stay private as long as they can get away with it.


The AI bros are saying everyone will be out of work in 5 years.

Economists and businesses are calling BS, saying AI is cool but basically adding zero measurable value, with 95% of AI projects failing.

The truth is likely somewhere in the middle, but it seems unlikely this bubble can continue much longer.


The duration of this bubble so far goes to show how incredibly rich the investors are. They are burning trillions of dollars over the years on the wild speculation that they will be able to put everyone else out of work, and will then have the power to decide who will be allowed to live and who will have to die in the fight over the last breadcrumbs. In the end they will be the ones who can afford to buy private armies to protect themselves from the hungry masses.

There are serious balance sheet concerns for these companies with exposure to OpenAI, Anthropic and such.

It’s all fun and games till it’s not. All this capital investment is going to start hitting earnings as massive depreciation and/or mark-to-market valuation adjustments, and if the bubble pops (or even just cools a bit) the math starts to look real ugly real quick.


The market is not there at all though, is it? Nobody is paying what it actually costs to deliver AI services. It is not clear to me that it is cheaper than just paying people to do the work.

Someone did a calculation using heat generated, i.e. energy usage (which is ultimately the base cost of the universe), and the human brain and body are just incredibly more cost-efficient than how we're doing AI. So for basic tasks it's just absurdly expensive to be using AI instead of a human.

Generating electricity is much cheaper than paying a human. The costs of employing a person have little to do with their energy usage.

we don't pay humans in the food they consume

We don't pay for GPUs in the energy they consume either.
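This sub-thread can be put in rough numbers. A back-of-envelope sketch in Python; every figure here (brain and GPU wattage, electricity price, wage) is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: raw energy cost per hour vs. an hourly wage.
# Every constant below is an illustrative assumption.

BRAIN_WATTS = 20            # human brain draws on the order of 20 W
GPU_WATTS = 700             # one high-end datacenter GPU under load
USD_PER_KWH = 0.10          # assumed industrial electricity price
WAGE_USD_PER_HOUR = 25.0    # assumed fully loaded labor cost

def energy_cost_per_hour(watts):
    """USD of electricity to run a `watts` load for one hour."""
    return watts / 1000.0 * USD_PER_KWH

brain_cost = energy_cost_per_hour(BRAIN_WATTS)
gpu_cost = energy_cost_per_hour(GPU_WATTS)

# The brain wins on energy efficiency by a wide margin, but either
# energy bill is tiny next to the wage: the cost of employing a
# person is overwhelmingly not their energy usage.
print(f"brain: ${brain_cost:.4f}/h  gpu: ${gpu_cost:.4f}/h  wage: ${WAGE_USD_PER_HOUR:.2f}/h")
```

Under these assumed numbers both points in the thread hold: the brain is vastly more energy-efficient than a GPU, yet the electricity cost of either is a rounding error next to the wage.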

Even the latest models are quite easily fooled about whether something is true or not, at which point they confidently declare completely wrong information to be true. They will even strongly debate with you when you push back and say, hey, that doesn’t look right.

It’s a significant concern for any sort of AI use at scale without a skilled and knowledgeable human expert on the subject in the loop.


I like AI and use it daily, but this bubble can’t pop soon enough so we can all return to normally scheduled programming.

CEOs are now on the downside of the hype curve.

They went from “Get me some of that AI!” after first hearing about it, to “Why are we not seeing any savings? Shut this boondoggle down!” now that we’re a few years into the bubble, the business math isn’t working, and all they see is burning piles of cash.


"return to normally scheduled programming" is probably not the exact phrasing you want to use. :)

I consume a lot of different content on a lot of different places. Every site or app has its vibe and communal beliefs. They rarely if ever agree on anything, but they all agree we're in a massive bubble.

I don't have a point, just that it's an unlikely unity.


This is the standard stupidity here based on emotion and denial. This is the narrative that people want to hear.

Of course, trying to automate with ChatGPT 4o was stupid. Trying to automate with Sonnet 4.6 will work better. Trying to automate with the models a year from now will work better still.

To believe we are going to stop and go back to 2019 at this stage is seriously delusional.

I wish it were true. I would love to go back to 2019 but we obviously are not. We never go backwards.


“Regularly scheduled programming” includes progress, not stasis. It just means the AI hype firehose runs out of money and we return to normal, sane progress and productivity enhancements without all the overpricing and under-delivering of the last few years.

This is the elephant in the room nobody wants to talk about. AI is dead in the water for the supposed mass labor replacement that will happen unless this is fixed.

Summarize some text while I supervise the AI = fine and a useful productivity improvement, but doesn’t replace my job.

Replace me with an AI making autonomous decisions out in the wild and liability-ridden chaos ensues. No company in its right mind would do this.

The AI companies are now in an existential race to address that glaring issue before they run out of cash, with no clear way to solve the problem.

It’s increasingly looking like the current AI wave will disrupt traditional search and join the spell-checker as a very useful tool for day-to-day work… but the promised mass labor replacement won’t materialize. Most large companies are already starting to call BS on the AI-replacing-humans-en-masse storyline.


Part of the problem is the word "replacement" kills nuanced thought and starts to create a strawman. No one will be replaced for a long time, but what happens will depend on the shape of the supply and demand curves of labor markets.

If 8 or 9 developers can do the work of 10, do companies choose to build 10% more stuff? Do they make their existing stuff 10% better? Or are they content to continue building the same amount with 10% fewer people?

In years past, I think they would have chosen to build more, but today I think that question has a more complex answer.


AI says:

1. The default outcome: fewer people, same output (at first)

When productivity jumps (e.g., 5–6 devs can now do what 10 used to), most companies do not immediately ship 10% more or make things 10% better. Instead, they usually:

- Freeze or slow hiring
- Backfill less when people leave
- Quietly reduce team size over time

This happens because:

- Output targets were already “good enough”
- Budgets are set annually, not dynamically
- Management rewards predictability more than ambition

So the first-order effect is cost savings, not reinvestment.

Productivity gains are initially absorbed as efficiency, not expansion.

2. The second-order effect: same headcount, more scope (but hidden)

In teams that don’t shrink, the extra capacity usually goes into things that were previously underfunded:

- Tech debt cleanup
- Reliability and on-call quality
- Better internal tooling
- Security, compliance, testing

From the outside, it looks like: “They’re building the same amount.”

From the inside, it feels like: “We’re finally doing things the right way.”

So yes, the product often becomes “better,” but in invisible ways.

3. Rare but real: more stuff, faster iteration

Some companies do choose to build more, but only when growth pressure is high. This is common when:

- The company is early-stage or mid-scale
- Market share matters more than margin
- Leadership is product- or founder-led
- There’s a clear backlog of revenue-linked features

In these cases, productivity gains translate into:

- Faster shipping cadence
- More experiments
- Shorter time-to-market

But this requires strong alignment. Without it, extra capacity just diffuses.

4. Why “10% more” almost never happens cleanly

The premise sounds linear, but software work isn’t. Reasons:

- Coordination, reviews, and decision-making still bottleneck
- Roadmaps are constrained by product strategy, not dev hours
- Sales, design, legal, and operations don’t scale at the same rate

So instead of “We build 10% more,” you get:

- “We missed fewer deadlines”
- “That migration finally happened”
- “The system breaks less often”

These matter, but they’re not headline-grabbing.

5. The long-run macro pattern

Over time, across the industry:

- Individual teams → shrink or hold steady
- Companies → maintain output with fewer engineers
- Industry as a whole → builds far more software than before

This is the classic productivity paradox:

- Local gains → cost control
- Global gains → explosion of software everywhere

Think:

- More apps, not bigger teams
- More features, not more people
- More companies, not fatter ones

6. The uncomfortable truth

If productivity improves and:

- Demand is flat
- Competition isn’t forcing differentiation
- Leadership incentives favor cost control

Then yes, companies are content to build the same amount with fewer people. Not because they’re lazy, but because:

- Efficiency is easier to measure than ambition
- Savings are safer than bets
- Headcount reductions show up cleanly on financials


One of the most insightful HN comments I've read in years. Thank you! I'm curious about what you've read and are reading.

ha ha, this is the response from Microsoft Copilot when I asked:

If 5 or 6 software developers can do the work of 10, do companies choose to build 10% more stuff? Do they make their existing stuff 10% better? Or are they content to continue building the same amount with 10% fewer people?


There’s a middle road where AI replaces half the juniors or entry level roles, the interns and the bottom rung of the org chart.

In marketing, an AI can effortlessly perform basic duties, write email copy, research, etc. Same goes for programming, graphic design, translation, etc.

The results will be looked over by a senior member, but it’s already clear that a role with 3 YOE or less could easily be substituted with an AI. It’ll be more disruptive than spell check, clearly, even if it doesn’t wipe out 50% of the labor market: even 10% would be hugely disruptive.


I think you're really overstating things here. Entry-level positions are the tier from which replacements for senior positions come. They don't do a lot, sure, but they are cheap and easily churnable. This is precisely NOT the place companies focus on for cutbacks or downsizing. AI being acceptable at replacing unskilled labor doesn't mean it WILL replace it. It has to make business sense to implement it.

If they're cheap and churnable, they're also the easiest place to see substitution.

Pre-AI, Company A hired 3 copywriters a year for their marketing team. Post-AI, they hire 1 who manages some prompting and makes some spot-tweaks, saving $80K a year and improving the turnaround time on deliverables.

My original comment isn't saying the company is going to fire the 3 copywriters on staff, but any company looking at hiring entry-level roles for tasks that AI is already very good at would be silly to not adjust their plans accordingly.


I mean you're half right. Companies seek to automate some of their transactional labor and reduce their overall head count, but they also want a pool of low paid labor to rotate when they do layoffs, which are usually focused on the highest paid slices of the labor chain.

There are a couple of issues with LLMs. The first is that by structure they make a lot of mistakes, and any work they do must be verified, which sometimes takes longer than the actual work itself; this is especially true in compliance or legal contexts. The second is the cost. If a company has a choice to outsource transactional labor to Asia for $3 an hour or spend millions on AI tokens, they will pick Asia every single time. The first constraint will never be overcome. The second has to be overcome before AI even becomes a relevant choice, and the opposite is actually happening: $ per kWh is not scaling like expected.

My prediction is that LLMs will replace some entry level positions where it makes sense, but the vast majority of the labor pool will not be affected. Rather, AI might become a tool for humans to use in certain specific contexts.


Not really though:

1. Companies like savings but they’re not dumb enough to just wipe out junior roles and shoot themselves in the foot for future generations of company leaders. Business leaders have been vocal on this point and saying it’s terrible thinking.

2. In the US and Europe the work most ripe for automation and AI was long since “offshored” to places like India. If AI does have an impact it will wipe out the India tech and BPO sector before it starts to have a major impact on roles in the US and Europe.


1) Companies are dumb enough to shoot themselves in the foot over a single quarter's financials - they certainly aren't thinking about where their middle management is going to come from in 5 or 10 years.

2) There's plenty of work ripe for automation that's currently being done by recent US grads. I don't doubt offshored roles will also be affected, but there's nothing special about the average entry-level candidate from a state school that'll make them immune to the same trends.


To think companies worry about protecting the talent supply chain is to put your fingers in your ears and ignore your eyes for the past 5-10 years. We were already in a crisis of seniority where every single role was “senior only” and AI is only going to increase that.

I actually think the opposite will happen. Suddenly, smart AI-enabled juniors can easily match the productivity of traditional (or conscientious) seniors, so why hire seniors at all?

If you are an exec, you can now fire most of your expensive seniors and replace them with kids, for immediate cash savings. Yeah, the quality of your product might suffer a bit, bugs will increase, but bugs don't show up on the balance sheet and it will be next year's problem anyway, when you'll have already gone to another company after boasting huge savings for 3 quarters in a row.


> Suddenly, smart AI-enabled juniors can easily match the productivity of traditional (or conscientious) seniors, so why hire seniors at all?

I guess we'll see, but so far the flattening curve of LLM capabilities suggests otherwise. They are still very effective with simpler tasks, but they can't crack the hardest problems like a senior developer does.


1. Sure they will! It's a prisoner's dilemma. Each individual company is incentivized to minimize labor costs. Who wants to be the company who pays extra for humans in junior roles and then gets that talent poached away?

2. Yes, absolutely.


The cost of juniors has dropped enough that it's viable now.

You can get decent grads from good schools for $65k.


As far as 1 goes, how do you explain American deindustrialization and, e.g., its auto industry?

And why would it materialize? Anyone who has used even modern models like Opus 4.6 in very long and extensive chats about concrete topics KNOWS that this LLM form of Artificial Intelligence is anything but intelligent.

You can see the cracks happening quite fast, actually, and you can almost feel how trained patterns are regurgitated with some variance, without actually contextualizing and connecting things. More guardrailing like web sources or attachments just narrows down possible patterns, but you never get the feeling that the bot understands. Your own prompting can also significantly affect opinions and outcomes no matter the factual reality.


The great irony is this episode is exposing those who are truly intelligent and those who are not.

Folks feel free to screenshot this ;)


It doesn’t have to replace us, just make us more productive.

Software is demand constrained, not supply constrained. Demand for novel software is down; we already have tons of useful software for anything you can think of. Most developers at Google, Microsoft, Meta, Amazon, etc. barely do anything. Productivity is approaching zero. Hence the corporations are already outsourcing.

The number of workers needed will go down.


Well done sir, you seem to think with a clear mind.

Why do you think you are able to evade the noise, whilst others seem not to? I'm genuinely curious. I'm convinced it's down to the fact that the people 'who get it' have a particular way of thinking that others don't.


The narrative about AI replacing humans is just a way to say 'we became 2x more productive' instead of saying 'we cut 50% of jobs', which sounds better for investors. The real reason for the job cuts is COVID overhiring plus interest rates going up. If you remember, Twitter did its job cuts without any AI-related narrative.

1. You are massively assuming less-than-linear improvement; even linear improvement over 5 years puts LLMs in a different category.

2. More efficiency means needing fewer people, which means redundancy, which means a cycle of low demand.


1. It has nothing to do with 'improvement'. You can improve it to be a little less susceptible to injection attacks, but that's not the same as solving it. If only 0.1% of the time it wires all your money to a scammer, are you going to be satisfied with that level of "improvement"?

> You can improve it to be a little less susceptible to injection attacks

That’s exactly the point: the rapid rate of improvement is far from slow polish. In 10 years it will be everywhere, doing everything.


I think you missed the other half of the sentence. It's not converging on 'immune' no matter how fast it improves.

OK. Let's take what you've stated as a truth.

So where is the labor force replacement option on Anthropic's website? Dario isn't shy about these enormous claims of replacing humans. He's made the claim yet shows zero proof. But if Anthropic could replace anyone reliably today, why would they let you or me take that revenue? I mean, they are the experts, right? The reality is these "improvement" metrics are built on sand. They mean nothing and are marketing. Show me any model replacing a receptionist today. Trivial, they say, yet they can't do it reliably. AND... it costs more at these subsidized prices.


Why is the bar replacing a receptionist? At the low end it will take over tasks and companies will need fewer people; at the top end it will take over roles. What’s the point you are making: if it can’t do bla now, it never will?

Then define the bar. You're OK with all of these billionaires just saying "we're replacing people in 6-60 months" with no basis, no proof, no validation? So the onus is now on the people who challenge the statement?

Why is the bar not even lower you ask? Well I guess we could start with replacing lying, narcissistic CEOs.


LLMs haven't been improving for years.

Despite all the productizing and the benchmark gaming, fundamentally all we've gotten are some low-hanging performance improvements (MoE and such).


It sure did: I never thought I would abandon Google Search, but I have, and it's the AI elements that have fundamentally broken my trust in what I used to take very much for granted. All the marketing and skewing of results and Amazon-like lying for pay didn't do it, but the full-on dive into pure hallucination did.

You’re not supposed to ask such logical questions. It kills the AI vibe.

"We are asking you to pay the subscription, not to think! Think of the investors!"

The cracks are showing, and all the “AI is going to eliminate 50% of white collar jobs” fear mongering is simply signaling we’re in the final stages before the bubble implosion.

The AI bros desperately need everyone to believe this is the future. But the data just isn’t there to support it. More and more companies are coming out saying AI was good to have, but the mass productivity gains just aren’t there.

A bunch of companies used AI as an excuse to do mass layoffs, only to then have to admit this was basically just standard restructuring and house cleaning (e.g. Amazon).

There's so much focus on white collar jobs in the US, but these have already been automated and offshored to death. What's there now is truly survival of the fittest. Anything that's highly predictable, routine, and fits recurring patterns (i.e. what AI is actually good at) was long since offshored to places like India. To the extent that AI does cause mass disruption to jobs, the India tech and BPO sectors would be ground zero… not white collar jobs in the US.

The AI bros are in a fight for their careers and the signal is increasingly pointing to the most vulnerable roles out there at the moment being all those tangentially tacked onto the AI hype cycle. If real measurable value doesn’t show up very soon (likely before year end) the whole party will come crashing down hard.


I just don't agree with this at all.

Right now is the good time for the job market. The S&P is at an all time high.

In the next recession, I expect massive layoffs in white collar work and there is no way those jobs are coming back on the other side.

40-50% of US white collar work hours are spent on procedural, rules-based tasks. Then another large chunk is managing the people doing procedural, rules-based tasks and supporting people doing rules-based tasks. Salary and benefits are 50% of operating costs for most businesses.

Maybe you do something really interesting and unique but that is just not what most white collar workers in the US are doing.

I know for myself, these are the final days of white collar work before I am unemployable as a white collar worker. I don't think the company I work for will exist either in 5 years. It is not a matter of Claude code being able to update a legacy system or not. It is that the tide hasn't really gone out in 15 years and all these zombie companies are going to get wiped out at the same time AI is automating the white collar jobs. Delaying the business cycle from clearing over and over is not a free lunch, it is a bill that has been stacking up for a long time.

On the other side, the business as usual of today won't be an option.

From my own white collar experience, I think if you view procedural rules-based tasks as a graph, the automation of any one task depends so much on other tasks being automated. So it will seem like the automation is not working but at some point you get a contagion of automation. Then so much automation will happen at once.


The openclaw stuff for me is a prime signal we are now reaching the maximal size of the bubble before it pops: the leaders of the firms at the frontier are lost and have no vision, and this is a huge warning signal. E.g. Steve Jobs was always ahead of the curve in the context of the personal computer revolution; there was no outside individual who had a better view of where things were heading.

There isn't gonna be a huge event in the public markets though, except for Nvidia, Oracle and maybe MSFT. Firms that are private will suffer enormously, though.


My hunch is the year of the AI Bubble is the same one as the Linux Desktop
