
AI is fine. The hype is annoying. What's even worse though are the incredible amounts of money and energy that are being thrown at it, with no regard for the consequences, in times of record inequality and looming climate apocalypse.

AI is the red herring that'll waste all our attention until it's too late.



AI is one of the reasons climate change is accelerating, which is another entry in a long list of reasons to hate it.


I'm not sure I follow. AI barely consumes energy compared to other industries. Instead of focusing on the heavy hitters first, wasting time on the climate impact of AI doesn't seem useful.


This is wrong. AI uses ~4% of the US grid, and projections are that it will grow to 10%+ in the next 6 years.

And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.


Compare that to ~30% of all energy use for transportation. So approximately 40% × 4% = 1.6% vs 30%. I find your correction to be more wrong than the initial statement.

> And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.

Emissions in 2018 were ~5,250M metric tons and in 2024 ~4,750M [0]. That is a reduction of about 10% of total emissions. Without going into calculations of green electricity and such, it's still safe to say AI using 10% of the grid would not completely wipe out that reduction.

[0]: https://www.statista.com/statistics/183943/us-carbon-dioxide...
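A quick back-of-the-envelope check of the figures in this subthread (the ~40% grid share and the Statista emissions numbers are the commenters' own estimates, not verified here):

```python
# Sanity check of the figures quoted above.
grid_share_of_total_energy = 0.40   # commenter's estimate: ~40% of US energy is electricity
ai_share_of_grid_now = 0.04         # claimed current AI/data-center share of the grid
ai_share_of_grid_future = 0.10      # claimed projection for ~6 years out

ai_share_of_total_now = grid_share_of_total_energy * ai_share_of_grid_now
ai_share_of_total_future = grid_share_of_total_energy * ai_share_of_grid_future

emissions_2018 = 5250  # million metric tons CO2 (Statista figure cited above)
emissions_2024 = 4750
reduction = (emissions_2018 - emissions_2024) / emissions_2018

print(f"AI now: {ai_share_of_total_now:.1%} of total energy")   # 1.6%
print(f"AI projected: {ai_share_of_total_future:.1%}")          # 4.0%
print(f"2018->2024 emissions reduction: {reduction:.1%}")       # 9.5%
```

So on these numbers, even the projected AI load (~4% of total energy) is smaller than the ~10% emissions reduction it is claimed to wipe out, though the comparison ignores the carbon intensity of the new capacity.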


> Compare that to ~30% of all energy use for transportation

Transportation, especially ALL transportation, does a LOT. You're looking for ROI, not the absolute values. I think it's undeniable that the positive economic effect of every car, truck, train, and plane is unfathomably huge. That's trains moving minerals, planes moving people, trucks transporting goods, and hundreds of combinations thereof, all interconnected. Literally no economic activity would happen without transportation, including the transition to green energy sources, which would itself improve the emissions from transportation.

I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.

Also, I'm not sure about your math. 4% would be 4% of the whole like in a pie chart, not 4% of the remainder after removing one slice. 4% AI, 30% transportation, 66% other. I don't know where that 40% is from.


> Also, I'm not sure about your math. 4% would be 4% of the whole like in a pie chart, not 4% of the remainder after removing one slice. 4% AI, 30% transportation, 66% other. I don't know where that 40% is from.

40% is for energy use in the US in the form of electricity. It was a rough number that I pulled from my memory. It is roughly right though. Check https://www.eia.gov/energyexplained/us-energy-facts/

AI is not currently 4% of the energy market of the US. Only the grid. I should have been more clear about the ALL ENERGY vs GRID distinction.

> I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.

I really made no statement on the value of doing things. Transportation is obviously very valuable. I just wanted a more fact based conversation.


> Compare that to ~30% of all energy use for transportation. So approximately 40%*4% = 1.6% vs 30%. I find your correction to be more wrong that the initial statement.

I don't follow. The comparison is 30% of energy use for transportation vs 4% for AI, and soon 30% for transportation vs 10% for AI.


The grid is not all energy use. To get the numbers on an even playing field, you need to account for the fact that only ~40% of energy goes through the grid.

And that leaves roughly a 7.5:1 ratio (30% vs 40% × 10% = 4%) assuming projections run true. It very well might be possible to get efficiency wins from the transportation sector that outweigh growth in AI.
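Using the thread's own numbers (electricity ~40% of total energy, AI projected at 10% of the grid), the transportation-to-AI ratio works out closer to 7.5:1 than 6:1:

```python
transport_share_of_total = 0.30   # ~30% of all US energy use (commenter's figure)
ai_share_of_total = 0.40 * 0.10   # 40% grid share x 10% projected AI share = 4%
ratio = transport_share_of_total / ai_share_of_total
print(round(ratio, 1))  # 7.5
```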


The 4% figure is for data centres generally afaik, and it's pre-AI

Of course nearly all of that growth is going to be AI


Pretty large amounts of energy go towards training large language models. Running them is also a non-negligible energy cost at scale.

But yeah, there's way worse industries out there when it comes to climate change impact.


? Am I misunderstanding the push for nuclear energy and record energy prices in locales with new “data centers”?


Before large models, things were starting to move toward micro-VMs, lean hardware, and Firecracker-based cloud platforms running thin containers.

Then came the AI buzz, and now we are building gigafactories. The "giga" stands for gigawatt usage, no lesser target.


Which is why talk about AI datacenters typically involve energy supply constraints, and possibly the need to build power plants along with it.

It is, of course, because it barely uses any energy.


> AI is one of the causes that climate change is accelerating, which is another in a long list of reasons to hate it.

If you want to point at causes of climate change, look no further than adtech. It's the driving force behind our overconsumption.

And it has perhaps an even longer list of reasons to hate it.


AI and Adtech are the same damn industry


People sure don't care about it anymore, and that coincided with the rise of AI. There's barely any mention of climate change compared to 5+ years ago. I really think this is all about keeping the capitalist system from imploding under so much debt (the next big thing needs to happen to keep the growth going).


climate change was an important issue when they were trying to peddle EVs and solar.


They == the lizard people, I assume?


VCs, now that AI will solve everything, don't need to worry about climate change!!?!


Seeing this kind of populist misinformation/bikeshedding on HN is particularly disappointing.


So then explain to me where I wrote misinformation?


The EPA repealed its 2009 conclusion that greenhouse gases warm the Earth and endanger human health and well-being.

So this is not a good reason to oppose AI. Now the sheer energy it requires does mean we might want to go nuclear though.

Natural gas is nice though because it does pollute the air far less than coal.

You might argue the EPA only repealed that because of political agendas, but the same argument could be made for why it was passed.

A lot of people got very rich off the fear mongering from climate alarmists.


Hmm, it seems pretty clear that climate is getting hotter, so it seems natural for some people to be worried about what will happen to the planet in a few decades (me for one).

And, you may be right, it may not be that big a deal and that we're being alarmists, but it seems like we currently have the tools to slow it down greatly. Why not be on the safe side and use them?

... but to be honest, guessing my opinion won't sway you in any way, still thought I'd try. thanks!


It’s really about the cost/benefit analysis.

The value of plowing ahead and using more energy is worth far more than making sure Florida doesn’t lose some coastline.

The presumptions that annoy me about the alarmists are that they completely negate human agency and ingenuity, and they ignore the economic cost of many of the proposed plans.

Natural gas is far better than coal and should be encouraged rather than condemned. Nuclear power is best of all, is the cleanest and safest energy, and yet is hardly ever the first choice of the alarmists.

I’d rather spend double the energy unlocking breakthroughs in science with the help of AI, and address the problems when they come. I don’t go out of my way to lower my “carbon footprint”, but I also don’t just do things that are wasteful and deliberately harmful to the environment.

AI making us forget how to think for ourselves is a far bigger risk to mankind than climate change. Thanks.


Agree that you need to balance costs with benefits, but nowadays, solar and wind are often the cheapest options (southern states or states with lots of wind). And nuclear is an option that even some staunch environmentalists support these days.

Yeah, I don't think most people who support battling climate change are extremists. We just believe it's a big problem, and, to put it in monetary terms, having to deal with major changes in climate could cost the world tens of trillions of dollars by some scientists' predictions. It's like any problem: doing relatively small fixes now could save enormous amounts of time and money later down the line. That seems like it would probably be a good use of our efforts.


Yeah I’m all for doing what makes sense.

I probably just overreact and judge too quickly certain statements from my experiences of people who act like I’m destroying the earth because I have more than 3 kids.

I appreciate reasonable people though, and I should not assume everyone is a crazy alarmist because they have any concern, so I apologize.


Thanks, much appreciated:) ...

... and not just giving you lip service, but I do find the far left to have gone too far themselves (I'm a moderate independent myself). Their assuredness that everything they believe is the only correct way to think is frustrating (they are often the least understanding). It seems if you step out of line and say anything against their beliefs, you're a part of the far right.

But, feels like things are shifting back to the middle for various reasons. Think this is a good trend


> AI is fine. The hype is annoying.

I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"

In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones that think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since they now rely on searching for answers rather than what was in its training data.

We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it to me in a digestible format than having to search by hand and combing through a dozen irrelevant results to find the one bit of information I need.

[0] For the purpose of this comment, this subset is meant to be detraction based on the quality of the product, not the other criticisms like copyright/content theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.


Look closely at what the detractors say. Most of them are using AI themselves and are just pushing back on the hype or other ludicrous claims, and that's a good thing. Is the current crop of gen AI anything near AGI? Is it worth the current valuation? Can a company fire most staff and run on gen AI? We may see the economy completely crash, not because AI takes over but because of bad investments, hype, and greed.


The same detractors I know today that use AI, said that LLMs were useless slop generators that would never amount to anything just a year or two ago.

Detractors, doomers, and techno-pessimists have got to be the most consistently wrong group in history. https://pessimistsarchive.org/


I've made tens of thousands of dollars so far by day trading puts on NVDA. As a detractor et al, at least I'm putting my money where my mouth is eh


I think the techno-pessimists were right about NFTs.


Everyone and their grandma weren't using NFTs to get real work done.


I don't think it's worthless. It can greatly speed up coding. And learning foreign languages. And many other things.

But I do think humanity is worse off because of it. So I'm a detractor in that way. :)


> Hallucinations are exceptionally rare now, since they now rely on searching for answers rather than what was in its training data.

Well, I wouldn't go that far, but the hallucinations have moved up to being about more complicated things than they used to be.

Also, I've seen a few recent ones that "think" (for lack of a better word) that they know enough about politics to "know" they don't need to search for current events to, for example, answer a question about the consequences of the White House threatening military action to take Greenland. (The AI replied with something like "It is completely inconceivable that the US would ever do this").


> a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then.

On the contrary. I update my opinion all the time, but every time I try the latest LLM it still sucks just as much. That is why it sounds like my opinion hasn't changed.


> certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?)

I mean, you can get mad at people you made up in your head, that's a thing people do, but this caricature falls in the same comforting bucket as "anyone who doesn't like <thing I like> is just ignorant/stupid" and "if you don't like me you're just jealous".

Maybe non-straw people have criticisms that aren't all butterflies and rainbows for good reasons, but you won't get to engage with them honestly and critically if you're telling yourself they're just ignorant from the start.

For example, I will bet that non-straw people will take issue with this, and for good reasons:

> Hallucinations are exceptionally rare now


I personally believe that LLMs have advanced immeasurably since ChatGPT came out, which was itself a world-historical event. I use AI daily in ways that enhance my productivity.

I say all of that to establish that I'm not a reflexive critic when I tell you, hallucinations are absolutely not exceptionally rare now. On multiple occasions this week (and it's only Tuesday!) I've had to disprove a LLM hallucination at work. They're just not as fun to talk about anymore, both because they're no longer new and because straightforward guardrails are effective at blocking the funny ones.


This very comment is measurably more harmful than any AI criticism that annoys you - someone will read this and assume it's appropriate to accept whatever bullshit Claude generates at face value, with terrible consequences.

In contrast, what harm do those detractors cause? They don't generate as much code per hour?


By that logic we should all live in air-filtered bubbles. Anyone denying this is causing harm. After all, people might die if you let them out of their air-filtered bubble!

The "harm" (if you can call it that) is clear: detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced by our amazing QoL today compared to any historical lens).


that’s a stretch and taking a measured approach to change is valid


> detractors slow the pace of progress

Considering our climate, political and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.


That's a pretty poor straw man - the issue is the amount of harm caused, not that there is a potential for some minuscule amount.

Also we need detractors because if we race into any technological advance too quickly we may cause unnecessary harm. Not all progress is without harms, and we need to be responsible about implementing it as risk-free as possible.


You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.

This is such a perfect example of the mania behind this rollout.

There's no way you can make the financials work here compared to Atlassian spending the same millions spent on AI infrastructure on building better search in Confluence instead. Confluence search SUCKS, but that's just a lack of focus (or resources) on building a more complex, more robust solution. It's a wiki.

Either way, making a more robust search is a one-time cost that benefits everyone. Instead, you're running a piece of software that funnels money directly to Anthropic's bank account, to the data centers, and to the hyperscalers. Every single query must be re-run from scratch, costing your company a fortune that, if not managed properly, will come out of money that could be spent elsewhere.


And what is using Confluence in the first place? Your MacBook Pro is faster than a supercomputer from 20 years ago. As we make compute cheaper, we find ways to use it that are less efficient in an absolute sense but more efficient for the end user. A graphical docs portal like Confluence is a hell of a lot easier to use than Emacs and SSH to edit plain text files on an 80-character terminal. But it uses thousands of times more compute.

It seems ridiculous right now because we don’t have hardware to accelerate the LLMs, but in 5 years this will be trivial to run.


I'm confused by your analogy. A wiki server is extremely efficient to run and can be hosted from a tiny little Raspberry Pi. A search engine can be optimized to return results in near O(1) time. You can even pull up and read results on a very old computer. All of the concerns around cost and resource efficiency can be addressed, as all of this is a solved problem.

Even with an LLM agent getting cheaper to run in the future, it's still fundamentally non-deterministic, so the ongoing cost of a single exploratory query can never get anywhere near as cheap as running a wiki with a proper search engine.


> You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.

If I could pay a world class architect $1.50 to give me tips on how to maximize sunlight in my loft I would.

Would it be nice if confluence just had a robust search that had a one time cost and then benefited everyone thereafter? Sure, but that's not the current reality, and I do not have control over their actions. I can only control mine.


On Reddit there are two subreddits that are mirrors of each other, r/accelerate and r/betteroffline. The people in those subs go there for dopamine hits: one for how AI is going to transform their lives and lead to a work-free future, the other for how AI is worthless and how everyone (except them) is being fooled. They are the same people with opposite views. The people in either sub don't recognize this.


This is going to sound flippant, but truly, I imagine most people find the group that disagrees with their take annoying as well.


The detractors are a lot less numerous and certainly a lot less preachy than the ones on the hype train.

AI is alright. It's moderately useful, in certain contexts it speeds me up a lot, in other contexts not so much.

I also think that the economics of it make no sense and that it is, generally, a destructive technology. But it's not up to me to fix anything, I just try to keep on top of best practices while I need to pay bills.

The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.


> The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.

We're in "too big to fail" territory. If we had handled the recession we were heading towards/in years ago, instead of letting AI hype distract and redirect massive amounts of investment, attention, and labor from elsewhere, we might have been in a better position.


On the flip side, if all this slop is floating around, and AI services do become untenable, think of all the immediate jobs that will open up to fix and maintain all the slop that's being thrown around right now. The millions of dollars of contracts spent to use these LLMs will be redirected back to hiring.

Though, my cynical take is that the investor class seemed dead-set on forcing us all to weave LLMs deep into our corporate infrastructures in a way that I'm not too sure it will ever "disappear" now. It'll cost just as much to detangle it as it was to adopt it.


> Hallucinations are exceptionally rare now

The way we talk about "hallucinations" is extremely unproductive. Everything an LLM outputs is a hallucination. Just like how human perception is hallucination. These days I pretty much only hear this word come up among people that are ignorant of how LLMs work or what they're used for.

I've been asked why LLMs hallucinate. As if omniscient computer programs are some achievable goal and we just need to hammer out a few kinks to make our current crop of english-speaking computers perfect.


Claude Opus 4.6 regularly makes up shit and hallucinates. I'm not a detractor by any means but "exceptionally rare" is fantasyland.


Can vouch for this, plus, when it does work, stuff can take forever. Then, if I let it unsupervised, higher risk of doing the wrong thing. If I supervise it, then I become agent nanny.


I have been experiencing it too.

I honestly am finding Codex considerably better, as much as I despise OpenAI.


I use the latest Codex with GPT-5.4 and Claude Opus every day. They hallucinate every day. If you think they don't, you are probably being gaslit by the models.


It's a hail-mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues. If not, well, we've accelerated a lot of our worst problems (global warming, big tech, wealth inequality, surveillance state, post-truth culture, etc.).


> If we get computers to think for us, we can solve a lot of our most pressing issues

If AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?


> It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues.

All but one of them simultaneously, in fact. The one being left out: wanting to keep existing.


What are you talking about? AGI is practically a prerequisite for transhumanism, and, well, not dying.

If you want to "keep existing" AGI happening is probably your only hope.


Aligned AGI, yes. Unaligned AGI is a fast way to die.

If you want to keep existing, slow down, make sure AGI is aligned first, and go into cryo if necessary.

If you don't want to keep existing, that doesn't mean you get to risk the rest of us.


I highly doubt OP was talking about immortality


This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?


> If we get computers to think for us, we can solve a lot of our most pressing issues

How, exactly, does more and better tech help with the fundamentally sociological issues of power distribution, wealth inequality, surveillance, etc? Are you operating on the assumption that a machine superintelligence will ignore the selfish orders of whoever makes it and immediately work to establish post-scarcity luxury space communism?


In 2-3 decades, 30% of the world population will be over 60 years old (~3 BILLION seniors). We don't have an economic model for that, nor does Gen Z want to all be Personal Support Workers while paying rent. Nvidia only makes 6 million data center GPUs a year; Huawei makes 900k. We need 10 to 100x more to automate enough just to hold civilization together. Amazon built data centers with near-zero water use, but they used 35% more electricity overall. So the problem can be solved, but we need to move past the whole scarcity mentality if we're going to actually make the planet nice.


> 30%

That's not accurate. The estimate is about 2 billion in 25 years.

https://www.who.int/news-room/fact-sheets/detail/ageing-and-...

We also have models for how that works at a country level because we have countries that have far exceeded that.

And the vast majority of 60 year olds are still self sufficient and economically productive.

Average global retirement age is around 65 and in most countries it’s creeping towards 70. And percent of world population over 70 looks much more manageable over the time span we can realistically model.
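As a rough check on that correction: taking the ~2.1 billion 60+ figure from the WHO projection for 2050, and assuming the commonly cited ~9.7 billion UN medium-variant world population projection (an assumption here, not stated in the thread):

```python
seniors_60_plus_2050 = 2.1e9     # WHO projection for people aged 60+ in 2050
world_population_2050 = 9.7e9    # UN medium-variant projection (assumption)
share = seniors_60_plus_2050 / world_population_2050
print(f"{share:.0%}")  # 22% -- closer to the corrected figure than to 30%
```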


> incredible amounts of ... energy

So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure[1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.

[1] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...


It's not just the absolute expenditure. It's the type of expenditure.

https://www.selc.org/news/resistance-against-elon-musks-xai-...


[flagged]


Those links are about air pollution, not carbon emissions. You're engaging in some political posturing of your own.


Why are you lying? From literally the first paragraph of the CFR article:

> China is the world’s largest source of carbon emissions, and the air quality of many of its major cities fails to meet international health standards.


But the focus of the article is on China's air quality.

As for carbon emissions: https://news.ycombinator.com/item?id=45108292

And even though China emits more carbon annually than the US today, the US and Europe are still ahead in cumulative emissions: https://ourworldindata.org/grapher/cumulative-co2-emissions-.... Cumulative emissions are the carbon that's already in our atmosphere and causing heating today. If you want to apportion "blame" for climate change, then the US is 25% responsible, Europe is 30% responsible, and China is 14% responsible as of 2023. And India is only 3.6% responsible.

China's high emissions today power a manufacturing industry that has made cheap decarbonization via solar and batteries a realistic prospect. That's a much better use of their current emissions compared to what the developed countries do with theirs.


China has a large population and does the dirty work of manufacturing for much of the rest of the entire world.

China has done more for renewable energy solutions than any other country, and its per capita consumption patterns are lower than those of many G20 countries.

In a fair representation of the data, China's high total carbon dioxide output should be assigned to its source: the people across the globe with high personal consumption who have offshored their industry to China.


Interesting that you accuse your parent of political posturing at the end of your post, which indeed contains plenty political posturing.


1% of worldwide energy expenditure is massive, incredible amounts of energy in fact.


climate change is a hoax, but it's also disingenuous to pretend like ai delivers even an infinitesimal amount of the value of either agriculture or mining. Global population approaches zero without either of those things and if you deleted ai, no one would ever notice.


[flagged]


No, it’s… fine. Useful in a limited capacity. Not the machine god, but not machine Satan either. The reality is kind of boring.


This summarizes mostly how I feel about it. It's a tool, like any other tool we have advanced since the beginning of human civilization.

Machine tools replaced blacksmiths

CNC machines replaced manual machines.

Robots replaced CNC machine tenders

CAD replaced draftsmen (and also pushed that job onto engineers (grr))

P&P robots replaced human production lines.

The steam train replaced the horse and cart

This is a tale as old as time itself


What do LLMs replace, pray tell? More like moving from a screwdriver to a drill, rather than replacing the carpenter all together.

Also note that there are inventions that may “replace” some part of a process, but actually induce a greater demand for labor in that process. Take the cotton gin, for example, which exploded the number of slaves required to pick cotton.


Those were deterministic rather than stochastic


Exactly. People love to compare LLMs to power tools for carpenters and smiths. But if my miter saw had a 20% chance to produce cuts at a 45 degree angle when I have it set for 90, I would throw it out so fast I would leave Looney Tunes style tracks. A tool which only sometimes does its job is worse than no tool at all.


To be rather pedantic, your miter saw probably doesn't cut exactly 90 degrees, especially if you reset it. LLMs are low-accuracy for sure, but so are humans. I am not saying AI is going to replace us all entirely; my broader point is that these tools will be another tool that changes the market share of jobs.


The low accuracy of human guesstimation is why we use deterministic tools, not so-called tools that imitate our ability (or worse).

Tools are not replacements for people! Tools are enhancements.

AI is an attempt to replace people with something unhuman.


This isn't even our first AI hype cycle. That happened in the late 70s-80s. Every lab and agency needed Lisp machines to teach computers how to identify Russian missiles—or targets. The "GOFAI" techniques did not live up to the expectations of them, but they settled into niches where they were tremendously useful, and life went on. The same will happen with today's matmul-as-a-service AI.


I don't see the threat from AI as capitalist at all, but more so feudalist. I mean, if things go in the direction of the worst-case scenario. It seems like the power potential transcends the problems of capitalism entirely.

But for now it's strictly hypothetical. Nothing I'm doing with AI matters enough to really make any statements about a broader scale in my field, let alone in entire economies.


Capitalism is feudalism but with raw generational wealth instead of generational wealth with divine right characteristics.


I see some overlap, but I think it's more complex than that. If we conflate the two so easily they lose meaning. Certainly, some people have that experience under capitalism. I think there are systemic failures which lead to life experiences that are probably not all that different from some peoples' experiences in feudal society, both at the top and bottom of the hierarchy.

The more I think about it though, I'm not sure feudalism is the right analogy. Serfs had a purpose and were depended upon. In a society where AGI is in the hands of a few, it seems reasonable to believe that there wouldn't be a need for serfs at all. Labour would become utterly irrelevant. You'd have no lord to be bound to. You'd be unnecessary.

I imagine the transition there would be some brutal form of capitalism, but the destination would not be feudalism. I don't think we have a historical analog for that hypothetical destination.


I see your point; in fact I am against the term Neo-colonialism for this exact reason. Neo-colonialism is bad, but next to the horrors of actual colonialism, it is a walk in the park. And naming economic policies which artificially increase a foreign country's dependency on your economy after a policy of mass extraction, neglect, violence, and even genocide really removes the horrors from the latter.

However it has been over 500 years since feudalism. People today are still very much living with the consequences of colonialism, some people are in fact still living under colonial rule (notably in Western Sahara and Palestine). The consequences of feudalism have long passed. I think it is fine actually to conflate the horrors of capitalism with the horrors of feudalism. 500 years ought to be long enough.


Capitalism enables feudalism.

We were supposed to put "brakes" in place, like anti-monopoly laws, but they've never been effective, because capitalists quickly found "that one loophole": bribing politicians.


Capitalism is just feudalism that works for the merchant class


If we wanna go full-on Marxist analysis, it is an attempt by the capitalist class to finally rid itself of its dependence on labor and labor's pesky demands like sick leave and fair wages.

Through that analysis, one can also explain why the managerial caste is so obsessed with it: it is nothing less than an ideological device. One can also see this in the actual deification happening in some VC circles and their belief in AGI as some sort of capitalist savior figure.

I see the point and don't disagree with it, but I find that framing is not the most compelling to the audience here...


Yeah. I often get crickets here when I talk along those lines. Can't tell if it's apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.


We will all be shocked at the rug pull after it has finished training on all our high-quality feedback for code it has written.


This is correct at the firm level and breaks down at the aggregate level, which is where it gets interesting.

At the firm level, automating away labor costs is obviously rational. But capital in aggregate can't actually rid itself of labor, since labor is where surplus value comes from. A fully automated economy would be insanely productive and generate basically no profit. So the capitalist class pursuing this logic collectively is, without knowing it, pursuing the dissolution of the system that makes them the capitalist class.

You don't have to buy any of that to notice the more immediate mechanism though: AI doesn't need to actually replace workers to discipline them. The credible threat of replacement is enough to suppress wages, justify restructuring, and extract more from whoever's left. That's already happening and requires no AGI.


It's the other way around. AI developed in a Marxist society could be a useful tool. AI developed in a capitalist society will be a tool for control and enrichment of the few.

You just have to look at 99% of what AI is used for today: disinformation and fake porn videos.


AI is more likely to destroy capitalism than it is to increase inequality.

Ten years ago, what would it have cost you to build a Jira clone / competitor? Today one person can do it in a week, at least for the core tech.

In a year, only the very largest companies will pay for that kind of infrastructure tooling.

We’ve just started seeing the democratization of software and the capitalists are terrified.


It won't. Because once the big companies that own the models and the GPUs see that you're competing with them, they will lock you down and extort funds from you. That's what modern capitalism is about.


I just don't know how to explain that you won't be destroying capitalism with AI. You have a subscription.


I don’t know how to explain that the difference between barter systems, stored value, and truly communal property has nothing to do with capitalism.

People pay for things in all economic models. It’s bizarre to think that means everything is capitalism.


How did HN become this kind of website?


Because AI is attacking, plagiarizing, competing with, and destroying the most common industry of people here on HN, so suddenly it mattered more to people who were previously unaffected.

Some people have been concerned with this kind of politics all along. Some people are realizing they should be now, because of AI. And that's okay; both groups can still work together.


I went to a conference and people were suggesting nationalizing AI companies, so it's basically everywhere.


Same way we turned internet into a public utility? Wait, did we do that?


The parent comment is a pretty measured take. What’s your problem with it?



