Because it's the internet + social media. You should assume 60% of it is made up, every time. People are either saying things they know to be untrue, or things they think are true but are not.
Interesting, I'll be sure to check it out. It sounds pretty similar to the tool I built which lets you edit a "plan" in a text editor to assign commits to feature branches - the plan is saved so it can be amended continuously.
True, thanks for sharing. Worth mentioning that's on the "full-stack" part of the framework. It doesn't impact most React websites, but it does impact most Next.js websites.
Thanks, that's what I acknowledged in the message you just replied to.
I'm not blaming anyone. Mostly outlining who was impacted as it's not really related to the front-end parts of the framework that the initial comment was referring to.
> Can it already vertically and horizontally center unknown-beforehand-length multi-line text in a single html element, just like non-CSS table cells could already in 1995?
Non-CSS table cells have never been able to do that – you need a wrapping <table> at minimum for browsers to render it how you want, a <tr> for it to be valid HTML, and <tbody> comes along for the ride as an implied element. So that’s four elements if you want to centre vertically with <td> or <th>. If you wait until the year 2000, then you can get that down to three elements by switching from HTML to XHTML because <tbody> is no longer implied in XHTML.
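For reference, the table version being described looks roughly like this (my sketch of the old approach; note that the height attribute on <table> was never valid HTML, browsers simply honoured it, and <tbody> can be left out because it’s implied):
<table width="100%" height="100%">
<tbody>
<tr>
<td align="center" valign="middle">
Test<br>
Test
</td>
</tr>
</tbody>
</table>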
CSS, on the other hand, has been able to do what you want since 1998 (CSS 2) with only two elements:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<title>.</title>
<style type="text/css">
html,
body {
height: 100%; /* needed so the table can fill the viewport */
}
.outer {
display: table;
width: 100%;
height: 100%;
}
.inner {
display: table-cell; /* behaves like a td */
vertical-align: middle; /* vertical centring */
text-align: center; /* horizontal centring */
}
</style>
<div class="outer">
<div class="inner">
Test<br>
Test<br>
Test<br>
Test<br>
Test<br>
Test
</div>
</div>
(I’m using a <style> element here for clarity, but you can do the same thing with style attributes.)
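(And as an aside, if the question is about what CSS can do today rather than in 1998: flexbox handles the whole thing on a single element, something like the sketch below, which assumes the element has been given a height, here via 100vh.)
<div style="display: flex; align-items: center; justify-content: center; text-align: center; height: 100vh">
Text of unknown, possibly multi-line length ends up centred both vertically and horizontally.
</div>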
Of course they are. The two things aren’t contradictory at all; in fact, one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?
This idea that “AI writes 90% of our code” means you don’t need developers seems to spring from a belief that there is a fixed amount of software to produce, so if AI is doing 90% of it then you only need 10% of the developers. So far, the world’s appetite for software is insatiable and every time we get more productive, we use the same amount of effort to build more software than before.
The point at which Anthropic will stop hiring developers is when AI meets or exceeds the capabilities of the best human developers. Then they can just buy more servers instead of hiring developers. But nobody is claiming AI is capable of that so far, so of course they are going to capitalise on their productivity gains by hiring more developers.
If AI is making developers (inside Anthropic or out) 10x more productive... where's all the software?
I'm not an LLM luddite, they are useful tools, but people with vested interests make a lot of claims that, if true, would mean we should already be seeing the signs of a giant software renaissance... and I just haven't seen that. Like, at all.
I see a lot more blogging and influencer peddling about how AI is going to change everything than I do any actual signs of AI changing much of anything.
How much software do you think was built internally at Google during its first 10 years of existence and never saw the light of day? I imagine they have a lot of internal projects that we don't even know they need.
> The two things aren’t contradictory at all; in fact, one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?
Let's review the original claim:
> AI will replace 90% of developers within 6 months
Notice that the original claim does not say "the number of developers will stay the same, they will just be 10x more effective". It says the opposite of what you claim it says. The word "replace" very clearly implies job loss.
> > AI will replace 90% of developers within 6 months
That’s not the original claim though; that’s a misrepresentative paraphrase of the original claim, which was that AI will be writing 90% of the code with a developer driving it.
> People in the comments seem confused about this with statements like “greenest AI is no AI” style comments. And well, obviously that’s true
It’s not true. AI isn’t especially environmentally unfriendly, which means that if you’re using AI then whatever activity you would otherwise be doing stands a good chance of being more environmentally unfriendly. For instance, a ChatGPT prompt uses about as much energy as watching 5–10 seconds of Netflix. So AI is greener than no AI in the cases where it displaces other, less green activities.
And when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well. For instance, the carbon emissions of writing and illustrating are far lower for AI than for humans:
> Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts.
I think the actual answer is more nuanced and less positive. Although I appreciate how many citations your comment has!
I'd point to just one, which is a really good article MIT Technology Review published about exactly this issue[0].
I'd make two overall points. Firstly, on:
> when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well.
I think this is never the actual trade-off: AI normally generates marketing copy for someone in marketing, not by itself, and even if it does everything itself, the marketing person might stop being employed but certainly doesn't stop existing and producing CO2.
My point is, AI electricity usage is almost exclusively new usage, not replacing something else.
And secondly, on Simon Willison / Sam Altman's argument that:
> Assuming that higher end, a ChatGPT prompt by Sam Altman's estimate uses:
>
> 0.34 Wh / (240 Wh / 3600 seconds) = 5.1 seconds of Netflix
>
> Or double that, 10.2 seconds, if you take the lower end of the Netflix estimate instead.
This may well be true for prompts, but misses out the energy-intensive training process, which we can't ignore if we actually want to know the full emissions impact. Especially in an environment where new models are being trained all the time.
On a more positive note, I think Ecosia's article makes a good point that AI requires electricity, not pollution. It's a really bad piece of timing that AI has taken off initially in the US at a time when the political climate is trying to steer energy away from safer, more sustainable sources and towards more dangerous, polluting ones. But that isn't an environment that has to continue, and Chinese AI work in the last year has also done a good job of demonstrating that AI training energy use can be a lot less than previously assumed.
> AI normally generates marketing copy for someone in marketing, not by itself, and even if it does everything itself, the marketing person might stop being employed but certainly doesn't stop existing and producing CO2.
Sure, but it does it a lot quicker than they can, which means they spend more of their time on other things. You’re getting more work done on average for the carbon you are “spending”.
Also, even ignoring the carbon cost of the human, just the energy their computer equipment uses over the time spent on the task outstrips the AI's energy use.
> This may well be true for prompts, but misses out the energy-intensive training process.
If you are trying to account for the fully embodied cost including production, then I think things tilt even more in favour of AI being environmentally-friendly. Do you think producing a Netflix show is carbon-neutral? I have no idea what the carbon cost of producing, e.g. Stranger Things is, but I’m guessing it vastly outweighs the training costs of an LLM.
I’ll readily admit that I don’t know the first thing about television production, but that doesn’t seem plausible to me. Moving lots of physical objects around takes far, far, far more work than shuffling bits, and a large proportion of that can’t come from sustainable energy sources. Think about things like flying the cast to shoot on location in Lithuania, for instance. Powering and cooling servers isn’t in the same ballpark.
Not for nothing, but the vast majority of people doing the kind of work that’s done on TV & film just so happen to be geographically co-located, for some reason.
It’s possibly worth noting that both activities require humans and even fully operational end-to-end supply chains of rare-earth minerals and semiconductor fabrication. Among many, many other things involved.
I just don’t think we can freely discount that it takes heavy industrial equipment and people and transport vehicles to move and process the raw materials to make LLM/AI tech possible, and that the … excitement has driven those activities to precipitous heights. And then of course transporting refined materials, fabricating end products, transporting those, building and deploying new machines in new data centers around the world, massively increasing global energy demand and spiking it way beyond household use in locales where these new data centers are deployed. And so on and so forth.
I suspect that we will find out someday that maybe LLMs really are more efficient, possibly even somehow “carbon negative” if you amortize it across a long enough timespan—but also that the data will show, for this window of time, that it was egregiously bad across a full spectrum of metrics.
It's this completely unfounded barrage of making shit up about energy consumption, without any tether to reality, that makes the whole complaint about energy use seem like a competition over who can come up with the most ridiculous, most hand-wringing analogy.
Glad to see someone refute the AI water argument, I'm sick of that one. But I do not see how the displacement argument fits. Maybe you can elaborate, but I don't see how we can compare AI usage to watching Netflix for any length of time. I can't see a situation where someone would swap watching Stranger Things for asking ChatGPT questions.
The writing and illustrating activities use less energy, but the people out there using AI to generate ten novels and covers and fire them into the Kindle store would not otherwise have written ten novels, so this is not displacement either.
Well, the point was that ChatGPT doesn't use that much energy. I thought we were comparing similar activities. If we're just going to list things a bored kid can do instead of watching Netflix, we'll be here all day, and most of them use less electricity than ChatGPT.
> And when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well. For instance, the carbon emissions of writing and illustrating are far lower for AI than for humans:
Do you plan on killing that person to stop their emissions?
If you don't use the AI program, the emissions don't happen; if you don't hire a person for a job, they still use the carbon resources.
So the comparison isn't 1000kg CO2 for a human vs 1kg CO2 for an LLM.
It's 1000kg CO2 for a human vs 1001kg CO2 for an LLM.
> For instance, the emission footprint of a US resident is approximately 15 metric tons CO2e per year [22], which translates to roughly 1.7 kg CO2e per hour
Those 15,000kg of CO2e are emitted regardless of what that person does.
The article also makes assumptions about laptops that are false.
> Assuming an average power consumption of 75 W for a typical laptop computer.
Laptops draw closer to 10 W than 75 W (peak power is closer to 75 W, but almost no laptops can dissipate 75 W continually).
The article is clearly written by someone with an axe to grind, not someone who is interested in understanding the cost of LLMs/AI/etc.
It says that, even ignoring the human's carbon use, just their computer use during the task far outweighs the AI energy use. So your response “are you planning on killing the human?” makes zero sense in that context. “They are wrong about the energy use of a laptop” makes more sense, but you didn’t say that until I pushed you to actually read it.
75W is not outlandish when you consider the artist will almost certainly have a large monitor plugged in, external accessories, some will be using a desktop, etc. And even taking the smaller figure, AI use is still smaller.
The human carbon use is still relevant. If they were not doing the writing, they could accomplish some other valuable tasks. Because they are spending their time on things the AI can do, somebody else will have to do those other tasks or they won’t get done at all.
75 W is nuts, actually. I measured my _desktop_ setup about 10 years ago, including two monitors, and idle was around 35 W. It also doesn't make sense to include the idle draw of all the peripherals, since you would be using them for ChatGPT as well.
> Yeah, now they are part of Anthropic, who haven't figured out monetization themselves.
Anthropic are on track to reach $9BN in annualised revenue by the end of the year, and the six-month-old Claude Code already accounts for $1BN of that.
Not sure if that counts as "figured out monetization" when no AI company is even close to being profitable -- bringing in some money while running far more expensive setups is not nothing, but it's also not success.
Monetisation is not profitability, it’s just the existence of a revenue stream. If a startup says they are pre-monetisation it doesn’t mean they are bringing in money but in the red, it means they haven’t created any revenue streams yet.
I built something very similar a few months back and I just asked an LLM. You could optionally specify a CSS selector for HTML or JMESPath for JSON to narrow things down, but it would default to feeding the entire textual content to the LLM and just asking it the question with a yes or no response.