What is the "ELI5" summary of the practical limits & scaling laws that govern robotics?
The current "futurist" vision is one of humanoid robots taking over many/most jobs done by humans today, but - as someone who routinely hires human welders & assemblers - the dexterity required for most ad-hoc tasks seems many, many decades (if not more?) away from what I see robots do -- yes, even the fancy Chinese jumping ones.
This has led me to think one of two things:
1. The robotics revolution will not come. It's predicated on the idea that advances in robotics will follow a curve of the same shape as advances in compute/ai, which will not happen. OR...
2. There has been some paradigm-shift or some breakthrough that has put robotics improvement on a new curve.
To an outsider, what I see in robots is not categorically different than, like, the Sony AIBO dog in 1999. It's significantly better, of course, but is it really that different? (Whereas what we can do in compute-land today is categorically different because of the transformer model breakthrough.)
So:
1. Have there been any breakthroughs that would lead us to believe that a robot will be able to like, look under a table to adjust a screw?
2. What are the scaling laws & practical limits to present-day robotic dexterity? Is it materials? Energy density? What?
3. What is the real rate of improvement along these key dimensions? Are robots improving linearly? Geometrically? Exponentially?
4. Or should I keep discounting robotics until we get our first robots that are made of meat? That, I'd believe, would result in exponential change!
On your earlier point -- both 1) and 2) are true. True human-level dexterity is very far off (a few decades, surely); it would require further advancements in hardware, learning approaches, etc. Recent approaches provide a glimmer of hope, and maybe we can have some intermediate robots -- to be honest, even Waymos and Teslas are robots, and we will see much more of such robots with vision, working with humans, etc. in narrow settings. The Chinese dancing robots are examples of this.
I'm fairly ignorant on this, but robots that are teleoperated seem completely capable of doing basically any household task and using tools like screwdrivers. It may be slow, sure, but autonomy and speed seem like a solvable software problem.
There are also endless welding and assembly robots, and have been for a long time. Sure, they're huge and weigh 3 tons or whatever, but it's not like we're building humanoid robots to do work like that anyway.
Consider 2 welding systems, a hungover human on a 3 legged ladder with a scratched up welding helmet doing an overhead TIG weld holding the filler rod a foot away from the weld pool, and a 6 DOF Kuka bot doing a weld in the same position on a completely rigid work piece clamped down to a precision machined fixture table which is clamped down to a precision machined floor that the robot is also mounted to.
The human system weighs 250 lbs and can be placed anywhere. Let's ask what it takes to walk the factory robot in that direction. First, let's have the work piece be moving, say on a conveyor belt. The old robotics way of thinking would be to introduce this variable into the programming of the bot/station: create simple sensors for either the work piece or the conveyor itself to indicate to the programming loop where the part is with as little error as possible, and continue to keep accuracy while maintaining as much precision as possible using rigidity (which equals mass and space). Now the whole system is functionally 7 DOF, and you add in the error and failure modes of the 7th DOF (the conveyor system) and accumulate some error.

Now imagine that instead of a conveyor the part is on a rolling table with random Z height, and so is the robot arm, and you can see this will fall apart. You can't fight this battle with deterministic programming, machining precision, and rigidity. Obviously, if you extended this system to a humanoid robot on a 3-legged ladder, which would be 30+ DOF between the weld and the ground, it couldn't possibly work.
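To make the DOF-error point concrete, here's a toy sketch (all numbers invented for illustration, not measured from any real robot): treat each joint as contributing roughly link_length × angular_error of tip displacement, and combine independent errors as a root-sum-of-squares. The takeaway is that one sloppy DOF dominates the whole chain.

```python
import math

def end_effector_error(joints):
    """joints: list of (link_length_mm, angular_error_rad) tuples.
    Returns an approximate end-effector error in mm, combining each
    joint's contribution as a root-sum-of-squares."""
    return math.sqrt(sum((L * e) ** 2 for L, e in joints))

# A rigid 6-DOF arm on a fixed base: tight encoders, short links.
rigid_arm = [(300, 0.0002)] * 6
# The same arm plus a conveyor (a 7th DOF) with a much sloppier sensor.
with_conveyor = rigid_arm + [(1000, 0.002)]
# A hypothetical 30-DOF humanoid-on-a-ladder chain with compliant joints.
humanoid = [(300, 0.002)] * 30

for name, chain in [("6-DOF fixed", rigid_arm),
                    ("7-DOF + conveyor", with_conveyor),
                    ("30-DOF humanoid", humanoid)]:
    print(f"{name}: ~{end_effector_error(chain):.2f} mm tip error")
```

Under these made-up numbers the rigid cell sits around 0.15 mm, adding one sloppy conveyor DOF jumps it to ~2 mm, and the 30-DOF chain is worse still, which is why the old answer was always "more rigidity, more mass" rather than more joints.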
But back to the hungover human: why does this system work so well? The human has very good eyes and a very good internal IMU. They are looking at the end of the filler rod and the weld pool, and even though the information isn't that good coming through the scratched welding helmet, they can compensate for all that error and run an internal function that holds the torch and filler rod in the optimum position to do a good TIG weld while ignoring or automatically adjusting for tons of other variables. Now, to address your original question, in our system:
1. Are current cameras good enough to get an equivalent amount of information about the weld that the hungover welder has? Yes; in fact, they can get more information than a human can.
2. Are IMUs as good as what a hungover human has? Hard to really know, but it seems like it, though if you need many IMUs attached to different limbs on a robot, it's probably not as good as a human's yet.
3. Is the power density of actuators and power storage good enough to approximate this 250 lb system of a human on a ladder, with some combination of DOF that reaches a sufficient range of motion to emulate the human's hands (whether the robot looks like a human or not)? Yeah; plus, in this case, the welding machine is plugged in for the human anyway, so that system is already attached to mains power.
So given all this, it seems like the limiter is just software, which is the bull case for this prospective robotics revolution.
If by "survival" you mean surviving against a bloodthirsty regime that killed 10,000 people in January alone, then yes: the people of Iran are fighting for survival.
That's pure Israeli propaganda, and as you see there is absolutely no "up rising" from Iranian citizens. They are however, uniformly against Israel and the US given that we started this illegal war by bombing a girls school and murdering over 170 children. Much like Israel has been doing since its creation in 1948.
"there is absolutely no "up rising" from Iranian citizens"
This is an extremely bold lie. There have been many uprisings by Iranians against their horrible government that are extremely brutally suppressed by said government.
Iranian protesters will be treated as enemies if they support Tehran's foes, the country's top police officer warned, as the Middle East war sparked fears mass anti-government rallies could reignite.
"If anyone comes forward in line with the wishes of the enemy, we will no longer see them as merely a protester, we will see them as an enemy," said national police chief Ahmad-Reza Radan in comments aired by state broadcaster IRIB late on Tuesday.
"And we will do to them what we do to an enemy. We will deal with them in the same way we deal with enemies," he added.
"All our forces are also ready, with their hands on the trigger, prepared to defend their revolution."
His warning comes after the government cracked down on anti-government protests in January, which had been sparked a month earlier by economic grievances in the sanctions-hit country.
The authorities deemed the protests to be "riots" and Radan at one point issued an ultimatum to protesters to hand themselves in or face the full force of the law.
Iranian authorities acknowledge more than 3,000 deaths in the unrest, including members of the security forces and bystanders, but say the violence was caused by "terrorist acts" fuelled by Iran's enemies.
The US-based Human Rights Activists News Agency (HRANA), however, has recorded more than 7,000 killings in the crackdown, the vast majority protesters, though the toll may be far higher. More than 50,000 have been arrested, it says.
US President Donald Trump had initially cheered on the protesters, threatening to intervene on their behalf as authorities launched a deadly crackdown, but his threats soon shifted to Iran's nuclear programme.
Washington launched strikes with Israel on Iran on February 28, sparking retaliatory strikes by Tehran against Israel and US bases across the Gulf region.
What do YOU consider to be "first party sources" for Iran?
Why it is OK for Iran to be an Islamic Theocracy but wrong for Israel to be a Jewish State? This is a pretty big double standard. You use the word Zionist like a slur.
A first party source, means directly from the originating party. In this case, there is no statement from Iranian officials that exists. Just 2nd hand quotes. Other first party sources would be photographic evidence, videos... like what we have of Zionist crimes in Gaza, the West Bank, Iran, Lebanon...
Iran is inhabited by indigenous people. Israel is ruled by colonizers. That is the difference. Another huge difference that impacts me as a US citizen is that Iran has zero influence on my government and there are zero laws removing my rights in favor of Iran, which is the exact opposite of Israel.
As a US citizen you should be aware that the US was stolen from the Native Americans the exact same way you believe that Israel was stolen from the native population.
Iran was violently colonized by Arabs. The history of most countries contains many wars over land, yet only Israel gets focused on. So strange.
And the only reason Iran is Shia is because it was converted by force
The Muslim conquest of Persia or Arab conquest of Iran occurred between 633 and 651, when the Rashidun Caliphate under Umar conquered the Sasanian Empire as part of the early Muslim conquests, which began under Muhammad in 622.
Yes, and just like the Natives had the Pueblo Revolt uprising, the Palestinians also resist. If it was 1570, I would absolutely support Indigenous people resisting their land being stolen and their own genocide.
What difference does the year make? Why do you think that the Palestinians have the right to violently fight for their land but not native Americans? It is a huge double standard. The harsh truth is that the Palestinians can't get their land back by force and all their efforts at doing so have made their lives much much worse.
"Tu quoque" comments don't help any discussion. Both the US and Iran are wrong. Iran is wrong for the massacre and the US is wrong for starting a war with Israel against Iran. They are wrong for different reasons. Iran is run by a despotic regime and the US has lost the plot. Possibly Trump is trying to deflect from the Epstein files and create a rally around the flag effect as the midterms approach and doesn't have better ideas to get the public on side. Israel is in the wrong here too.
"Any conversation about token costs devolves into an ad-hoc, informally-specified, bug-ridden implementation of half of generally accepted accounting principles."
We have a way of determining if Anthropic is, or has the capability of being profitable, and what the levers to that may be. AI may be world-changing, but the accounting principles behind AI labs are no different than those behind a Pizza Hut.
Even if the cost of "inference + serving" is lower than the revenue from selling a token, the relevant question is the depreciation schedule of the cost of training. I.e., if I spend $1 on training, how long do I have before I have to spend $1 again?
Almost certainly, any reasonable depreciation schedule of the cost of training will result in leading labs being presently wildly unprofitable. So the question is:
What can be done to make training depreciate more slowly? Perhaps users can be persuaded to stick around using non-frontier models for longer, although then there's a shift in the competitive landscape.
If users cannot be persuaded (forced?) to use legacy models, then the entire business model is thrown into question, because there's no reason why training frontier models would ever get cheaper: even if it gets cheaper on the margin, surely that will result in more compute used to generate an even "better" model, resulting in more spend in the aggregate.
This doesn't mean that the AI industry is "doomed". A couple of things could happen, and this is where the frontier labs should be focusing their attention:
1. They could find a way to climb up the value chain and capture more of the consumer surplus.
2. There could be a paradigm shift in compute architecture/compute cost.
3. We could reach a limit of marginal utility, shifting consumption to legacy models, thereby lengthening the depreciation/utility of training.
Edit: My assertion of "Almost certainly, any reasonable depreciation schedule of the cost of training will result in leading labs being presently wildly unprofitable." is made with no real information, just a gut feeling, and should not be taken seriously.
Dario has made a specific cohort argument here. His numbers (from various interviews) are: you train a model in 2023 for $100M, deploy it, and it earns $200M over its lifetime. Meanwhile you train the 2024 model for $1B, which goes on to earn $2B. Each vintage returns 2x on its training cost.
However, the GAAP P&L tells the opposite story. You book $200M revenue in the same year you spend $1B training the next model, so you report an $800M loss. Next year you book $2B against $10B in training spend, reporting an $8B loss. The business looks like it's dying when every individual model generation actually generates a healthy profit.
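The cohort-vs-GAAP mismatch is just arithmetic; here's a minimal sketch using the cartoon numbers from the interviews (10x cost growth per vintage, 2x lifetime return, revenue recognized the year after training):

```python
# Dario's cartoon cohort numbers, in $M.
cohorts = [
    {"year": 2023, "training_cost": 100, "lifetime_revenue": 200},
    {"year": 2024, "training_cost": 1_000, "lifetime_revenue": 2_000},
    {"year": 2025, "training_cost": 10_000, "lifetime_revenue": 20_000},
]

# Per-cohort view: every vintage individually returns 2x its cost.
for c in cohorts:
    ratio = c["lifetime_revenue"] / c["training_cost"]
    print(f"{c['year']} model: {ratio:.1f}x return on training")

# GAAP-style annual view: this year's revenue (last year's model)
# booked against this year's training spend (next year's model).
for prev, cur in zip(cohorts, cohorts[1:]):
    pnl = prev["lifetime_revenue"] - cur["training_cost"]
    print(f"Year {cur['year']}: reported P&L = {pnl:+,} ($M)")
```

With these inputs the annual view prints an $800M loss, then an $8B loss, exactly while every individual cohort is doubling its money: the "loss" is just next year's (much larger) bet landing in this year's expenses.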
That's actually Dario's answer to your depreciation question. If each cohort earns back its training cost within its natural lifespan (however short that lifespan is), the depreciation schedule is already baked in. The model doesn't need to live forever, it just needs to return more than it cost before the next one replaces it. Whether that's actually happening at Anthropic is a different question, and one we can't answer without audited financials, but it's the claim Dario makes (and seems entirely reasonable from a distance).
GAAP doesn't really work here. The R&D treadmill means you are always betting on next year, and it's NOT inventory or something you can defer your cost on. It's an upfront R&D expense.
So what happens in year 10 when Anthropic makes a $10B training run and it only returns $8B? They're cooked.
If those numbers are correct, then my assertion that "Almost certainly, any reasonable depreciation schedule of the cost of training will result in leading labs being presently wildly unprofitable." is incorrect.
And I admit that I made that assertion from my gut without actually knowing if it's true or not.
If you have to continually spend greater amounts of money to keep up with the competition on every new model then it is dying.
Every single time a company comes around and goes, "Actually, GAAP is wrong, look at my new math that says we're good," it's led to much wailing and gnashing of teeth in the future when it inevitably isn't.
That's an interesting idea. I'm curious, though, are there any other industries and/or companies that have tried to pull this sort of thing off? And what ultimately happened to them?
Enron had a system like this. They regularly worked on large, long-term contracts that became profitable over years/decades. They wanted to pull rewards forward, so they would estimate the total value of the contract and book the profit when it closed. Mark-to-market accounting wasn't unheard of at the time, but using it for assets without an active market was unique. Without a market to mark against, the numbers were best-guess projections.
The problem is everyone along the line is incentivized to be aggressive with estimates (commissions for sales are bigger, public financials look better) and discouraged from correcting the estimates when they go wrong.
Estimating multi-year returns on frontier models looks harder than estimating returns on oil and gas projects in the 90s.
Why would anyone use the $200M model when the $1B model is available? The company increases its bet with each iteration, increasing risk. It blows up at some point because it cannot guarantee a $2B return after a $1B investment.
To the GAAP point: $200M or $1B or $10B is not a loss but cash converted into an asset. It won't affect the bottom line at all, unless the company re-evaluates the asset and says it is now worth $1M instead of $200M. That would hit the bottom line.
He says "You paid $100 million and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume in this cartoonish cartoon example that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model is actually, in this example is actually profitable. What's going on is that at the same time"
Importantly, you'll notice that he's talking revenue, and assumes that inference is cheap enough/profitable enough that $100M + Inference_Over_Lifetime < $200M.
> They could find a way to climb up the value chain and capture more of the consumer surplus.
Yes, this is exactly why OpenAI and Anthropic are hyping AGI. If LLMs ever become good enough to replace workers, the first sign will be frontier model companies launching competitor businesses. It doesn't make sense to sell the formula for gold when you can just use it yourself.
> There could be a paradigm shift in compute architecture/compute cost.
Possible, but no signs of this on the horizon. If it does happen, it's impossible to predict when it will.
> We could reach a limit of marginal utility, shifting consumption to legacy models, thereby lengthening the depreciation/utility of training.
I'm not sure market dynamics will allow this any time soon. We seem to have already achieved a marginal utility equilibrium in terms of model size, so training new models on trending use-cases (e.g. synthetic data targeting tool calls, agentic workflows, computer use, etc) is really the driving force behind product differentiation. Nobody wants to admit "training new models isn't profitable" because that deflates the AGI singularity narrative that all this investment hinges on.
I'm not an accountant, but I would expect Pizza Hut's accounting is significantly more complex than Anthropic's. A 50+ year old global franchise with physical supply chain partnerships vs. an upstart SaaS company?
Your instincts are good here. Whatever complexity Pizza Hut has comes from being the weakest of the Yum! Brands siblings: KFC carries the international profit, Taco Bell owns domestic. Pizza Hut is slow growth, perpetual restructuring, and a weird inherited obligation to always serve Pepsi.
The world labor market is ~$35T USD yearly, and so that is roughly the order of magnitude to balance against frontier model training cost. E.g., Dario Amodei has his "data center of PhDs" level, where he assumes that's "good enough" to stop training frontier models; so if that can take even 5% of the global labor market, that's ~$1.75T a year in revenue, balanced against current model training costs of ~$1B. Three orders of magnitude might get us to PhD level? I think that is ultimately the bet the big AI companies are making. Even if $1T is the cost of PhD-level AI, then three/four companies could depreciate that over 4-5 years while sharing that 5% of the global market.
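Back-of-envelope, with every input being an assumption from this bet (not real financials):

```python
# All figures are the comment's assumptions, not reported numbers.
world_labor_market = 35e12       # ~$35T/year global labor market
captured_share = 0.05            # assume AI captures 5% of it
revenue_per_year = world_labor_market * captured_share
print(f"Revenue pool: ${revenue_per_year / 1e12:.2f}T/yr")

phd_level_training_cost = 1e12   # assume $1T to reach "PhD level"
companies = 4                    # shared among 3-4 frontier labs
years = 5                        # depreciation window
annual_depreciation = phd_level_training_cost / years
annual_revenue_per_lab = revenue_per_year / companies
print(f"Per lab: ${annual_revenue_per_lab / 1e9:.0f}B/yr revenue vs "
      f"${annual_depreciation / 1e9:.0f}B/yr training depreciation")
```

Under those assumptions each lab would be booking on the order of $400B a year in revenue against $200B a year of amortized training cost, which is why a three-orders-of-magnitude jump in training spend can still pencil out, if (big if) the 5% capture actually happens.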
Of course, a model does not really depreciate; the problem is that labs are forced by competitive pressure to offer newer/better models at the same price.
This is what the elites of the gilded age called "ruinous competition", and the solution today will be the same as back then: monopoly power. This has been the business plan of the tech VC industry for 25+ years.
The models don't learn without training, and they have finite context windows. As software updates around the world, don't they have to be trained on the new information to stay up to date?
Fair, but in this context people are generally contemplating the need to replace the model with a new, much larger and more expensive model, not just refresh the training set.
It's partly about updating what it "knows", but more about keeping up with competitive pressure on capabilities.
I’m actually not familiar enough to know. Can models be refreshed for cheaper? I thought due to the black box nature of them that there would be no difference between updating and generating a whole new model.
Maybe they can get to a "good enough" level where the next model isn't 10x the price, but if the business model requires ever-increasing sizes to paper over the R&D costs from the previous set, then I don't understand how they would transition to profitability.
People? There's a guy upthread quoting the Anthropic CEO on how they view the value of increasing training against the offset of the entirety of the $35T worldwide labor market... It's not "people". It's the salesmen.
Elon's superpower is commanding insane valuation premiums. The trouble with this is that "the bill eventually comes due", so to speak, which forces Elon's companies to take wilder and wilder bets, or to make wilder and wilder promises.
With Tesla it was robotaxis, and when that failed to materialize, humanoid robots (fucking LOL).
SpaceX is an even more insane example. They are eyeing an IPO at a 1.5 trillion valuation. And yet the market for satellite launches is simply not that big. (What would you do with a satellite, if I gifted you one for free?). Estimates have SpaceX doing about $3B in annual earnings, which would give them a 500x earnings multiple at a 1.5T valuation (Apple: 35).
And so SpaceX/Elon had to invent the absolutely idiotic idea of "data centers in space" to sell some future vision of tens of thousands of launches per year.
He keeps upping the ante (and the ridiculousness of the vision), and so far investors keep funding it.
Me? I've realized that this madness is entirely "opt-in" and I choose to simply...not opt-in.
What would you do with a satellite, if I gifted you one for free?
Let's forget orbital mechanics for a while to make this answer more fun. It would follow me around and provide a dedicated, private lifeline of communication anywhere I go, real-time aerial surveillance of my surroundings, and eventually lasers to zap anyone who pisses me off.
Yes, and the reality is that any of those would require a fairly large constellation of satellites. I guess the play is that many large constellations of satellites will be launched.
> Estimates have SpaceX doing about $3B in annual earnings
Ummm, that information seems terribly out of date or just uninformed - Starlink alone is estimated around $8 billion for 2024 and projected around $12 billion for 2025, with continued growth.
Voting is not a monolithic process. It's actually a combination of 3 things:
- How votes are cast
- How votes are counted
- How votes are custodied
In order for an election to be trusted, all three steps must be transparent and auditable.
Electronic voting makes all three steps almost absolutely opaque.
Here's how Mexico solves this. We may have many problems, but "people trust the vote count" is not one of them:
1. Everyone votes, on paper, in their local polling station. The polling station is manned by volunteers from the neighborhood, and all political parties have an observer at the station.
2. Once the polling station closes, votes are counted in the station, by the neighborhood volunteers, and the counts are observed by the political party observers.
3. Vote counts are then sent electronically to a central system. They are also written on paper, and the paper is displayed outside the polling station for a week.
The central system does the total count, but the results from each poll station are downloadable (to verify that the net count matches), and every poll station's results are queryable (so any voter can compare the vote counts displayed on paper outside the station to the online results).
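The audit this design enables can be sketched in a few lines (hypothetical data and field names; Mexico's real system publishes per-station results in its own format):

```python
# Tallies copied from the paper sheets posted outside each station.
posted_on_paper = {
    "station_001": {"A": 120, "B": 95},
    "station_002": {"A": 80,  "B": 143},
}
# Per-station results downloaded from the central system.
central_results = {
    "station_001": {"A": 120, "B": 95},
    "station_002": {"A": 80,  "B": 143},
}

# Check 1: every station's paper tally matches the central record.
mismatches = [s for s in posted_on_paper
              if posted_on_paper[s] != central_results[s]]

# Check 2: the national total equals the sum of station results.
national_total = {}
for tally in central_results.values():
    for candidate, votes in tally.items():
        national_total[candidate] = national_total.get(candidate, 0) + votes

print("mismatched stations:", mismatches)
print("national total:", national_total)
```

The point is that both checks are things any voter or party observer can run independently: no trust in the central system is required beyond it publishing the per-station numbers.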
Because the counting is distributed, results are available night-of in most cases.
Elections like this can be gamed, but the gaming becomes an exercise in coercing people to vote counter to their preference, not "hacking" the system.
**
Edit: Some people are confused about what I mean by "coerced." Coerced in this case means "forced to vote in some way."
The typical way this is done is as follows:
- The "coercer" obtains a blank ballot (for example, by entering the ballot box and hiding the ballot away).
- The blank ballot is then filled out in some way outside the poll station.
- A voter is given the pre-filled ballot and coerced into casting it, which they prove by returning the fresh blank ballot they received at the station.
- Rinse and repeat.
This mode of cheating is called the "revolving door" for obvious reasons.
What I fail to understand is why only in the US the voting procedure is so controversial. Want paper vote? That's racism. Want counting in a day? That's xenophobia. Want to limit certain time window for counting? That's definitely racism. It's funny that the US criticized EU countries for getting less democratic. Well, at least those countries have a much saner voting process.
> Want paper vote? That's racism. Want counting in a day? That's xenophobia. Want to limit certain time window for counting? That's definitely racism.
This characterization is reductive and basically a straw-man.
The principle underlying opposition to "counting in one day" is basically that every vote that is correctly placed in time should be counted, and as many people as possible should have access to voting. Mail-in voting, for example, has been shown to increase voter turnout by making voting more convenient, but you have the question of what to do with ballots that are received late. There are pretty good arguments for counting all mail-in ballots that are postmarked before the election, and I don't think "xenophobia" is among them.
In America specifically, all decisions relating to access to voting are considered against a backdrop of our widespread and systematic attempts to restrict voting. A modern example is the wide disparity in the number of polling places, and therefore the amount of time required to vote, in "urban" regions of some southern states as compared with rural regions.
I have never heard of a racism-based opposition to paper ballots. I think you just made that up.
I think these claims are badly misconstrued at best, and match one party's outlook. The Republican Party has tried inhibiting voting in ways that benefit it, often by making it more difficult for minorities to vote.
Many of those tactics existed on a large scale in the South before the Voting Rights Act, and when the Supreme Court recently invalidated the Act, many have returned. For example, reducing voting locations in minority areas so people have to travel far and wait longer. Texas and possibly other states have criminalized errors in voter registration (iirc), making it dangerous to register voters. Georgia, and others, conducted a large-scale purge of voting rolls, requiring people to re-register. Requiring government-issued ID prevents many people from voting, often poor people and immigrants who lack what wealthier people are accustomed to. Florida's voters passed a ballot measure enabling ex-felons to vote; the Republicans added a law requiring full restitution to be paid (iirc) before they could vote, effectively canceling the ballot measure vote. And these days almost any Democratic victory is called fraud; remember the 2000 election, the lawsuits, riots, threats against ordinary citizens working on local election boards and on elections, etc.
Directly addressing the parent's claims: I've never heard of paper votes being called racism - could you share something with us? Calls to limit counting are often accompanied by calls to limit the voting period, invalidate votes received later (e.g., due to US mail delays), and calls to greatly restrict mail-in voting - all things that make it more difficult for people working two-three jobs.
The Democrats have their flaws; I've never seen them try to limit voting. That should be something everyone in the US - and in the world - agrees on: Do all we can to enable everyone to vote.
There are historical factors that contribute to those things you brought up. American minorities are disproportionately affected by things like limited hours, for example. You'd know that if you were an American POC.
GP has also taken these issues and personalized them. They're about impact and access, not whether the person raising the idea is racist or a xenophobe or whatever.
You'll find those claims in sibling comments to yours, so they are clearly pretty real!
(At the time of writing this comment there's a sibling claiming that the comment cannot possibly understand this POV because they are not "an American POC.")
The specific comment by popalchemist you're referring to is actually fine (they're talking about voter suppression, which is a problem in the US), and isn't at all one of the claims that hintymad says people are making.
Politicians just use those accusations as cover for conducting fraud or enabling the conditions that they inherently benefit from. There's no reason to not use paper, ID checks, and same-day accounting.
> There's no reason to not use paper, ID checks, and same-day accounting.
Sure there is. ID checks make it impossible for people who don't have government-issued ID to vote, which is a lot of people; and furthermore ID checks don't actually improve election security. Same-day counting is impossible if you are going to count all mail-in votes that were sent before the deadline.
To be clear, I'm not saying that politicians aren't agitating for conditions that benefit them. That's their job. But I also believe in supporting access to voting and fair elections, and at least some of the politicians' arguments help achieve those ends.
Yeah, I forgot voter ID. All democratic countries mandate voter ID except the US and another couple(?). Yeah, as if only the US has the "voter access" problem
There are many reasons not to do those things, "lalala not listening" isn't an excuse.
It's usually very simple, too. For voter ID: ID isn't evenly distributed, and that's not an opinion, that's a fact.
So if you require ID, then obviously you will suppress some demographics more than others. That creates a bias. Again, not opinion.
This can be solved. You will notice none of the people championing voter ID make even a thinly-veiled attempt to solve it. Instead they say stupid things like "oh wow so black people can't get ID now? Uh, buddy, I think YOU'RE the racist one!"
Surely what you want is to enable everyone to vote, and then to count all the votes?
In the UK where I have most experience of this stuff, there are many, many small polling stations, and usually you just walk right in and vote without queueing. The longest I ever had to wait to vote was about 30 minutes. Votes are counted locally and results usually declared within a handful of hours. Some take longer due to recounts etc if the tally is very close in a certain area, but the whole thing is pretty uncontroversial and pretty low-effort.
Here in Australia, voting is compulsory, it's always on a Saturday, and there's usually a charity sausage-sizzle at the polling place, it's sorta fun. And again, AFAICT (I'm not a citizen yet) the infrastructure is over-provisioned so people aren't waiting around forever.
From what I hear about the US, in some places voting can take hours, it seems like the number of polling places is deliberately limited to make it hard for people to vote, and you have those weird/horrible rules cropping up like it being illegal to hand out water to people in line, which seems purely designed to discourage electoral participation. And then you have all these calls to stop the count after a certain time etc.
It's deeply weird from an outside perspective. If counts are taking too long, if people are having trouble voting, provision more... but of course it seems clear that there are motives for underprovisioning, because one or other group thinks it will benefit them.
> Want counting in a day? That's xenophobia. Want to limit certain time window for counting?
Why do either of these matter? If you assume paper voting in-person is secure, then there is zero reason to also limit the time spent counting or the time window for counting. Anything past that point is clearly trying to fill some sort of agenda for the sake of disenfranchising people who cannot adhere to the times you're trying to set.
Here's how we do it in Idaho, which I think is pretty much ideal:
1. Everyone votes on paper.
2. An electronic tallying machine tallies the vote.
3. Vote counts are sent to a central system, IDK if it's electronic or not.
4. Candidates can challenge and start a hand recount at any time.
I think this combo is pretty close to the ideal. The actual ballots are easy to audit. Discrepancies can be challenged. And the machine doing the tallying isn't connected to the internet, it's just a counting tool that gets the job done fast.
For people with disabilities, poll workers can come in and help with the vote.
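The machine-tally-plus-hand-recount combo above can be sketched in a few lines (a toy illustration, not any real election system; the function names are mine):

```python
from collections import Counter

def machine_tally(ballots):
    """What the offline counting machine does: a fast bulk tally."""
    return Counter(ballots)

def hand_recount(ballots):
    """Independent hand recount: one ballot at a time, no machinery."""
    counts = {}
    for ballot in ballots:
        counts[ballot] = counts.get(ballot, 0) + 1
    return counts

# A challenge succeeds only if the two counts disagree; with intact
# paper ballots, anyone can redo this check.
ballots = ["A", "B", "A", "A", "B"]
assert dict(machine_tally(ballots)) == hand_recount(ballots)
```

The point is that the machine is just a speed optimization over a process humans can replicate by hand, which is what makes the paper trail auditable.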
If you’re willing to do away with the secret ballot, you can eliminate a lot of the need for transparency in the mechanics. If people are able to check their own vote for discrepancies and speak to others to confirm their validity, you only really need to confirm that the final vote count is tabulated correctly (which again, is relatively easy to independently verify).
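A toy sketch of what "check your own vote, anyone re-sums the totals" could look like with public ballots (purely hypothetical; the receipt ids and bulletin-board layout are my assumptions, not a real scheme):

```python
import secrets

def cast_public_ballot(board, vote):
    """Append a (receipt_id, vote) row to a published board; voter keeps the receipt."""
    receipt = secrets.token_hex(8)  # random id, so rows aren't tied to check-in order
    board.append((receipt, vote))
    return receipt

def verify_my_vote(board, receipt, vote):
    """Each voter checks that their own row appears as cast."""
    return (receipt, vote) in board

def independent_tally(board):
    """Anyone can re-sum the published rows against the official total."""
    totals = {}
    for _, vote in board:
        totals[vote] = totals.get(vote, 0) + 1
    return totals
```

Note this only works because the ballot is public: the same receipt that lets you verify your vote also lets you prove it to a vote-buyer, which is exactly the tradeoff being discussed.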
> If you’re willing to do away with the secret ballot
We're not willing to do that. No modern democracy has public ballots. The reason is simple: secret ballots make it effectively impossible to buy votes, as there's no way to prove how any person actually voted.
You’re choosing between making it impossible to buy votes and making it impossible to verify votes. Both come with tradeoffs that can be mitigated, whether by investigating and prosecuting attempted bribery in one case or by maintaining a strict chain of custody in the other. The decision ultimately comes down to a judgment call about your priorities. I don’t think eliminating the secret ballot should be dismissed out of hand, given that most voting was conducted without it prior to the late 19th century.
>Elections like this can be gamed, but the gaming becomes an exercise in coercing people to vote counter to their preference, not "hacking" the system.
If that's gaming the system, what even is the point of voting?
With all due respect, I don't think you understand what the "worst case" scenario looks like for global warming, and how close we are to that scenario. For reference, check out figure 1 in this nature article [1].
That shows warming of about 8°C by 2300 under an "emissions continue current trends" path.
Here's chatgpt giving a picture of what 8°C of warming looks like. Speculative, hallucinations, caveat emptor, etc...but to give a sense of proportion: the last time the earth was 8°C *cooler* than now, ice covered 25% of the planet:
> At +8°C, Earth is fundamentally transformed. Large parts of today’s populated zones—South Asia, the Middle East, Africa, southern Europe, the southern U.S.—are functionally uninhabitable for humans outdoors. Wet-bulb temperatures regularly exceed survivable limits. Agriculture collapses across the subtropics; even mechanized, climate-controlled farming is marginal. Most of the world’s food comes from high-latitude regions: a narrow band across northern Canada, Scandinavia, and Siberia. Sea levels are dozens of meters higher, drowning coastal megacities; Miami, New York, Shanghai, and London are gone. Phoenix is lifeless desert. Seattle is coastal tundra, wetter but still survivable.
> Civilization persists only in fragments. Mass migration and resource wars have rewritten borders. Population is a fraction of 21st-century levels. Global trade, universities, and modern governance are mostly memories. Local, self-sufficient polities dominate. The United States as an institution likely dissolves or transforms beyond recognition—2 out of 10 chance of recognizable survival. Harvard or MIT survive, if at all, as digital archives or autonomous AI-driven knowledge systems—3 out of 10. The world would still have people and culture, but not civilization as we know it.
Edit: I would appreciate knowing why I'm getting downvoted when I added citations for *possible* warming paths (from nature!). Yes, the chatgpt explanation is speculative but I mean, look at the thread we're discussing.
Part of the problem with a swing of ±8°C in global temperature is the speed of it. https://xkcd.com/1732/ shows a timeline going back to 20,000 BC, when the global average temperature was about 5°C lower. There have been changes before, but there was also time for adaptation. Now, in under 200 years, we have warmed about 1.5°C, and the pace is accelerating: it was only around 10 years ago that we passed 1°C over preindustrial levels, and we are already at 1.5°C.
And without adaptation you get mass extinction. Human systems may be quite fragile to the disappearance or deep transformation of key components of the global system.
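A back-of-the-envelope comparison of those rates (all figures are the rough approximations above, not measurements):

```python
# ~5°C of deglacial warming spread over ~20,000 years, versus
# ~1.5°C of industrial-era warming in under 200 years, with the
# most recent ~0.5°C arriving in roughly a decade.
deglacial_rate = 5 / 20000    # ~0.00025 °C per year
industrial_rate = 1.5 / 200   # ~0.0075 °C per year
recent_rate = 0.5 / 10        # ~0.05 °C per year

print(industrial_rate / deglacial_rate)  # ~30x the deglaciation pace
print(recent_rate / deglacial_rate)      # ~200x over the last decade
```

Ecosystems that tracked the slow deglacial rate by migrating or adapting get no such window at two orders of magnitude faster.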
I appreciated your comment. I’ll also note that the path to that future will not be fun - you/chatgpt describe a kind of end state 275 years away, but things will evolve to that state over time. I suspect the downvotes may reflect people’s desire not to face the likely reality.
About 40% of AI infrastructure spending is the physical datacenter itself and the associated energy production. 60% is the chips.
That 40% has a very long shelf life.
Unfortunately, the energy component is almost entirely fossil fuels, so the global warming impact is pretty significant.
At this point, geoengineering is the only thing that can earn us a bit of time to figure...idk, something out, and we can only hope the oceans don't acidify too much in the meantime.
Interesting. Do you have any sources for this 60/40 split?
And while I agree that the infrastructure has a long shelf life, it seems to me like an AI bubble burst would greatly depreciate the value of this infrastructure as the demand for it plummets, no?
Allied General | Northern Mexico | REMOTE (Mexico) with frequent travel or ONSITE | Full time
-
Can you handle projects that seem unsexy and uncool but are actually incredibly gratifying and great businesses? Read on. We’re building software to help small manufacturing firms ($10 - $20MM in revenue) build things with higher quality and speed. Help us help small firms compete against the big guys by delivering manufacturing more consistently and with higher quality.
Three pilot customers/design partners to start working with, on both sides of the border.
The company is not VC backed and probably won’t be for a long time. Good wages, lots of work. Good people. Not a lot of bullshit.
Us: Highly technical founding team with track record of success. Tiny tech team, large manufacturing team.
You: True full-stack. You enjoy shipping features, not code. You keep things simple. The role is “founding engineer”: you must be ok with a tiny team and working mostly by yourself.
The current "futurist" vision is one of humanoid robots taking over many/most jobs done by humans today, but - as someone who routinely hires human welders & assemblers - the dexterity required for most ad-hoc tasks seems many, many decades (if not more?) away from what I see robots do--yes, even the fancy Chinese jumping ones.
This has led me to think one of two things:
1. The robotics revolution will not come. It's predicated on the idea that advances in robotics will follow a curve of the same shape as advances in compute/ai, which will not happen. OR...
2. There has been some paradigm-shift or some breakthrough that has put robotics improvement on a new curve.
To an outsider, what I see in robots is not categorically different from, like, the Sony AIBO dog in 1999. It's significantly better of course, but is it really that different? (Whereas what we can do in compute-land today is categorically different because of the transformer model breakthrough.)
So:
1. Have there been any breakthroughs that would lead us to believe that a robot will be able to like, look under a table to adjust a screw?
2. What are the scaling laws & practical limits to present-day robotic dexterity? Is it materials? Energy density? What?
3. What is the real rate of improvement along these key dimensions? Are robots improving linearly? Geometrically? Exponentially?
4. Or should I keep discounting robotics until we get our first robots that are made of meat? That, I'd believe, would result in exponential change!