
In this scenario, that would be the people paying for the assassination. The people who want it to happen bet that it won't. The people who want to do it bet that it will. The net result is that if one of the people who bet on it happening makes it happen, they are being paid by the people betting against it, in a plausibly deniable way.

A country leader seeing someone suddenly take out a $50 million position on them not being assassinated is not the $50 million vote of confidence a naive read on the market might indicate, it's a $50 million payout to the assassin. Albeit inefficiently so, since others can take the other side of the bet and do nothing. But the deniability may be worth it.


What's even more interesting is when you consider that A) it doesn't have to be one person taking out a large position, it can be multiple people, over time, and B) the assassin doesn't have to be known or confirmed ahead of time: if someone decides their "reserve price" has been met, all they have to do to receive a payout is place the appropriate bet before performing the act.

The end result is a combination of Kickstarter and Doordash for targeted homicide.


> The end result is a combination of Kickstarter and Doordash for targeted homicide.

Or kidnappers. Someone could take the opposite side, kidnap the individual, and guarantee their survival for the year. When the time is up they just dump them in the street and collect the bet.


I'm not sure there's any deniability in placing the "won't be assassinated" bet, when you could equally state it as "I will pay $1M to whoever accepts this bet and assassinates this person".

Anyway, how exactly is this assassin going to collect on their bet? I'm pretty sure law enforcement will be looking into the fact that somebody placed that bet and then, shortly after, the assassination happened.


This could make for fun anti-life insurance.

"I bet I won't die this year."

The only life insurance you get to collect on while you're alive.


"I've gone back and forth internally about whether this is healthy or not for him. I truly don't know."

On a psychological level, I don't know either. I have opinions but they haven't aged long enough for me to trust them, and AI is a moving target on the sort of time frame I'm thinking here.

However, as a sort of tiebreaker, I can guarantee that one way or another this relationship will eventually be abused by whoever owns the AI. Not necessarily the Hollywood-esque "turn them into a hypnotized secret assassin" sort of abuse (although I'm not sure that's entirely off the table...), but something more like highly-targeted advertising, and generally taking advantage of the ability to direct attention and money to the advantage of another party.

Whether or not AI in the abstract can "be your friend", in the real world we live in, an AI controlled by someone else definitely cannot be your friend in the sense we generally mean, because there is a third party in the relationship: the AI owner, whose interests are also being represented. Whatever that looks like in practice, and however comfortably someone from the 22nd century, looking back on a world where "AI friendships" are routine, might apply the word to that relationship, it simply isn't what we'd call a "friend" in the here and now, because a friendship is a relationship between only two entities.


Go's net/http Client is built for functionality and complete support of the protocol, including even such corner cases as trailer headers: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/... — which, for a lot of people reading this message, is probably the first time they've heard of them.

It is not built for convenience. It has no methods for simply posting JSON or automatically unmarshaling a JSON response body, no "fluent" interface, no automatic handling of querystring parameters in a URL, and no direct integration with any particular authentication/authorization scheme (other than Basic Authentication, which is part of the protocol). It only accepts streams for request bodies and only yields streams for response bodies; while this is absolutely correct for a low-level library (and any "request" library that mandates strings with no ability to stream in either direction is objectively wrong), accepting plain strings is a rather nice convenience when you know the request or response is going to be small. And so on and so on.

There are a lot of libraries you can grab that will fix this, if you care: everything from clones of the request library to libraries designed explicitly for scraping cases, and so on. And that is in some sense exactly why the net/http client is designed the way it is. It's designed to be in the standard library, where it can be supported indefinitely because it just reflects the protocol as directly as possible, and whatever whims of fate or fashion roll through the developer community, now or in the future, as to the best way to make web requests, those things can build on the solid foundation of net/http's Request and Response values.

Python is in fact a pretty good demonstration of the risks of trying to go too "high level" in such a client in the standard library.


I'm at a loss as to how some of these projects got funded in the first place. Anyone funding these should have had the perspective to see that there isn't enough power for them. Anyone funding them should have had the perspective to see that by the time power could come online for even a significant fraction of them, the depreciation and interest costs will have murdered the company trying to do it, especially if their solution to that problem is the oh-so-21st century solution of "solving" the problem of losing money by levering up. It does no good to go out of business entirely in 2027 to make the phat buxx in 2030, which seems to be the best case scenario for this space as a whole.

The other question I have is... who exactly is doing all of 1. Using AI right now 2. Making substantial money on it or getting real value and 3. Capacity constrained? Who is actually going to productively soak up all this capacity? It seems to me that bringing all this stuff online can't really make things much cheaper than they are now because the fixed costs aren't going anywhere, and if anything, trying to jam so many projects through all at once just raises those fixed costs even higher. It's not like they can triple data center capacity (increasing AI capacity by, what, 10x? 20x?), stick the new buildings full of AI systems, and sell that 10x+ greater AI capacity at the prices they charge now. Higher capacity would crash the selling price but the costs would be as high or higher than now.

I am at a complete loss as to how the numbers are supposed to work here. You can't build a company in 2026 on the economy and tech infrastructure of 2036, any more than it worked to build a company in 1999 on the economy and tech infrastructure of 2019, no matter how rosy the numbers look on projections that conveniently ignore the fact that the company passes through "death" a year and a half in. Everything promised in 1999 happened, but trying to artificially accelerate it onto Wall Street's timeline burned money by the billions. I'm sure 2036 will have lots of AI in it, but you can't just spend money to bring it forward 10 years by sheer force of will. It has to happen at its own pace.


> The other question I have is... who exactly is doing all of 1. Using AI right now 2. Making substantial money on it or getting real value and 3. Capacity constrained?

Almost all enterprise users for one. At least from what I have seen it is a massive productivity boost for coding and general research. If the costs were ~4x lower, we would be able to do much much more with them. Building datacenters will reduce the cost because increasing supply would reduce the cost.

> It's not like they can triple data center capacity (increasing AI capacity by, what, 10x? 20x?), stick the new buildings full of AI systems, and sell that 10x+ greater AI capacity at the prices they charge now. Higher capacity would crash the selling price but the costs would be as high or higher than now.

This is false. Part of the costs are unit costs which are really high margin. I think the margins are around 50% to 60%. By increasing the capacity, they are bound to make even more profit.

But the other part of the price reflects the lack of capacity.


"Building datacenters will reduce the cost because increasing supply would reduce the cost."

That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.

"This is false. Part of the costs are unit costs which are really high margin."

Can you explain how everybody throwing their money at nVidia lowers the costs? When they are already apparently at max capacity?

Everybody trying to build a data center at once raises the costs of the data centers. Everyone competing for power has already raised power prices, and we've barely begun bringing this stuff online. Everyone demanding multiples of what nVidia is producing means nVidia isn't going to reduce prices any time soon.

Your use of "even more profit" also implies that you think the AI world is making lots of money. nVidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.


> To a first approximation, everybody else involved has lost billions.

"Lost" implies they have nothing to show for it. But they do. Depending on who you're looking at, they have data centers, GPUs, as well as billions in revenue and hundreds of millions of users, both rapidly growing. We can't say anything is "lost" because these are investments, and will only be sunk costs if nobody ever makes money.

But people are already making money. The big names are in a growth stage, so their spending is far outpacing their returns, but if you look beyond, people are making a ton of money on AI, which bodes well for these investments. Some data points:

1. AI startups are growing revenue at a record pace, as confirmed by three separate groups adjacent to them -- investors, enterprise purchase decision makers, and Stripe (which processes their payments): https://news.ycombinator.com/item?id=46730182

2. AI is creating a boom in mobile apps, including a surge in revenue -- https://techcrunch.com/2026/01/21/consumers-spent-more-on-mo...

So much that Apple made a billion more just from their App Store cut: https://www.macrumors.com/2026/03/20/apple-made-nearly-900m-...

3. AI agents boosting holiday sales: https://www.salesforce.com/news/stories/2025-holiday-shoppin...

Keep in mind we are only ~3 years since ChatGPT kicked this whole thing off.


> That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.

Why wouldn't they make money if they are the ones the money is being thrown at?

> Can you explain how everybody throwing their money at nVidia lowers the costs? When they are already apparently at max capacity?

Increasing supply lowers the cost, I'm unsure which part of this is surprising.

> Your use of "even more profit" also implies that you think that the AI world is making lots of money? nVidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.

The companies using AI are making money from it. OpenAI will make money in the future but is losing it now because of R&D and training costs.


> At least from what I have seen it is a massive productivity boost for coding and general research

Are companies releasing more software with fewer developers? If the answer is no, then productivity has not improved. It might SEEM like it improves because you're able to produce more code and spend less time programming, but that might not be the case in actuality.

From what I've seen, AI is very good and very popular, but it hasn't improved programming productivity in a meaningful way. The bottlenecks are unchanged, so writing more code faster doesn't help anything. A lot of companies let a lot of employees go due to AI, and their product velocity has noticeably gone down and their quality is noticeably worse.


In xAI's case, they've installed gas turbines on site to make up the electricity generation shortfall. It's unclear exactly how long that short-term solution will be there, but probably quite a while.

Now is probably a pretty good time to start a capabilities-based language if someone is able to do that. I wish I had the time.

The primary alternatives are:

One, you don't need this. The vast majority of people working on the web are now so thoroughly overserved by their frameworks, especially given that benchmarks like this measure only the minimal overhead a framework can impose, that measuring your framework by how many nanoseconds per request it consumes (I think time per request is a more sensible measure than requests per time) is quintessential premature optimization. For the vast majority of people, all consulting a table like this does is pessimize their framework choices, by slanting them toward taking speed over features when in fact they are better served by taking features over speed.

Two, you are performance bound, in which case these benchmarks still don't help very much, because you really just have to stub out your workload and run benchmarks yourself: you need to holistically analyze the performance of your framework, with your database, with any other APIs or libraries you use, to know what is going to be the globally best solution. Granted, not starting with a framework that struggles to attain 100 requests per second can help, but if you're in this position and you can't identify that sort of thing within minutes of scanning the documentation, you're boned anyhow. Such frameworks aren't really that common anymore.

This sort of benchmark ranges from "just barely positive" to substantially negative in value, and it is a significant hazard if you aren't very, very careful about how you use the information.

Framework qua framework choice doesn't matter much anymore. It's dominated by so, so many other considerations, as long as you don't take the real stinkers.


There's a very wide band between "2G" and "unlimited" to explore.

Cell phone systems already have some tiering built in, at least based on the fine print I've read about my plans. Once I run out of "official data" I fall back to low-priority usage, but the cell system is generally so well-provisioned nowadays that I hardly notice. In 2026, one must take explicit action to force people back to 2G. Nothing would stop these plans from, say, simply always being "low priority usage" but at full speed, and for the most part this would satisfy everyone.

This sort of clause reeks of "it was written into a contract 15 years ago and nobody has even so much as thought about it since then" rather than some sort of choice.


All call centers are actually located in Lake Wobegon, where all the call wait times are above average.

( https://en.wikipedia.org/wiki/Lake_Wobegon#Recurring_monolog... , for the probably many people who don't know the reference.)


Any non-trivial program that has never had an optimizer run on it has a minimal-effort 50+% speedup in it.

As feedback to the author: I made the same mistake initially. It was only around halfway through that I realized the voters in question didn't necessarily care what they were voting for in the usual preferential or political sense, only that they were trying to reach any consensus at all.

Looking back at the page from the top, I see the first paragraph references Paxos, which is a clue for those who know what that is, but I think using "There’s a committee of five members that tries to choose a color for a bike shed" as the example threw me back off the trail, since bikeshedding is the canonical case of people arguing personal preferences and going to the wall for them at the expense of every other rational consideration. I'd suggest making the sample problem something equally trivial, but less pre-loaded with the exact opposite connotation.

