
The code they posted doesn't quite explain the root cause. This is a good case study for resilient API design and testing.

They said their /v1/prefixes endpoint has this snippet:

  if v := req.URL.Query().Get("pending_delete"); v != "" {
      // ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
      prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
      
      [..snip..]
  }
What's implied but not shown here is that the endpoint normally returns all prefixes. They modified it to return just those pending deletion when a pending_delete query string parameter is passed.

The immediate problem, of course, is that this block will never execute if pending_delete has no value:

  /v1/prefixes?pending_delete   <-- doesn't execute block
This is because Go defaults query params to empty strings and the if statement skips this case. Which makes you wonder, what is the value supposed to be? This is not explained. If it's supposed to be:

  /v1/prefixes?pending_delete=true   <--- executes block
Then this would work, but the implementation fails to validate this value. From this you can infer that no unit test was written to exercise the value:

  /v1/prefixes?pending_delete=false   <-- wrongly executes block
The post explains "initial testing and code review focused on the BYOIP self-service API journey." We can reasonably guess their tests were passing some kind of "true" value for the param, either explicitly or using a client that defaulted param values. What they didn't test was how their new service actually called it.

So, while there's plenty to criticize on the testing front, that's first and foremost a basic failure to clearly define an API contract and implement unit tests for it.

But there's a third problem, in my view the biggest one, at the design level. For a critical delete path they chose to overload an existing endpoint that defaults to returning everything. This was a dangerous move. When high stakes data loss bugs are a potential outcome, it's worth considering a more restrictive API that is harder to use incorrectly. If they had implemented a dedicated endpoint for pending deletes they would likely have omitted this default behavior meant for non-destructive read paths.

In my experience, these sorts of decisions can stem from team ownership differences. If you owned the prefixes service and were writing an automated agent that could blow away everything, you might write a dedicated endpoint for it. But if you submitted a request to a separate team to enhance their service to return a subset of X, without explaining the context or use case very much, they may be more inclined to modify the existing endpoint for getting X. The lack of context and communication can leave the risks involved unexamined.

Final note: It's a little odd that the implementation uses Go's "if with short statement" syntax when v is only ever used once. This isn't wrong per se but it's strange and makes me wonder to what extent an LLM was involved.


> But there's a third problem, in my view the biggest one, at the design level. For a critical delete path they chose to overload an existing endpoint that defaults to returning everything. This was a dangerous move. When high stakes data loss bugs are a potential outcome, it's worth considering more restrictive API that is harder to use incorrectly. If they had implemented a dedicated endpoint for pending deletes they would have likely omitted this default behavior meant for non-destructive read paths.

Or a POST endpoint, with the client sending a serialized query object in the body rather than relying on the developer remembering the magical query string.
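A rough sketch of what that could look like (field and helper names are illustrative): a typed request body where a typo'd key or a missing flag is rejected outright instead of silently defaulting.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// prefixQuery is a hypothetical typed request body. Using *bool lets us
// distinguish "flag absent" from "flag set to false".
type prefixQuery struct {
	PendingDelete *bool `json:"pending_delete"`
}

// decodeQuery rejects unknown fields and a missing flag, so a malformed
// request fails loudly instead of defaulting to "return everything".
func decodeQuery(body string) (prefixQuery, error) {
	var q prefixQuery
	dec := json.NewDecoder(strings.NewReader(body))
	dec.DisallowUnknownFields()
	if err := dec.Decode(&q); err != nil {
		return q, err
	}
	if q.PendingDelete == nil {
		return q, errors.New("pending_delete is required")
	}
	return q, nil
}

func main() {
	for _, body := range []string{
		`{"pending_delete": true}`,
		`{"pendingdelete": true}`, // typo'd key: rejected, not ignored
		`{}`,                      // missing flag: rejected, no default
	} {
		_, err := decodeQuery(body)
		fmt.Printf("%-28s err=%v\n", body, err)
	}
}
```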


I think this comment misses that OpenAI hired the guy, not the project.

"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.

The open source project, which will supposedly remain open source and able to be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity it's precisely because the guy doing the vibe coding was doing something personally unique.


“”” Like it or not, so-called vibe coding is the new norm for productive software development”””

Alright


Not only that, his output is insane: he has more active projects than I bother to count and more than 70k commits last year. He's probably one of the best vibe coding evangelists, if not the best.

https://github.com/steipete

It also probably didn't hurt that he favors Codex over Claude.


he favors Codex?

The original name of his ai assistant tool was 'clawdbot' until Anthropic C&D'ed him. All the examples and blog posts walking thru new user setup on a mac mini or VPS were assuming a claude code max account.

I know he uses many llms for his actual software dev.. - right tool for the job. But the origins of openclaw seem to me more rooted in claude code than codex.

Which does give the whole story an interesting angle when you consider the safety/alignment angle that Anthropic pledges to (publicly) and OpenAI pretty much ignores (publicly). Which is ironic, as configuring codex cli to 'full yolo mode' feels more burdensome and scary than in Claude Code. But I'm pretty sure that speaks more to eng/product decisions, and not CEO & biz strategy choices.


Yup:

https://steipete.me/posts/2025/shipping-at-inference-speed

I've also seen later tweets of his confirming that codex is still his choice.


he says claude is the best model for a personal assistant, and that codex is the best model for coding

It looks like most of Peter's projects are just simple API wrappers.

Peter's been running agents overnight 24/7 for almost a year using free tokens from his influencer payments to promote AI startups and multiple subscription accounts.

  Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ...  I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost my around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.

  ...  Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothen it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in russian or korean. Sometimes the monster slips and sends raw thinking to bash.

So creating unsafe software is the new norm?

I’d bet good money that for at least 2/3 of all software ever made, the decision makers couldn’t care less about security beyond "let’s get that checkbox to show we care in case we get sued". Higher velocity >> tech debt and bugginess unless you work at NASA or you're writing software for a defibrillator, especially in the current "nothing matters more than next quarter results" climate.

I have worked over two decades creating government software, and I can say that this is not new.

Security (and accessibility) are reluctant minimum effort check boxes at best. However, my experience is focused on court management software, so maybe these aspects are taken more seriously in other areas of government software.


Yes pretty much. See the Windows 11 security vulnerability chaos going on.

> the new norm

More like the same as it always has been.


Are you confusing normal with norm?

Always has been.

Nah, it was normal but not the norm

Taylor Lorenz has done excellent reporting on this. It's a right wing censorial moral panic that's forced some Democrats to go along with it by positioning it as "protecting kids". This legislation is moving at a fast clip and we have to fight back.

* SCREEN Act age verification with huge implications for all online privacy: https://www.youtube.com/watch?v=8bnp3nmpK9g&list=PLu4srHCWJr...

* Abolishing Section 230, the law that protects platforms like this from being sued for user content (just published today): https://www.youtube.com/watch?v=_eqt8vrtP-U&list=PLu4srHCWJr...

* UK online safety act (it's not just the U.S.) - interview with the lawyer defending 4chan: https://www.youtube.com/watch?v=DD3PGp9RhTw&list=PLu4srHCWJr...


> using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use

Space changes this. Laser based optical links offer bandwidth of 100 - 1000 Gbps with much lower power consumption than radio based links. They are more feasible in orbit due to the lack of interference and fogging.

> Building data centres in the middle of the sahara desert is still much better in pretty much every metric

This is not true for the power generation aspect (which is the main motivation for orbital TPUs). Desert solar is a hard problem due to the need for a water supply to keep the panels clear of dust. Also the cooling problem is greatly exacerbated.


You don’t need to do anything to keep panels with a significant angle clear of dust in deserts. The Sahara is near the equator but you can stow panels at night and let the wind do its thing.

The lack of launch costs more than offsets the need for extra panels and batteries.


What’s your source for that claim? Soiling is a massive problem for desert solar, causing as high as 50% efficiency loss in the Middle East.[1]

[1] https://www.nlr.gov/news/detail/features/2021/scientists-stu...


A relevant quote from that article.

“The reason I concentrate my research on these urban environments is because the composition of soiling is completely different,” said Toth, a Ph.D. candidate in environmental engineering at the University of Colorado who has worked at NREL since 2017. “We have more fine particles that are these stickier particles that could contribute to much different surface chemistry on the module and different soiling. In the desert, you don’t have as much of the surface chemistry come into play.”


You’re not summarizing the article fairly. She is saying the soiling mechanisms are environmentally dependent, not that there is no soiling in the desert. Again, it cites an efficiency hit of 50% in the ME. The article later notes that they’ve experimented with autonomous robots for daily panel cleaning, but it’s not a generally solved problem and it’s not true that “the wind takes care of it.”

And you still haven’t provided a source for your claim.


I’m saying the same thing she is: that soiling isn’t as severe in the desert, not that it doesn’t exist.

The article itself said the maximum was 50% and it was significantly less of a problem in the desert. Even 50% still beats space by miles; that only increases per-kWh cost by ~2c, and the need for batteries is still far more expensive.

So sure I could bring up other sources but I don’t want to get into a debate about the relative validity of sources etc because it just isn’t needed when the comparison point is solar on satellites.


You are again misquoting the article. She did not say soiling was "significantly less of a problem" in the desert. She in fact said it "requires you to clean them off every day or every other day or so" to prevent cement formation.

You claimed it was already a solved problem thanks to wind, which is false. You are unable to provide any source at all, not even a controversial one.

And that's just generation. Desert solar, energy storage and data center cooling at scale all remain massive engineering challenges that have not yet been generally solved. This is crucial to understand properly when comparing it to the engineering challenges of orbital computing.


Now you make me want to come up with a controversial source. The Martian rovers continued to operate at useful power level for decades without cleaning.

But but lack of water…


Anyway here’s some actual science on why going vertical makes a big difference.

https://link.springer.com/article/10.1007/s11356-022-19171-5


Thank you for providing a source. That’s an early stage research paper, not the proven solution you originally implied. There are tons of early stage research papers on all these problems on earth and in space. Often we encounter a bunch of complications in applying them at scale such as dew-related cementation[1], which is a key reason why they haven’t been deployed at sufficient scale.

That you point to the Mars rover, a mission with extremely budgeted power requirements, as proof of how soiling doesn’t pose an impediment to mega scale desert solar farms, only underscores the flaw in your reasoning.

[1] https://www.sciencedirect.com/science/article/abs/pii/S22131...


“I don’t want to get into a debate about the relative validity of sources etc”

> Not the proven solution

Yet you quote a paper saying it can work. “This impact can have a positive or negative effect depending on the climatic conditions and the surface properties.”

I have no interest in debating with you because I don’t believe you are capable of an honest debate here. The physics doesn’t change and the physics is what matters.

> doesn’t pose an impediment

Nope. I said it beats “space”, not that soiling doesn’t exist. That’s what you have to demonstrate here and you have provided zero evidence whatsoever supporting that viewpoint. Hell, they could replace the entire array every 5 years and it would still beat space. Even if what you said was completely true, you would still lose the argument.


The argument here is simply over your false claim that "You don’t need to do anything to keep panels with a significant angle clear of dust in deserts." Your only source does not, in fact, establish that, and cementation is in fact a challenge with desert solar -- something that happens much faster than every five years.

Repeating unsupported claims and declaring yourself the winner does not, it turns out, actually help you win an argument.


Shouldn't swarms of quadcopter drones zipping around the panels be able to handle that?

Wouldn't even need to be that 'autonomous', since the installation is fixed.

More like the things simulating fireworks with their LEDs in preprogrammed formation flight over a designated area.


You don't need quadcopters. Solar panels arranged in rows have rails that cleaning robots can drive on.


Indeed, that seems unnecessarily complex for what is actually needed. I don't understand why the great grandparent comment seems to suggest it's an "unsolved" problem - as if grid-scale solar buildouts don't already have examples of things like motorized brushes on rails for exactly this already.

And it's always a numbers game - sure they're not /perfect/, but a few % efficiency loss is fine when it's competing against strapping every kilo of weight to tons of liquid hydrogen and oxygen and firing it into space. How much "extra" headroom to buffer those losses would that equivalent cost pay for?

And solar panels in space degrade over time too - between 1-5% per year depending on coatings/protections.


The same panel produces much more electricity in space than at the bottom of the atmosphere, because the atmosphere reflects and absorbs a substantial fraction of the light. Additionally, the panel needs less glass or no glass in space, which makes it lighter and cheaper.

Launch costs have shrunk significantly thanks to SpaceX, and they are projected to shrink further with the Super Heavy Booster and Starship.


Space doesn't really change it though because the effective bandwidth between nodes is reduced by the overall size of the network and how much data they need to relay between each other.


Yup. We don't use fibre optics on earth rather than lasers because of some specific limitation of the earth's surface that being in orbit would avoid.

We use them because they're many orders of magnitude cheaper and simpler for anywhere near the same bandwidth for the distances required.


> We don't use fibre optics on earth rather than lasers because of some specific limitation of the earth's surface that being in orbit would avoid.

That's incorrect. Lasers can suffer from atmospheric interference and fogging on earth.

Here is a post from NASA explaining why they like laser communications better than RF in space.[1]

[1] https://solc.gsfc.nasa.gov/modules/kidszone7/mainMenu_textOn...


> It makes far more sense to build data centers in the arctic.

What (literally) on earth makes you say this? The arctic has excellent cooling and extremely poor sun exposure. Where would the energy come from?

A satellite in sun-synchronous orbit would have approximately 3-5X more energy generation than a terrestrial solar panel in the arctic. Additionally anything terrestrial needs maintenance for e.g. clearing dust and snow off of the panels (a major concern in deserts which would otherwise seem to be ideal locations).

There are so many more considerations that go into terrestrial generation. This is not to deny the criticism of orbital panels, but rather to encourage a real and apolitical engineering discussion.


> A satellite in sun-synchronous orbit would have approximately 3-5X more energy generation than a terrestrial solar panel in the arctic.

Building 3-5x more solar plants in the Arctic would still be cheaper than travelling to space. And that's ignoring that there are other, more efficient plants possible. Even just building a long powerline around the globe to fetch power from warmer regions would be cheaper.


> Building 3-5x more solar plants in the Arctic, would still be cheaper than travelling to space.

Well, first you have to make solar panels work through the polar nights; in winter they get a few minutes of sun a day at most.


> Even just building a long powerline around the globe to fetch it from warmer regions would be cheaper.

Deserts have good sun exposure and land availability but extremely poor water resources, which are needed for washing the sand off the panels. There are many challenges with scaling both terrestrial and orbital solar.


I wasn't thinking of going THAT far. Northern Canada/Alaska is in the arctic region, so build the line some thousand miles down to the sunny parts of Canada/USA and call it done. Not like this is particularly hard, probably not even that expensive, compared to a million satellites/future space-debris. Greenland would probably be also a good location.


Sunlight is unevenly distributed in the arctic during the year to say the least.


There are plenty of legit concerns here about e.g. the launch externalities which are actually greater than the launch costs themselves, i.e. climate impact to future generations.

However, one flaw in this critique is that it only looks at the cost of ground-based solar panels and not their overall scalability. That is, manufacturing cost is far from the only factor. There is also the need for real estate in areas with good sun exposure that also have a sufficient fresh water supply for cleaning.

When we really consider the challenges of deploying orders of magnitude more terrestrial solar, it requires a more detailed and specific critique of the orbital vision. Positives include near-continuous solar exposure (in certain orbits) and no water requirements.

Much has been said of cooling but remember, there is a lot of literal space between the satellites for radiative cooling fins. It is envisioned they would network via optical links, and each mini satellite would be roughly on the order of a desktop GPU (not a whole data center rack). The vision is predicated on leveraging a ton of space for lots of mini satellites on the order of a Dell desktop tower. The terrestrial areas that are really cold are also not that great for solar exposure.

Personally I don't know how it will play out but the core concern I have about making these kinds of absolutist predictions is they make weak assumptions about the sustainable scalability of terrestrial power. And that is definitely the case here in that it only looks at the manufacturing cost of solar.


> There is also the need for real estate in areas with good sun exposure that also have sufficient fresh water supply for cleaning.

Solar panels are 20x more efficient than growing corn for ethanol. Swap out some of those 30 million acres of ethanol corn fields (in the US) and you'll have more energy than you need.

More details here: https://www.youtube.com/watch?v=KtQ9nt2ZeGM


Utility scale PV farms should be seen as literally harvesting solar power, not generating it, while still allowing other agriculture like sheep grazing to occur using the same fields.

You plant a PV panel and add its irrigation (power interconnect) and remote monitoring, then you harvest power for the next 25+ years.

Ethanol production excess is a specific US problem because of the misalignment of incentives and lobbying.


I’m all for it — converting just a third of that land to solar would be enough to power the grid in terms of raw output — but there is still a huge, unsolved problem of energy storage at that scale. Without that you’re only powering your data center for five hours a day.


At least in one case the authors claimed to use ChatGPT to "generate the citations after giving it author-year in-text citations, titles, or their paraphrases." They pasted the hallucinations in without checking. They've since responded with corrections to real papers that in most cases are very similar to the hallucination, lending credibility to their claim.[1]

Not great, but to be clear this is different from fabricating the whole paper or the authors inventing the citations. (In this case at least.)

[1] https://openreview.net/forum?id=IiEtQPGVyV


I counted 15 hallucinated citations. The authors' explanation is plausible, but it is still 15 citations to works they clearly have not read. Any university teaches you that citing sources you have not personally verified to support your claim(s) is fraudulent. Apologizing is not enough; they should retract the article.


What makes you say they "clearly have not read" their citations? Are you assuming that because they used ChatGPT to generate the citation section based on their description of the papers that they haven't read the papers? Are you suggesting that their clarifications of which real papers the ChatGPT citations were meant to map to are fake, and if so which ones?


I tried clicking on every number on this site and none of them linked to any primary sources.

I clicked through on the first news item I saw, "Security forces open fire on woman filming them."

This led to a post on X captioned 'Yasuj; "Firing a shotgun at a lady who was filming."'

The attached video[1] did not show a weapon. It appeared to show uniformed forces on motorbikes and some kind of muted firing sound.

A subsequent comment said: "Don't write the wrong text, it's marking with paintball so the operations team can arrest him. The sound of a shotgun is like this, don't give wrong information."

To be clear, this is not meant to defend security forces firing paintballs at or arresting people recording them, just calling into question the integrity of this particular claim suggesting lethal force, and the overall lack of support for the figures claimed.

[1] https://x.com/ManotoNews/status/2008440556867702830


I don't know about this specific incident but Iranian forces have a history of firing at protestors with shotguns including last week

https://iranwire.com/en/features/122471-crackdown-survivor-5...

https://www.telegraph.co.uk/world-news/2025/12/31/armed-offi...

https://www.amnesty.org/en/latest/press-release/2021/07/iran...


I just think it's funny he has an obnoxious snow animation all over his screed against arbitrary graphic elements.


I am skeptical as well BUT on the cooling question, which is one of the main concerns we all seem to have, the article is doing a bit of an apples-to-oranges comparison between the ISS and a cluster of small satellites.

It cites the ISS's centralized 16kW cooling system which is for a big space station that needs to collect and shunt heat over a relatively large area. The Suncatcher prototype is puny in comparison: just 4 TPUs and a total power budget of ballpark 2kW.

Suncatcher imagines a large cluster of small satellites connected by optical links, not little datacenter space stations in the sky. They would not be pulling heat from multiple systems tens of meters away like on the ISS, which bodes well for simpler passive cooling systems. And while the combined panel surface area of a large satellite cluster would be substantial, the footprint of any individual satellite, the more important metric, would likely be reasonable.

Personally I am more concerned with the climate impact of launches and the relatively short envisioned mission life of 5 years. If the whole point is better sustainability, you can't just look at the dollar cost of launches that don't internalize the environmental externalities of stuff like polluting the stratosphere.


In theory rocket launches sound bad, with burning fuels all the way up to the top layers of the atmosphere, but it's not clear right away that we're significantly increasing the "burnt up stuff" vs, say, the ~100 tons of meteorites that hit every day.

Arguments re: Methane as a non-renewable resource are of course right, except that we technically can synthesize methane from CO2 + electricity (e.g., terraform industries), but the pollution angle is presented as-is, without a systematic analysis, right?

What's the actual atmospheric burden here?

This essentially says "We dont know"

https://news.climate.columbia.edu/2025/03/04/rockets-affect-...


Starlink v2 Mini has about 35 kW of solar power at peak irradiance. 2 kW is quite far from the limit of how much juice we can pack into modern mass produced satellites.


Got any guesses about energy used for propulsion, cooling solutions (energy used for them as well as overall capacity), communications and how those might degrade over time in a real environment rather than just academic theory?

That's not even considering the increase in exposure to radiation outside of the Earth's atmosphere (absorbing materials) and weakened at distance protective EM field.


Some of the proposals are much much bigger than this. Five GW, and 16 square kilometres.

It’s amusing that the article points out how large the radiators will have to be, when the proposals already include building giant radiators. Or that the satellites will have to be vastly larger than the ISS; surprise, surprise, that’s also part of the plan.


There’s a weird thing in discussions about space. Lots of people just don’t like space, it makes them think they’re being blasted with science fiction.

So much criticism of space seems to fall into a few categories:

  1. They think serious engineers ever believed STS was a good idea (rather than congressional pork, which is what it always was), and thus assume actual space technologists are basically always wrong about the possibility of ever creating anything new and reliable
  2. They think cost/kg to LEO is somehow a physical law, and can never be improved on
  3. If they accept that SpaceX might actually have better technology that allows new things, they still refuse to wrap their heads around 2-3 orders of magnitude cost reductions due to improved technology; they update, but mentally only on the order of “it will be 50% cheaper, no big deal”
  4. They just hate Elon Musk. On this one, I’m at least sympathetic
Space based data centers are probably not going to happen in the next decade, but most criticism (including this article) just reads as head-in-the-sand criticism, not serious analysis. I’m still waiting for more serious cost-benefit analysis assuming realistic Starship mass budgets.

If I worked for SpaceX, I imagine I’d focus more on just getting more Starlink mass in orbit for at least 3-4 years, but after that, we might have spare capacity we might want to spend on orbital power loads like this.


Things like Golden Dome ruin it for everyone..


I mean why not just have a whole bunch of floating buoys doing computation on the ocean? They can probably get energy both from solar and from the tidal wave energy. Cooling certainly won't be an issue.

Communication might be a bit rough.


I think the interest in outer space comes from the lack of an atmosphere to absorb the sunlight/power.

