
Cooling at this scale in space is very much not a solved problem. Some individual datacenter racks use more power than the entire ISS cooling system can handle.

It's solved on Earth because we have relatively easy (and relatively scalable) ways of getting rid of it - ventilation and water.



No, I meant in space. This is a solved engineering problem for this kind of mission. Whether they can make it work within the power and budget constraints is the actual challenge, but that's economics. No new tech is needed.


> No new tech is needed.

Sure, in the same sense that I could build a bridge from Australia to Los Angeles with "no new tech". All I have to do is find enough dirt!


No, but building bridges is a good example - it's also a solved problem. Show civil engineers a river, tell them how much and what type of traffic it needs to carry, and they'll tell you it obviously can be done; they'll even tell you what structural elements will be needed and roughly how expensive they are. The problem to solve here isn't whether this can be done, but which off-the-shelf parts to use to make a design that you can afford.

We're past the point of every satellite being a custom R&D job resulting in an entirely bespoke design. We're even moving past the point where you need to haggle about every gram; launch costs have dropped a lot, giving more options to trade mass against other parameters, like more effective heat rejection :).

But I think the first and most important point for this entire discussion thread is: there is a paper - an actual PDF - linked in the article, in a sidebar to the right, which seemingly nobody read. It would be useful to do that.


> Show civil engineers a river, tell them how much and what type of traffic it needs to carry, and they'll tell you it obviously can be done; they'll even tell you what structural elements will be needed and roughly how expensive they are.

Now ask them to do the Australia / Los Angeles one.

"lol no"

The where and the scale matter.


Where: Low Earth Orbit.

Scale: Lots of small satellites.

I.e. done to death and boring. Number of spacecraft does not affect the heat management of individual spacecraft.

Much like number of bridges you build around the world does not directly affect the amount of traffic on any individual one.


> Where: Low Earth Orbit.

Challenging!

> Scale: Lots of small satellites.

So we're getting cheaper by ditching economies of scale?

There's a reason datacenters are ever-larger giant warehouses.

> Much like number of bridges you build around the world does not directly affect the amount of traffic on any individual one.

But there are places you don't build bridges. Because it's impractical.


> Challenging!

  Thus, if launch costs to LEO reach $200/kg, then the cost of launch amortized over spacecraft lifetime could be roughly comparable to data center energy costs, on a per-kW basis.

  If the [SpaceX] learning rate is sustained—which would require ~180 Starship launches/year—launch prices could fall to <$200/kg by ~2035.

  Realizing these projected launch costs is of course dependent on SpaceX and other vendors achieving high rates of reuse with large, cost-effective launch vehicles such as Starship.
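As a rough sanity check on that claim (a back-of-envelope in Python; the kg-per-kW figure and the lifetime are my assumptions, not numbers from the paper):

  # Launch cost per kWh of delivered power, amortized over spacecraft lifetime.
  # Assumptions (not from the paper): 10 kg of spacecraft per kW of IT power,
  # 5-year operating life. The $200/kg launch price is the paper's ~2035 projection.
  launch_price_per_kg = 200.0    # USD/kg
  mass_per_kw = 10.0             # kg per kW delivered (assumption)
  lifetime_hours = 5 * 365 * 24  # 5-year life (assumption)
  launch_cost_per_kw = launch_price_per_kg * mass_per_kw
  cost_per_kwh = launch_cost_per_kw / lifetime_hours
  print(f"${launch_cost_per_kw:.0f}/kW up front, ~${cost_per_kwh:.3f}/kWh amortized")
  # ~$2,000/kW and ~$0.046/kWh, which is indeed in the same ballpark as typical
  # data center electricity prices -- the comparison the paper is making.
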
> So we're getting cheaper by ditching economies of scale?

The economy of scale here is count, not size. This is also why even data centres are made from many small identical parts, such as server racks, which are themselves made from many smaller identical parts.

What has made LEO cheaper than it used to be is reuse. We'll see if "bigger" actually plays out as Starship continues.

> But there are places you don't build bridges. Because it's impractical.

What is and isn't practical changes as technology develops.

Look, I am skeptical of space based beamed power and space based compute, but saying any given proposal must still be bad in 2035 because it would be bad with today's tech is like betting against the growth of EVs or PV in 2015, or against the internet in 1990.

(The reverse mistake is to say that it must succeed, like anyone in 1970 who was expecting a manned Mars mission by 1980).


I humbly request 'dang to strike "read the damn article" off the list of guideline violations.


> Now ask them to do the Australia / Los Angeles one.

> "lol no"

Given how many people dream of megastructures, I bet someone has this as an interview question, some variant of https://what-if.xkcd.com/160/ — I'd guess "a few trillion, tens of trillions of USD" for floating-bridges with anchors etc., but that's just my uninformed not-a-civil-engineer guess.


It's solved for low power cooling.

We do not have a solution for getting rid of megawatts or gigawatts of heat in space.

What the sibling comment is pointing out is that you cannot simply scale up any and every technology to any problem scale. If you want to get rid of megawatts of heat with our current technology, you need to ship up several tons of radiators and deploy massive, kilometer-scale radiator panels. The only way to dump heat in space is to let a hot surface radiate infrared into the void, and that is a slow, inefficient process whose rate is set directly by the radiator's surface area (and temperature).
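To put rough numbers on that (a minimal Python sketch of the Stefan-Boltzmann relation; the 300 K radiator temperature, 0.9 emissivity, and ignoring absorbed sunlight and Earth IR are simplifying assumptions):

  # Radiator area needed to reject a heat load purely by radiation:
  # P = emissivity * sigma * A * T^4, solved for A. Single-sided radiator,
  # no absorbed sunlight or Earth IR (simplifying assumptions).
  SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

  def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9):
      return heat_w / (emissivity * SIGMA * temp_k ** 4)

  for load_w in (25e3, 1e6, 1e9):  # 25 kW rack, 1 MW, 1 GW
      print(f"{load_w / 1e3:>10.0f} kW -> {radiator_area_m2(load_w):>12,.0f} m^2")
  # ~60 m^2 for 25 kW, ~2,400 m^2 per MW, ~2.4 million m^2 per GW under these
  # assumptions; a double-sided radiator roughly halves the area.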

The amount of radiator area you'd need for a scheme like this is entirely out of the question.


They literally have a solution; it's a trivial one, and it's described in the paper. I'll try to paraphrase the whole thing, because apparently no one read it.

1. Take existing satellite designs like Starlink, which obviously manage to utilize a certain amount of power successfully, meaning they've solved both collection and heat rejection.

2. Pick one, swap out its payload for however many TPUs it can power instead. Since TPUs aren't an energy source, the solar/thermal calculation does not change. Let X be the compute this gives you (rough sizing sketch after this list).

3. Observe that the thermal design of a satellite is independent of whether you launch 1 or 10,000 of them. Per point 2, thermals for one satellite are already solved, so this problem is boring and not worth further mention. Instead, go find some X that's enough to give a useful unit of scaling for compute.

4. Play with some wacky ideas about formations to improve parameters like bandwidth, while considering payload-specific issues like radiation hardening, NONE OF WHICH HAVE ANY IMPACT ON THERMALS[0]. This is the interesting part. Publish it as a paper.

5. Have someone make a press release about the paper. A common mistake.

6. Watch everyone get hung up on the press release and not bother clicking through to the actual paper.
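A rough sizing sketch for step 2 (in Python; every number here is an illustrative assumption I picked, not a figure from the paper):

  # How many accelerator chips one satellite bus can power, given its power budget.
  # Assumptions: ~25 kW bus, ~80% of it available to the payload, ~500 W per chip
  # including its share of power conversion and thermal-control overhead.
  def chips_per_satellite(bus_power_w=25_000.0, payload_fraction=0.8, watts_per_chip=500.0):
      return int(bus_power_w * payload_fraction / watts_per_chip)

  print(chips_per_satellite())  # -> 40 chips per satellite under these assumptions
  # The constellation then scales total compute by launching more identical
  # satellites, which is exactly why step 3 says per-satellite thermals don't change.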

--

[0] - Well, some do. Note that fact in the paper.


> This is a solved engineering problem for this kind of mission.

Which mission of this kind exemplifies the solution? Where's the datacenter in the sky to which I can point my telescope?

> Whether they can make it work within the power and budget constraints is the actual challenge, but that's economics.

It's a weird world where economics isn't a fundamental part of engineering. Any engineering proposal has got to include it, all the more so one that has never been done before.


> Which mission of this kind exemplifies the solution? Where's the datacenter in the sky to which I can point my telescope?

Big bunch of satellites communicating with each other?

Starlink.

Specifically Bus F9-2 and Bus F9-3 have PV arrays about the size needed for the upper limit of what I read a single DC rack might use (max 25kW, someone correct me if it is ever higher than that). That's what's being proposed here, making a DC by making each rack its own satellite.

Section 2.1 works out what data link is needed between the satellites, what you can actually get within realistic limitations, and how close the satellites need to be to make this work.
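As a rough check on whether a ~25 kW payload per satellite is plausible from the PV side (a Python sketch; the cell efficiency and loss factor are assumptions, and real arrays also need margin to recharge batteries for eclipse):

  # Solar array area needed to supply a given electrical load in sunlight.
  # Solar constant ~1361 W/m^2 above the atmosphere; 30% cell efficiency and a
  # 0.75 packing/pointing/degradation factor are assumptions.
  SOLAR_CONSTANT = 1361.0  # W/m^2

  def array_area_m2(load_w, cell_eff=0.30, losses=0.75):
      return load_w / (SOLAR_CONSTANT * cell_eff * losses)

  print(f"{array_area_m2(25_000):.0f} m^2 for a 25 kW payload")
  # ~82 m^2 under these assumptions -- large, but the same order of magnitude
  # as the arrays on the bigger Starlink buses mentioned above.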


> single DC rack might use (max 25kW, someone correct me)

25kW? Don't tell me your engineers used that number in their calculations.

Reality:

GB200 NVL72 - 120 kW per rack

GB300 NVL72 - 150 kW per rack

Weight - 3,000–3,500 lbs per rack

Cost of liquid cooling on Earth - $50,000 per rack

By 2027 the new 800V HVDC power architecture will be deployed - 1 MW per rack

I'd never imagined I'd be providing free engineering consultations to billionaires.
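For scale, combining those rack figures with the ~$200/kg launch price quoted from the paper earlier in the thread (a back-of-envelope; this counts only the bare rack's launch mass and ignores the radiators, arrays, and bus needed to actually run it):

  # Launch cost alone for one GB300-class rack at the projected future price.
  LBS_TO_KG = 0.4536
  rack_mass_kg = 3_500 * LBS_TO_KG   # ~1,588 kg (upper end of the range above)
  launch_price_per_kg = 200.0        # USD/kg, optimistic ~2035 projection
  print(f"~${rack_mass_kg * launch_price_per_kg:,.0f} just to launch the bare rack")
  # ~$318,000 -- versus the ~$50,000 quoted above for liquid cooling on Earth,
  # before counting any of the supporting spacecraft mass.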


> 25kW? Don't tell me your engineers used that number in their calculations.

Thanks!

There's a reason why I phrased it as uncertainly as I did:

I (ironically given context) googled something like "maximum power draw of data center rack" and the first few results all agreed with each other at 25 kW.

> I'd never imagined I'd be providing free engineering consultations to billionaires.

I never thought I'd be mistaken for a billionaire, but there we go.

I'm kinda curious now, which billionaire did you mistake me for?



