
> Fundamentally, it is, just in the form of a swarm. With added challenges!

Right, in the same sense that the existing Starlink constellation is a Death Star.

This paper does not describe a giant space station. It describes a couple dozen satellites flying in formation, using gravity and optics to get extra bandwidth for inter-satellite links. The example they gave uses 81 satellites, which is a number made trivial by Starlink (it's also in the blog post itself, so no "not clicking through to the paper" excuses here!).

(The gist: the paper seems to be describing a small constellation as a useful compute unit that can be scaled indefinitely - basically replicating the scaling design used in terrestrial ML data centers.)



> Right, in the same sense that the existing Starlink constellation is a Death Star.

"The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity."

This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
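
For intuition on where that oscillation comes from: below is a minimal sketch (mine, not from the paper; the ~650 km altitude and the initial offsets are made-up numbers) of two satellites in a close formation, using the standard Clohessy-Wiltshire equations for relative motion near a circular reference orbit. Their separation oscillates purely under Earth's gravity, no thrusting involved.

    # Minimal sketch: two satellites of a close formation, modelled with the
    # Clohessy-Wiltshire (Hill) equations about a circular reference orbit.
    # Illustrative assumption only, not the paper's actual formation design.
    import numpy as np

    MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
    R_REF = 6371e3 + 650e3       # assumed ~650 km circular reference orbit, m
    n = np.sqrt(MU / R_REF**3)   # mean motion, rad/s

    def cw_position(t, x0, y0, z0, vx0, vy0, vz0):
        """Closed-form CW solution: position relative to the reference point
        at time t (x radial, y along-track, z cross-track)."""
        s, c = np.sin(n * t), np.cos(n * t)
        x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
        y = (6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0
             + (4 * s - 3 * n * t) / n * vy0)
        z = c * z0 + (s / n) * vz0
        return np.array([x, y, z])

    def bounded(x0, y0):
        """Drift-free relative orbit: vy0 = -2*n*x0 cancels along-track drift."""
        return (x0, y0, 0.0, 0.0, -2.0 * n * x0, 0.0)

    sat_a = bounded(50.0, 0.0)    # ~50 m radial offset from cluster centre
    sat_b = bounded(20.0, 120.0)  # different offsets, so the separation breathes

    period = 2 * np.pi / n
    for t in np.linspace(0.0, period, 9):
        d = np.linalg.norm(cw_position(t, *sat_a) - cw_position(t, *sat_b))
        print(f"t = {t / 60:6.1f} min   separation = {d:6.1f} m")

With these hand-picked offsets the separation swings between roughly 60 m and 180 m over one orbit - the same "oscillating under Earth's gravity" behaviour the quoted passage describes, just not the paper's exact numbers.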

> The example they gave uses 81 satellites…

Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.


> This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)

Irrelevant for spacecraft dynamics or for heat management. The problems of keeping satellites from colliding and of shedding the watts the craft absorbs from the Sun are independent of the compute that's done by the payload. That's practically the basic tenet of digital computing: the physics doesn't care what the payload is computing.

> Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.

A data center is made of multiples of some compute unit. This paper is describing a single compute unit that makes sense for machine learning work.


> The problems of keeping satellites from colliding and of shedding the watts the craft absorbs from the Sun are independent of the compute that's done by the payload.

The more compute you do, the more heat you generate.

> A data center is made of multiples of some compute unit.

And, thus, we wind up at the "how do we cool and maintain a giant space station?" question again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.


> The more compute you do, the more heat you generate.

Yes, and yet I still fail to see the point you're making here.

Max power in space is either "we have x kWt of RTG, therefore our radiators are y m^2" or "we have x m^2 of nearly-black PV, therefore our radiators are y m^2".

Even for cases where the thermal equilibrium has to be human-liveable, like the ISS, this isn't hard to achieve. Computer systems can run hotter, and therefore need smaller radiators for the same power draw, which makes them easier still.
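
To put rough numbers on that (my own back-of-envelope, not figures from the paper or the blog post): radiator sizing is just the Stefan-Boltzmann law, so letting the electronics run hotter shrinks the radiator as T^4. The 100 kW heat load, emissivity and temperatures below are assumptions for illustration.

    # Back-of-envelope radiator sizing: area needed to reject a given heat load
    # by thermal radiation alone (idealized cold sink; sunlight and Earth-shine
    # on the radiator ignored). All input numbers are illustrative assumptions.
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

    def radiator_area(power_w, t_radiator_k, emissivity=0.9, sides=2):
        """m^2 of radiator needed to shed power_w at temperature t_radiator_k."""
        flux_per_side = emissivity * SIGMA * t_radiator_k**4  # W/m^2
        return power_w / (sides * flux_per_side)

    heat_load = 100e3  # assume a 100 kW compute payload
    for temp_k in (300.0, 350.0, 400.0):  # "human-liveable" vs hotter electronics
        print(f"T = {temp_k:3.0f} K -> {radiator_area(heat_load, temp_k):6.1f} m^2")

With those assumptions you need roughly 120 m^2 of two-sided radiator at 300 K but only about 40 m^2 at 400 K, which is the "computers can run hotter, so their radiators are smaller" point.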

> And, thus, we wind up at the "how do we cool and maintain a giant space station?" question again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.

What you're doing here is like saying "cars don't work for a city because a city needs to move a million people each day, and a million-seat car will break the roads": i.e. scaling up the wrong thing.

The (potential, if it even works) scale-up here is "we went from n=1 cluster containing m=81 satellites, to n=10,000 clusters each containing m=[perhaps still 81] satellites".

I am still somewhat skeptical that this moon-shot will be cost-effective, but thermal management isn't why; the main limitation is whether Musk (or anyone else) can actually get launch costs down to a few hundred USD per kg on that timescale.



