Hacker News | sam's comments

This is mistaken. In space a radiator can radiate to cold (2.7K) deep space. A thermos on earth cannot. The temperature difference between the inner and outer walls of the thermos is much lower and it’s the temperature difference which determines the rate of cooling.

"Radiate" is exactly what you have to do, and that is extremely slow. You need a huge area to dissipate the amount of power you are talking about.

Stupid question: Why not transfer the heat to some kind of material and then jettison that out to space? Maybe something that can burn itself out and leave little material behind?

How many times can you do that?

Consider your own computer... how often does it get hot under a regular load and the fans kick on? That "fans kick on" is transferring the heat to air and jettisoning it into the room... and you're dealing with 100 watts there. Scale that up to kilowatts that are always running.

There is a lot of energy that is being consumed for computation and being converted into heat.

The other part of that is... it's a lot easier to do that "transfer heat into some other material and jettison it" on earth, without having to ship the rack into space and also deal with the additional mechanics of getting rid of hot things. You've got advantages like "cold things sink in gravity" and "you can push heat around and sink it into other things (like the phase change of water)" and "you don't need to be sitting on top of a power plant in order to use the power."


Basically you concentrate the heat into a high-emissivity, high-temperature material that's facing deep space and is shaded. Radiators get dramatically smaller as temperature goes up because radiation scales as T⁴ (Stefan–Boltzmann). There are many cases in space where you need to radiate heat - see Kerbal Space Program.
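
To put rough numbers on that scaling, here's a quick sketch (my own illustrative assumptions, not from any real design: an ideal one-sided radiator with emissivity 1, rejecting 1 MW to ~0 K deep space, ignoring absorbed sunlight):

    # Required radiator area vs. temperature (Stefan-Boltzmann)
    SIGMA = 5.670e-8   # W/m^2/K^4, Stefan-Boltzmann constant
    P = 1e6            # W, assumed heat load to reject

    for T in (300, 600, 1200):              # radiator temperature, K
        area = P / (SIGMA * T**4)           # m^2 needed at emissivity 1
        print(f"{T} K -> {area:,.0f} m^2")  # prints ~2,177 / ~136 / ~9 m^2

Each doubling of temperature cuts the required area by 16x.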

"High emissivity, high temperature" sounds good on paper, but to create that temperature gradient within your spacecraft the way you want costs a lot of energy. What you actually do is add a shit load of surface area to your spacecraft, give that whole thing a coating that improves its emissivity, and try your hardest to minimize the thermal gradient from the heat source (the hot part) throughout the radiator. Emissivity isn't going past 1 in that equation, and you're going to have a very hard time getting your radiator to be hotter than your heat source.

Note that KSP is a game that fictionalizes a lot of things, and sizes of solar panels and radiators are one of those things.


I’m not sure I understand why creating the gradient is hard - use a phase-transitioning heat pump feeding a high-surface-area radiator. The radiator doesn’t have to be hotter than the heat source; the radiator just has to be hot. And given that we are talking about a space data center, you can certainly use the heat pump to make the radiator much hotter than any single GPU, and even use the energy from the heat cycle to power the pumps. Though I imagine in such a data center the power draw of the heat pump would be tiny compared to the GPUs.

To be clear, I’m not advocating KSP as a reality simulator, or claiming that data centers in space aren’t totally bonkers. However, the reality is that the hotter the radiator, the smaller the surface area needed for purely radiative dissipation of heat.


I am referring to "using a heat pump to make the radiator hotter than the GPU" as "creating a thermal gradient." No matter the technology, moving heat like this is always pretty expensive in power terms, and the price goes way up if you want the radiator hotter than the thing it's cooling.

Can you point to a terrestrial system similar to what you are proposing? Liquid cooling and phase change cooling in computers always has a radiator that is cooler than the component it is chilling.

You can do this in theory, but it takes so much power that you are better off with some heat pumping to much bigger passive radiators that are cooler than your silicon (like everything else in space).
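
To put a rough lower bound on that cost, here's an idealized Carnot sketch (the numbers are made up for illustration: 100 kW of chips held at 350 K; real heat pumps fall well short of Carnot, so actual pump power would be higher):

    # Minimum (Carnot) work to pump heat from the chips up to a hotter radiator
    T_cold = 350.0   # K, assumed coolant temperature at the chips
    Q_cold = 100e3   # W, assumed heat load removed from the chips

    for T_hot in (450.0, 600.0, 1000.0):   # candidate radiator temperatures, K
        cop = T_cold / (T_hot - T_cold)    # Carnot COP for refrigeration
        work = Q_cold / cop                # W of pump input power, at minimum
        total = Q_cold + work              # W the radiator must actually reject
        print(f"{T_hot:.0f} K: pump >= {work/1e3:.0f} kW, reject {total/1e3:.0f} kW")

At 1000 K the pump alone needs at least ~186 kW to move 100 kW of heat, and every watt of pump power has to come from extra solar panels and must itself be rejected as heat.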


Yah, but the key is that it’s not the power draw that’s the issue; it’s the dissipation of thermal energy through pure radiation. The heat of the radiator is really important because it reduces the required surface area immensely as the temperature scales up.

However, the radiators you’re discussing are not purely radiative. They transfer most of their heat to some other material, like forced air. This is why they are cooler - they aren’t relying on the heat of the material to radiate rapidly enough.

I would note an obvious terrestrial example, though: a home heat pump. The typical radiator is actually hotter than the home itself, and especially hotter than the heads and the material being circulated. Another is any vapor-compression refrigerator, where the coils are much hotter than the refrigerated space. Peltier coolers even more so: you can freeze the nitrogen in the air with a Peltier tower, but the hot surface is intensely hot, and unless you can move the heat off it rapidly the Peltier effect collapses. (I went through a period of trying to freeze air at home for fun, so there you go.)

For radiation of heat the equation is

    P = εσAT⁴

where P = radiated power, A = surface area, T = absolute temperature (in kelvin), ε = emissivity, and σ = the Stefan–Boltzmann constant.

This means the radiated power increases as the fourth power of the temperature. This is a dramatic amount of variance as it scales: if you can expend the power to double the radiator's temperature, it emits 16x the heat, so you can use much less mass and surface area.
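
Concretely, for any starting temperature T:

    P(2T) / P(T) = εσA(2T)⁴ / (εσAT⁴) = 2⁴ = 16

so a radiator running at, say, 600 K instead of 300 K can be roughly 16x smaller for the same rejected power (holding emissivity constant).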

This is why space-based nuclear reactors invariably use high-temperature radiators. The ideal radiators are effectively carbon radiators, in that they have nearly perfect emissivity and extraordinarily high temperature tolerances, and even get harder at very high temperatures. They’re just delicate and hard to manufacture. This is very different from conduction-based radiators, where metals are ideal.


Making your radiator hotter than the thing you're pulling heat out of is very, very expensive in energy terms. This is why home AC is so expensive and why nobody uses systems like this to cool computers. All that energy has to come from a solar panel you fly, too, so you're not saving mass by doing this. You're just shifting it from cooling to power. If you need 200W to cool 100W of compute, you're tripling the amount of power you need to do that work.

Also, Peltiers are less energy-efficient than compressors. That is why no home AC uses a Peltier.


I have a vacuum thermos. I've been unimpressed with its ability to keep coffee hot.

In the context implied above, it is the ratio of fusion energy released to laser energy on target, or to the laser energy crossing the vacuum vessel boundary (they are the same in this case). So it would have been more precise to say "target gain" or "scientific gain".


We are careful to always specify what kind of “breakeven” or “gain” is being referred to on all graphs and statements about the performance of specific experiments in this paper.

Energy gain (in the general sense) is the ratio of fusion energy released to the incoming heating energy crossing some closed boundary.

The right question to ask is then: “what is the closed boundary across which the heating energy is being measured?” For scientific gain, this boundary is the vacuum vessel wall. For facility gain, it is the facility boundary.
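
As a concrete sketch (the numbers here are illustrative only, loosely at the scale of the December 2022 NIF shot; see the paper for the actual figures):

    # Same experiment, two different closed boundaries
    E_fusion = 3.15e6      # J, fusion energy released (illustrative)
    E_on_target = 2.05e6   # J, heating energy crossing the vacuum vessel wall
    E_facility = 300e6     # J, energy crossing the facility boundary (rough)

    Q_sci = E_fusion / E_on_target   # scientific gain, ~1.5
    Q_fac = E_fusion / E_facility    # facility gain, ~0.01

The same shot can have Q_sci > 1 while Q_fac << 1; the choice of boundary does all the work in the definition.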


It’s the ratio of fusion energy released to heating energy crossing the vacuum vessel boundary.


Author here - some other posters have touched on the reasons. Much of the focus on high-performing tokamaks shifted to ITER in recent decades, though this is now changing as fusion companies utilize new enabling technologies like high-temperature superconductors.

Additionally, the final plot of scientific gain (Qsci) vs. time effectively requires the use of deuterium-tritium fuel to generate the amounts of fusion energy needed for an appreciable level of Qsci. The number of tokamak experiments utilizing deuterium-tritium is small.


Thanks a lot for this research. Seeing the comments here, I think it's really important to make breakthroughs and progress more visible to the public. Otherwise the impression that "we're always 50 years away" stays strong.

Here was my completely layman attempt to forecast fusion viability a few months ago. https://news.ycombinator.com/item?id=42791997 (in short: 2037)

Is there some semblance of realism there you think?


In the 2037 timeframe, modeling trends doesn’t matter as much as looking at the actual players. I think the odds are good because you have at least four very well funded groups shooting to have something before 2035: commercial groups including CFS, Helion, and TAE, plus the efforts at ITER. Maybe more, each with generally independent approaches. I think scientific viability will be proven by 2035, but economic viability could take much longer.


If ITER is where it's at, why are we building commercial-scale tokamaks? https://en.wikipedia.org/wiki/Commonwealth_Fusion_Systems


Companies like Commonwealth Fusion Systems are an example of those utilizing high-temperature superconductors which did not exist commercially when ITER was being designed.


ITER uses HTSs, just not for the coils:

> The design operating current of the feeders is 68 kA. High temperature superconductor (HTS) current leads transmit the high-power currents from the room-temperature power supplies to the low-temperature superconducting coils at 4 K (-269°C) with minimum heat load.

Source: https://www.iter.org/machine/magnets


HTS current feeds are a good idea (we also use them at CFS, my employer: https://www.instagram.com/p/DJXInDUuDAK/). It's HTS in the coils (electromagnets) that enables higher magnetic fields and thus a more compact tokamak.


This comment thread will go down in history along with the famous HN Dropbox thread.

This thing is incredible and will eventually crush the iPhone. Solves iPhone addiction while retaining the utility of an iPhone? Solid gold.


The thing is, most people don't actually want to solve their phone addiction even if they say they do.

In reality, they want to read news while waiting at a doctor's office, play games while they take the subway, and see Instagram updates from friends throughout the day.

And if you already want a less capable device, it's called an Apple Watch, but it comes with a little screen that is way more useful than laser projection, and will soon surely have a powerful LLM it can access. (And paired with AirPods it does a much better job preserving your audio privacy.)

So it's hard to see how this is going to succeed, when Apple can just copy the good part (LLM) as part of the Watch.


IMO "Solves iPhone addiction" is more or less a rephrasing of "people will quickly get bored of this".

It's just a smartphone, except you can't run third-party software, can't directly interface with it, and can't connect it to other machines. And instead of holding an N-million-pixel, M-million-colour, extremely high-contrast display directly in your hand, you have to indirectly project (meaning extremely LOW contrast) a single-colour display onto your hand from a projector that's shaking around, clipped to your clothes.

The one hypothetical upside I can see to this tech is that it might shave off the two-second delay in looking at my phone caused by putting my hand in my pocket before raising my hand, but you could say that goes against the goal of solving phone addiction.


Apple Watch? Cellular mode allows this, has Siri built in, and can handle calling/messaging/etc. People don't want to replace their phones, though.


> along with the famous HN Dropbox thread.

Most people saw the utility and the use cases of Dropbox even when it launched.

What's the utility and use case of this? What problem does it solve?


> iPhone addiction

This is not a thing. "Screens" aren't 'separating us from one another' or 'distracting us'; that's fuzzy verbalistic nonsense, made up by marketers who want to sell you non-phones and by bloviating op-ed columnists who don't have a clue. It's so ridiculous that everyone has seen the memes debunking it.[0][1]

True invasiveness is expressed as: "how long does it take me to do this thing I want to do?" In other words, you need a human-computer interface that reduces friction as close to zero as possible. The phone won because it's the best at that. The "pin" is orders of magnitude worse, so it won't catch on.

[0]: https://xkcd.com/610/ [1]: https://imgflip.com/i/1swr7j


Maybe in the long-term view - people correctly identifying that Dropbox had no differentiator (to quote Steve: "this is a feature, not a product").


We only included projected values of SPARC and ITER because they're the only ones whose physics basis has been published in the peer-reviewed literature. We would certainly like to include other devices - hopefully this will encourage more teams to publish results in the literature from which we can extract the required parameters.


Thanks. We published a number of prepublication versions over the past year on arXiv to gather feedback from the physics community before submitting to Physics of Plasmas last December. The version linked to above (and in the tweet) is the peer-reviewed version, which was published yesterday.


My mistake! Last December was submission but publication was yesterday. Congrats and thanks for making it public access!


Author here. The black curve represents the hot-spot ignition condition for a laser inertial confinement fusion (ICF) experiment (like the NIF). This means that during the short period of inertial confinement, the self-heating exceeds all losses in the hot spot, leading to a runaway increase in temperature. It only applies to the black 'x' points.

The Q_sci^MCF contours correspond to scientific energy gain (ratio of fusion power to heating power crossing the vacuum vessel boundary) for a magnetic confinement experiment.

For ICF we can't draw similar Q_sci^ICF contours because the total fusion energy released depends on the degree to which the ignited hot spot propagates a burn into the surrounding cold fuel. And that depends on other variables, like the symmetry of the implosion, which are not captured in this plot.

If you're curious to read more about this check out Section III.F of the linked paper (pp.10-11).


Surprising to see so much negativity here. Limiting to a small number of headlines is a useful mechanic. I can see this going in a number of interesting directions.


Agree. The places I go on the web have become more and more centralized/limited. I think projects like this that help to surface and aggregate interesting content from the web (which is really what I come to HN to find) are great.


Sure, and once it goes in an interesting direction, I think people would be more positive. This idea, poorly executed, undercuts its own claimed value, wasting time instead of saving time.

Pointing that out is useful feedback, and I'm sure it would have been done in a much harsher way 10 years ago.


Yes, but 65 repetitive headlines + 'fun facts' really isn't a small number, and it's also a far cry from the advertised '5 headlines'.

