Hacker News | OneDeuxTriSeiGo's comments

Yeah, on my SF86 I listed all the dumb shit I did, and the investigator called, obviously somewhat concerned but receptive. We went through each item, and his key point was "do you understand you can't do that?" As long as you answered yes, documented it on the form ahead of time, and obviously weren't lying through your teeth, pretty much anything outside the last 3-5 years was immediately forgiven.

Some security officers are really touchy about these kinds of things and will tell you to exclude or lie, but investigators pretty much never care what you did as long as it's obvious you don't plan on doing those kinds of things again or being an active problem.

They just want it for their records and they want you to be an open book such that they don't feel you are concealing anything problematic.


  > Some security officers are really touchy on these kinds of things and will tell you to exclude or lie
But this is the problem. It's good that the investigators don't care, but the security officers are the ones you meet and talk with. They set the tone. Their doing this gives people the impression that investigators will care. And frankly, some do. I don't think we can dismiss the security officer's role here.

Lee Pace is such a phenomenal actor. He really transforms the roles he takes and makes something special out of every show he's in.

He's also fantastic in Apple TV's Foundation and it's been really impressive seeing his range put on display there.


Dig up Wonderfalls, his first series, and Pushing Daisies. They form the Lee Pace Triumvirate (with Halt and Catch Fire) as far as I'm concerned.

No it's not? It's simply an addressing model and interface. Sure you could use a fixed or centralised store but you could also use IPFS for example.

> Why is it likely? We already have a lot of MRI data. There are already a lot of incidental findings. It might also be an issue of the MRI not being able to produce enough information to discriminate.

This is the main reason. Well, technically the opposite of the main reason, but it amounts to the same thing. MRIs are extremely high fidelity nowadays, and as a result they're really, really hard to read. Every person is different, and there are a lot of variations and weird quirks. You get all the data rather than clearly identified problem areas like you get with, say, a CT with contrast.

That's actually exactly why it's important to have MRIs more frequently to be able to establish baselines and identify trends as they develop.


> That's actually exactly why it's important to have MRIs more frequently to be able to establish baselines and identify trends as they develop.

How? How do you establish baselines? How do you build a classification of incidental findings? It's very possible that you'll find a lot of types and not a lot of representatives of each type. And then you have to correlate that to actual clinical results, but the population will be so heterogeneous that it'll be really hard to find an actual result.

It's not just "let's throw more data at the problem".


When I say establish baselines what I mean is to establish baselines for the individual.

If you have records of the locations and sizes of various atypical structures and forms throughout the body going back for years and all of a sudden one of them starts changing in size at a rate disproportionate to its history, that's probably cause to dig a little deeper.

It's certainly not "throw more data at the problem". Instead it's about giving the data a time axis with some decent fidelity.
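As a toy illustration of that per-individual baseline idea (entirely hypothetical sizes and an arbitrary threshold — this is not a clinical method), flagging a tracked structure whose latest change is out of line with its own history might look like:

```python
# Entirely hypothetical sketch, not a clinical method. Sizes are in mm,
# one measurement per (e.g. yearly) scan; the threshold is arbitrary.
def flag_disproportionate_growth(sizes, threshold=3.0):
    """True if the latest size change exceeds the mean historical change
    by more than `threshold` standard deviations."""
    deltas = [b - a for a, b in zip(sizes, sizes[1:])]
    history, latest = deltas[:-1], deltas[-1]
    mean = sum(history) / len(history)
    var = sum((d - mean) ** 2 for d in history) / len(history)
    std = max(var ** 0.5, 0.1)  # floor so pure measurement jitter can't shrink to zero
    return latest > mean + threshold * std

print(flag_disproportionate_growth([4.0, 4.1, 4.0, 4.1, 4.2]))  # benign jitter -> False
print(flag_disproportionate_growth([4.0, 4.1, 4.0, 4.1, 6.5]))  # sudden jump -> True
```

The point is just that the comparison is against the structure's own history, not against some population-wide atlas of what's "normal".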


> and all of a sudden one of them starts changing in size at a rate disproportionate to its history, that's probably cause to dig a little deeper.

That sentence is doing a lot of heavy lifting.

- What's "disproportionate to its history"? Obviously something going from 1mm to 10cm is worth checking out, but what about something going from 1mm to 2mm? Might be a tumor, might be that the position is just slightly different.

- What about other, less measurable factors? For example, border features. Those are harder to measure, and things like movement or different machines can change how the borders of a feature look. How do you know what's a baseline and what's not?

- How frequently do you run these scans? It's likely that if something "starts changing in size" suddenly it will start giving symptoms before you have your next scheduled scan.

> It's certainly not "throw more data at the problem". Instead it's about giving the data a time axis with some decent fidelity.

It's definitely throwing more data at the problem, and you're assuming that it's viable to give "a time axis with decent fidelity". MRIs are much more complicated to interpret than people think, and screening is a much harder problem too. There are a lot of studies testing MRI imaging as a screening technique (among other techniques) and they don't always show an increase in survival rates.


Lead is still an order of magnitude better than the alternatives for a lot of things. For example:

- Lead is ductile, malleable, and blocks radiation very effectively, making it far easier to work with than other materials as a radiation barrier.

- Lead is one of the few materials with no resonant frequencies, making it an extremely effective acoustic and vibration damper.

- Lead and lead compounds are far cheaper, less toxic, and more sustainable than alternative catalysts for chemical reactions. So while lead is problematic, for industrial processes it's often fairly harmless and mundane in comparison.


How does lead have no resonant frequencies?

It's because lead has one of the lowest elastic moduli of the elements while also having very high density. So at room temperature it doesn't have any resonant frequencies, and it has so much mass that it can effectively damp frequencies travelling through it. You can see this with the lead bell experiment.

The only metals with a lower E are far more expensive or are extremely difficult to work with because they like to catch fire or explode when exposed to water or oxygen or react with any other element they touch. Or they are radioactive.

The "non-problematic" elements better than lead are indium, thallium, and selenium. Selenium has toxicity issues (albeit fewer than lead), while indium and thallium both have low melting points and soften or lose their form even at low industrial temperatures. And all three are far rarer than lead, often produced only as byproducts of mining lead and occasionally other metals.

And there are non-metal options that don't resonate near room temperature like many of the plastics and rubbers but they lack the density/mass to effectively damp vibrations from neighboring components.
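To put rough numbers on the low-modulus/high-density point: the fundamental longitudinal frequency of a bar scales with sqrt(E/rho), so lead's modes sit several times lower than steel's before its internal damping even comes into play. A quick sketch (material constants below are rough textbook values, i.e. assumptions):

```python
import math

# Rough textbook-ish constants (assumptions): Young's modulus E in Pa,
# density rho in kg/m^3.
materials = {
    "steel": {"E": 200e9, "rho": 7850},
    "lead":  {"E": 16e9,  "rho": 11340},
}

L = 0.5  # m, bar length; fundamental longitudinal mode of a free-free bar
for name, m in materials.items():
    c = math.sqrt(m["E"] / m["rho"])  # bar speed of sound
    f1 = c / (2 * L)                  # fundamental frequency, f1 = c / 2L
    print(f"{name}: c = {c:.0f} m/s, f1 = {f1:.0f} Hz")
```

Steel comes out around 5000 m/s and lead around 1200 m/s, i.e. a lead bar of the same size "rings" at a fraction of the frequency, and its high damping then kills what little ringing there is.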

-------

https://www.youtube.com/watch?v=SFGlTGMlyqM

https://sciencedemonstrations.fas.harvard.edu/presentations/...


Studies have all come out clean on pacemakers and mmWave. No detectable interference in the hardware or on an EKG while in a mmWave scanner.

I could imagine other conditions potentially, but pacemakers have been ruled a non-issue for mmWave by academic studies (albeit I can understand still exercising caution despite that).


Have they done thorough, decades-long studies on millimeter-wave machines to ensure they have absolutely no long-term adverse health effects?


Tbh I'm not sure, but they've done accelerated-dosage testing to simulate long-term use by repeatedly exposing people to the machine over a more compressed period of time.

But mmWave really just is not dangerous. Current generation 5G cellular and WiFi standards are mmWave and they are just as harmless.

Molecular damage only starts showing up in the THF/terahertz band, but mmWave is in the EHF band and has more than 10x the wavelength of THF (i.e. it is far longer/gentler than THF). In a very real sense, mmWave can't even interact with most of the molecules in your body.

mmWave can interact with the water in your body, but at the levels it's being used it's only really useful for seeing the water. You'd need emissions orders of magnitude more powerful than what these scanners use to actually cause damage at that frequency.

i.e. It's the difference between using the flashlight on your phone to see in the dark and using the concentrated light from solar-thermal heliostats to boil water or heat molten salt. No matter how hard you try, your flashlight is never gonna boil water.


The distinction is that what they are doing for Webb is trying to dissipate small amounts of heat that would warm up sensors past cryogenic temperatures.

Like on the order of tens or hundreds of watts, but at -100C.

Dissipating heat for an AI datacenter is a different game. A single AI inference or training rack is going to be putting out somewhere around 100kW of waste heat. Temps don't have to be cryogenic but it's the difference between chiselling a marble or jade statue and excavating a quarry.


got it - thanks


It's worth noting that the EATCS can at maximum dissipate 70kW of waste heat, and the EETCS (the original heat-exchange system) can only dissipate another 14kW.

That is together less than a single AI inference rack.

And to achieve that, the EATCS needs 6 radiator ORUs, each spanning 23 meters by 11 meters with a mass of 1100 kg. That's 1500 square meters and six and a half metric tons before you factor in any of the refrigerant, pumps, support beams, valve assemblies, rotary joints, or cold-side heat exchangers, all of which together will probably double the mass you need to put in orbit.

There is no situation where that makes sense.
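A quick back-of-envelope with the figures above (the 100kW per-rack load is my assumption):

```python
# ISS thermal-control figures as quoted above; 100 kW per rack is assumed.
radiators = 6
area = radiators * 23 * 11    # m^2 of radiator panel
dry_mass = radiators * 1100   # kg, radiator ORUs alone
heat_rejected = 70e3 + 14e3   # W, main + original heat-rejection systems

rack_heat = 100e3             # W, assumed per AI rack
print(f"area: {area} m^2, dry mass: {dry_mass/1000:.1f} t")
print(f"rejects {heat_rejected/1e3:.0f} kW vs {rack_heat/1e3:.0f} kW per rack")
```

1518 square meters and 6.6 dry tons of radiator to reject 84kW, i.e. still less than one rack.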

-----------

Manufacturing in space makes sense (all kinds of techniques are theoretically easier in zero G and hard vacuum).

Mining asteroids, etc makes sense.

Datacenters in space for people on earth? That's just stupid.


> Datacenters in space for people on earth? That's just stupid.

But it completes the vision of our ancestors who thought God lived in the sky.

So "Lord, give me a sign from the heavens" may take on a whole new meaning.


Your calculations are based on cooling to 20°C, which is far harder than cooling to 70°C, where GPUs are happy. Radiators would be roughly 1/3 the size of the panels at 70°C.
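A rough Stefan-Boltzmann sketch of that temperature effect (ideal one-sided black-body radiator facing deep space, emissivity 1, no solar load — all simplifying assumptions):

```python
# Area to radiate 100 kW to deep space at two radiator temperatures,
# ideal one-sided black body (emissivity 1, no solar load).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
P = 100e3         # W to reject (one rack-scale load, assumed)

areas = {}
for celsius in (20, 70):
    T = celsius + 273.15                 # K
    areas[celsius] = P / (SIGMA * T**4)  # radiated power scales as T^4
    print(f"{celsius} C -> {areas[celsius]:.0f} m^2")
```

Under these idealised assumptions the 70°C radiator needs roughly half the area of the 20°C one; real designs with sun load and finite emissivity will differ.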


I'm a total noob on this.

I get that vacuum is a really good insulator, which is why we use it to insulate our drinks bottles. So disposing of the heat is a problem.

Can't we use it, though? Like, I dunno, to take a really stupid example: boil water and run a turbine with the waste heat? Convert some of it back to electricity?


It's a good question, but in a closed system (like you have in space) the heat from the turbine loop has to go somewhere in order to be useful. Say you have a coolant loop for the GPUs (maybe glycol). You take the hot glycol, run it through your heat exchanger, and heat up your cool, pressurized ammonia. The ammonia gets hot (and the glycol is now cool, so send it back). You then send the ammonia through the turbine, where it evaporates as it expands and loses pressure, spinning the turbine.

But now what? You have warm, vaporized, low-pressure ammonia, and you need to cool it down to start over. Once it's cool you can pressurize it again so you can heat it up and reuse it, but you have to cool it first, and that's the crux of the issue.

The problem is essentially that everything you do releases waste heat, so you either reject it, or everything continues to heat up until something breaks. Developing useful work from that heat only helps if it helps reject it, but it's more efficient to reject it immediately.

A better, more direct way to think about this might be the Seebeck effect. If you have a giant radiator, you could put a Peltier module between it and your GPU cooling loop and generate a little electricity, but that would necessarily also create some waste heat, so you're better off cooling the GPU directly.
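The thermodynamic ceiling on recovering work from that heat can be sketched with the Carnot limit (temperatures below are illustrative assumptions, not real loop specs):

```python
# Carnot ceiling on recovering work from GPU waste heat in a closed loop.
# Temperatures are illustrative assumptions.
T_hot = 350.0    # K, GPU coolant loop
T_cold = 300.0   # K, radiator / condenser side

eta = 1 - T_cold / T_hot   # Carnot limit on conversion efficiency
Q_in = 100e3               # W of GPU waste heat
work = eta * Q_in          # best-case recovered electricity
rejected = Q_in - work     # heat that must still be radiated away
print(f"Carnot efficiency: {eta:.1%}")
print(f"recovered: {work/1e3:.1f} kW, still rejected: {rejected/1e3:.1f} kW")
```

Even at this unreachable ideal you still radiate ~86% of the heat, and extracting the work pulls the rejection temperature down, which (by the T^4 law) makes the radiator bigger, not smaller.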


Thanks for the response :)

I think I get it. If we could convert 100% of the waste heat into useful power, then all good. And that would get interesting because it would effectively become "free" compute - you'd put enough power into the system to start it, and then it could continue running on its own waste heat. A perpetual motion machine but for computing.

But we can't do that, because physics. Everything we could do to generate useful energy from waste heat also generates some waste heat that cannot be captured by that same process. So there will always be some waste heat that can't be converted to useful energy, which needs to be ejected or it accumulates and everything melts.


What do you do with the steam afterwards? If you eject it, you have to bring lots of water with your spacecraft, and that costs serious money. If you let it condense back into water, all you've done is move some heat around inside the spacecraft, almost certainly creating even more heat in the process.


You can't easily use low grade heat.

However, there are workarounds. People are talking as if the only radiator design is the one on the ISS, but there are other ways to build radiators. It's all about surface area. One approach is to heat up a liquid and then spray it openly into space on a level trajectory towards a collecting dish. Because the liquid is now lots of tiny droplets, the surface area is huge, so they can radiate a lot of heat. You don't need a large amount of material as long as you can scoop up the droplets at the other end of the "pipe" and avoid wasting too much. Maybe small amounts of loss are OK if you have an automated space robot that goes around docking with them and topping them up again.
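The surface-area win is easy to quantify: a sphere's area-to-volume ratio is 3/r, so shrinking the droplet radius 10x gives 10x the radiating area for the same mass of fluid. A sketch (volume and radii are illustrative assumptions):

```python
import math

# Sphere area/volume = 3/r: splitting coolant into smaller droplets
# multiplies radiating area. Volume and radii are illustrative.
volume = 0.1  # m^3 of working fluid

areas = {}
for r in (1e-3, 1e-4):  # droplet radii: 1 mm, 0.1 mm
    n = volume / ((4 / 3) * math.pi * r**3)  # droplet count
    areas[r] = n * 4 * math.pi * r**2        # total surface = 3 * volume / r
    print(f"r = {r*1e3:.1f} mm -> {n:.2e} droplets, {areas[r]:.0f} m^2")
```

For comparison, that 3000 m^2 from 0.1 mm droplets is about twice the panel area of the ISS's entire radiator array, from a tenth of a cubic meter of fluid.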


It's harder to direct waste heat in space when you don't have gravity for convection.


Is this related to the Contracts RFC[1] and associated implementation[2] or is it its own freestanding work unrelated to upstream efforts?

1. https://github.com/NixOS/rfcs/pull/189

2. https://github.com/NixOS/nixpkgs/pull/485453


On this topic of ports/recomps there's also OpenGOAL [1] which is a FOSS desktop native implementation of the GOAL (Game Oriented Assembly Lisp) interpreter [2] used by Naughty Dog to develop a number of their famous PS2 titles.

Since they were able to port the interpreter over, they have been able to rapidly start porting these titles even with a small volunteer team.

1. https://opengoal.dev/

2. https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp


That's incredible, I had no idea Jak & Daxter was written with Emacs as the primary IDE!


Lol yep. Emacs as the IDE, Allegro Common Lisp as the interpreter + HAL implementation, and GOAL itself being a Scheme-like.

Naughty Dog in general was actually a primarily Lisp studio for a long time. It was only in the PS3 era, with Uncharted and The Last of Us, that they switched to C++, because squeezing maximum performance out of a Lisp interpreter environment, with the complexity the Cell processor added, on a time and cash budget simply wasn't feasible for them.

The Crash Bandicoot games were written in GOOL (Game Oriented Object Lisp), which they wrote prior to GOAL and the Jak and Daxter games. GOOL/Lisp was extremely important for the Crash Bandicoot legacy: by writing their own higher-level interpreter, they gave themselves an excuse to throw away the entire standard library Sony gave them and start from scratch. That process let them write a massively more performant stdlib and execution environment, which is why Crash Bandicoot could support game environments an order of magnitude more complex than other games of the time. And of course this allowed them to build in a system for lazily loading the environment as the player progressed through the levels, which firmly cemented Naughty Dog in the video game history books.

Andy Gavin actually has an incredible blog site (including a 13 part series on Crash Bandicoot and a 5 part series on Jak and Daxter) that has over the decades documented the history of their studio's game development process and all the crazy things they did to make their games work on hardware where it really shouldn't have been able to with the tools they were provided.

https://all-things-andy-gavin.com/video-games-archive/


Oh I should issue a minor correction. After talking with some people more familiar with it than me, Crash had a lot written in GOOL but it's not 100% GOOL like how Jak is 100% GOAL.

Instead it's mostly the enemy AI and the like that are built in GOOL, with the game itself in a more traditional systems language (I believe C++). So instead of 100% it's more like 40/60, which tbh is still quite good.


Those blog posts are amazing. I never realised CB was Sony's unofficial flagship mascot to pit against Nintendo's Mario


Yeah I absolutely loved reading through them when I discovered them the first time. An absolute treat and a portal in time.


If PS2Recomp ends up giving us even a fraction of what OpenGOAL unlocked for Jak and Daxter, it could be a huge deal for the rest of the PS2 catalog.


Absolutely. OpenGOAL really just set a new standard for what games preservation looks like.

It's incredible seeing the community take a 25-year-old game, modernise it with accessibility and quality-of-life features, and even create entirely new expansions for it [1].

Like beyond just keeping the game preserved on modern platforms, it's keeping the spirit of the game and the community attached to it alive as well in a way that it can continue to evolve and grow.

I can only pray that PS2Recomp brings even a fraction of that accessibility to other games from this era.

Oh, and a similar project on the Nintendo side of the world is Ship of Harkinian by HarbourMasters [2] and the Zelda RE Team [3]. Zelda RET have decompiled half of the Zelda games and are well on their way to reverse engineering the other half. And HarbourMasters have taken these decomps and used them as the groundwork for comprehensive ports and remasters of the originals, to a degree fans could only dream a first-party remaster or port would attempt.

1. https://www.youtube.com/watch?v=PIrSHz4qw3Q

2. https://www.shipofharkinian.com/

3. https://zelda.deco.mp/

