cossatot's comments

I wonder how much the Van Allen radiation belts are a contributor to the Fermi paradox, i.e. how much they contributed to providing a suitable place for life to originate and flourish, and how rare they are.

The belts themselves are an effect of Earth's magnetic field, which I believe is particularly strong because of flow within the Earth's liquid iron-nickel outer core. (I had long believed that the spinning of the inner core was the primary contributor but given a surface-level skim of the literature that doesn't seem to be the case; convection seems to be more of a driver.)

I think perhaps many otherwise similar planets don't have a liquid iron core, so they may not have the strong radiation belts that shield life from the solar wind. Of course I am not sure what fraction of otherwise-similar planets have liquid iron cores, but Mars for example does not seem to. It is probably a function of the size of a planet (governing the pressure distribution in the interior), the ratio of iron to other elements, the temperature field (a function of the amount of radiogenic elements in the planet and its age), and perhaps other factors. Other planets may not be hot enough to have a liquid iron core at the right pressures, or be too massive (too much pressure) at the right temperatures, etc.


The composition of a planet's atmosphere has to do with the RMS velocity of gas molecules at a given planetary temperature. When this velocity exceeds the escape velocity of the planet, that gas is lost to space.
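This comparison can be sketched numerically. A back-of-the-envelope Python sketch, using standard textbook constants and the common rule of thumb that a gas is retained over geologic time when escape velocity exceeds roughly 6x the RMS thermal speed (the specific gases and temperature are illustrative, not from the thread):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
AMU = 1.66054e-27   # atomic mass unit, kg

def v_rms(molar_mass_amu, temp_k):
    """RMS speed of a gas molecule: sqrt(3kT/m)."""
    return math.sqrt(3 * K_B * temp_k / (molar_mass_amu * AMU))

# Escape velocities in m/s (standard values)
V_ESC = {"Earth": 11186.0, "Mars": 5027.0}

T = 288.0  # K, roughly Earth's mean surface temperature
for gas, mass in [("H2", 2.016), ("N2", 28.014), ("O2", 31.998)]:
    v = v_rms(mass, T)
    for planet, vesc in V_ESC.items():
        # rule of thumb: retained if v_esc > ~6x the RMS thermal speed
        retained = vesc > 6 * v
        print(f"{gas} on {planet}: v_rms={v:.0f} m/s, "
              f"v_esc={vesc:.0f} m/s, retained={retained}")
```

This reproduces the point in the comment: hydrogen's RMS speed (~1.9 km/s at Earth-like temperatures) puts it right at the retention threshold even for Earth, while heavier molecules like N2 sit comfortably below it.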

But there is one more factor. In the absence of a magnetic field, gas molecules can dissociate from being hit with the particles from the solar wind. E.g., water can dissociate into oxygen and hydrogen, and hydrogen having a relatively high RMS velocity readily leaks out to space. The remaining oxygen is too reactive to remain and then forms carbonates in rocks and carbon dioxide in the atmosphere. This is, from what I read, the explanation for the atmospheres of both Mars and Venus, which have only a small to non-existent magnetic field.

So yes, a magnetic field seems to be essential to holding a life-friendly atmosphere.


I think that solar radiation isn't a direct danger to life, as it is quickly blocked by the surface of oceans and land. If an atmosphere turns out to be a major factor in the development of life, the lack of a field could have a bigger impact. That said, atmospheric stripping like what happened to Mars isn't a sure bet: Venus has no internal dynamo but retains a massive atmosphere, despite receiving roughly twice Earth's solar flux.


I've read that the relative contribution of planetary magnetic fields is overstated relative to atmospheres. Earth's atmosphere has the same mass per unit area as a roughly 10 meter-high column of liquid water; not much radiation gets through that much shielding, magnetic field or no. (I think it's solely muons?)
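The "10 meters of water" figure checks out from first principles: the atmosphere's mass per unit area is sea-level pressure divided by g, and the equivalent water-column depth is that divided by water's density. A quick sanity check (all constants are standard values):

```python
P_SEA_LEVEL = 101325.0   # Pa, standard atmospheric pressure
G = 9.80665              # m/s^2, standard gravity
RHO_WATER = 1000.0       # kg/m^3

mass_per_m2 = P_SEA_LEVEL / G              # kg of air above each square meter
water_column_m = mass_per_m2 / RHO_WATER   # equivalent depth of water

print(f"{mass_per_m2:.0f} kg/m^2 ~= {water_column_m:.1f} m of water")
# -> about 10333 kg/m^2, i.e. roughly 10.3 m of water
```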


I'm pretty sure that the belts are a requirement for some types of life to originate and survive. Along with Jupiter helping protect us, our location in the galaxy, etc.


If abiogenesis occurred in the thermal vents deep under the ocean I believe that could have happened without the radiation protection as the water would be more than adequate.


Sure, but small amounts of radiation are beneficial. And those early organisms would eventually have to move to the shallows and land and deal with all the masked radiation at some point. It's all speculation; we really have no idea whether it was vents-first or not.


> And those early organisms would eventually [have] to move to the shallows and land and deal with all the masked radiation at some point.

Do they, though? Why is land the requirement? What's keeping life from, say, evolving to live deep underground? Or in the deep ocean? Both those places are heavily shielded from radiation, and organisms there wouldn't be affected much at all by not having a magnetosphere. Extremophiles on Earth get by just fine hanging around thermal vents, for instance. (Edit: this was mentioned above and I didn't see it - sorry for repeating.)

I think part of the problem with the Fermi paradox is that our base assumptions about what life needs are possibly a bit off. Maybe the fact that we have what we have is, well, quirky, and the fact that we evolved as living creatures that crawl around on the outside of our planet and need really fussy little temperatures to survive is just plain weird in comparison to the rest of the universe.

"Life as we know it" is a lot tougher criterion to meet than "life," I suspect.


Life may be abundant. Intelligent life with technological civilizations is probably not. It took 4 billion years on earth. That’s 1/3 the age of the universe.


These are all fair questions, and to go further, life may not even have required light at all- there are chemoautotrophs living deep in the rock that never see light.

I was going to say "obviously, nothing I said above would apply to life as we don't know it, like on the surface of a neutron star".


Earth is the only rocky planet in the solar system with a magnetic core so that'd be 1/4 at least.


No, because the amount of money you have in a bank account accumulates linearly, so you can only pay up to what you have put in. With insurance, you can get a payout more than what you have contributed up to that point, which is necessary for covering catastrophic damages.


But in this hypothetical the insurance company has perfect information, so they won’t sell you that policy that has to be paid out for more than you’ve contributed.

It’s just a thought experiment, but the more information they have on us, the more relevant it becomes.


They can’t predict the future, they can’t predict exactly when you’ll get in an accident.

Perfect information means they know your risk level to the best possible accuracy, which would really only apply to populations.

Perfect information means they insure 1000 people and predict they'll have one bad accident per year. After ten years they'll have covered ten accidents. All ten could have occurred in the first year and they would still be correct.


> They can’t predict the future, they can’t predict exactly when you’ll get in an accident.

That’s why it’s a thought experiment, and not real life.

> Perfect information means…

No, that’s not what was meant by perfect information in this instance.


Then it’s a boring thought experiment. It hardly deserves the name “experiment”.


Sure. I can’t speak for Scoundreller, but I don’t think it was meant to be particularly interesting, just pointing out to wbl that if insurance was to become perfectly fair, it will also have become pointless.


There’s a pretty big leap between perfectly fair and having perfect knowledge of the future. You can know that a fair coin gives exactly 50% chance to flip heads or tails without knowing what the outcome of the flip is going to be.


The original context was unsafe drivers, with the gist of the response being that when you have everyone paying for exactly the costs they themselves incur, insurance has become meaningless.

It’s hyperbole of sorts, but it highlights that until such a time, raising the cost of insurance doesn’t just punish the people who actually cause the damage.


Except insurance also covers for costs that are not your fault or not anyone’s fault. An insurance premium could be divided into two components: one based on individual risk and the other component based on no-fault risk that applies to pretty much everyone equally. How are you going to get bad drivers to pay for hail damage? How are you going to get bad drivers to pay for a tree falling on your car? How are you going to get bad drivers to pay for an accident caused by a random tire blowout?

The personal risk component can be accounted by “perfect” information and that component can get bigger or smaller depending on your definition of perfect, but there’s another component which can’t.


With interest, the growth is superlinear
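To make the contrast concrete, here is a tiny sketch comparing the same annual deposits with and without compounding (the deposit amount and 5% rate are arbitrary illustrations):

```python
def linear_total(deposit_per_year, years):
    """Savings with no interest: grows linearly with time."""
    return deposit_per_year * years

def compound_total(deposit_per_year, years, rate):
    """Same annual deposits, but the balance compounds each year."""
    total = 0.0
    for _ in range(years):
        total = (total + deposit_per_year) * (1 + rate)
    return total

for years in (10, 20, 40):
    lin = linear_total(1000, years)
    cmp_ = compound_total(1000, years, 0.05)
    print(f"{years:2d} yrs: linear ${lin:>8,.0f}  compounded ${cmp_:>10,.0f}")
```

At 40 years the compounded balance is more than triple the linear one, though it still only pays out what the account has actually accumulated, so the parent's point about insurance stands.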


I've taught myself a lot of things over the course of my life and am a huge proponent of self-education, but a lot of the 'learning how to learn' had to happen in graduate school. There are few environments that provide the right combination of time, close involvement of experts and peers, the latitude to direct your research in a way that you find interesting and useful within the larger constraints of a project, the positive and negative feedback systems, the financial resources from grant funding, etc.

The negative feedback loops are particularly hard to set up by yourself. At some point if you're going to be at the researcher level (construed broadly), you need help from others in developing sufficient depth, rigor and self-criticality. Others can poke holes in your thoughts with an ease that you probably can't muster on your own initially; after you've been through this a number of times you learn your weaknesses and can go through the process more easily. Similarly, the process of preparing for comprehensive exams in a PhD (or medical boards or whatever) is extremely helpful, but not something most people would do by themselves--the motivation to know a field very broadly and deeply, so you can explain all of this on the spot in front of 5 inquisitors, is given a big boost by the consequences of failure, which are not present in the local library.

The time is also a hard part. There are relatively few people with the resources to devote most of their time for learning outside of the classroom. I spent approximately 12,000 hours on my PhD (yes some fraction of that was looking at failblog while hungover etc. but not much). You could string that along at 10 hours a week, 50 weeks a year, which is a 'serious hobby', but it would take you 24 years. How much of the first year are you going to remember 24 years later? How will the field have changed?


Based on the location and focal mechanism of the earthquake (https://earthquake.usgs.gov/earthquakes/eventpage/nc75095651...), this is a strike-slip earthquake on the plate boundary between the Pacific and Gorda/Juan de Fuca plates. Strike-slip earthquakes occur when two plates slide beside each other during an earthquake, usually along a steeply-dipping if not vertical fault. These kinds of earthquakes almost never produce damaging (or even really noticeable) tsunamis because there is no real displacement of sea water by seafloor movement, unlike a thrust or subduction zone earthquake.

The USGS's automated systems calculate the location and focal mechanism/moment tensor pretty much instantly from the seismic network. The system should know that a significant tsunami is unlikely based on the parameters of the earthquake. On the one hand, it's good to be cautious, but on the other hand, a system designed to cry wolf is also self-undermining. Maybe they should have a tiered warning system?


Doesn't any earthquake, regardless of fault type, increase the immediate risk of a submarine landslide?

There are many steep canyons on the Pacific coast, and here is just one example of mass casualties from a tsunami resulting from a submarine landslide triggered by a strike-slip fault earthquake:

Caltech, 2018[1]: "Contrary to Previous Belief, Strike-Slip Faults Can Generate Large Tsunamis"

[1] https://www.caltech.edu/about/news/contrary-to-previous-beli...


Yes, the probability of tsunamigenic landslides does increase during earthquakes, but it's still quite unlikely for an event of this magnitude tens of km from the continental slope; this is why a properly-calibrated tiered system would be better.

The reason that the Palu event is so notable is precisely because it's uncommon. It's also a very different system: the causative fault runs along the axis of a shallow bay that is only a few km across, so even if the landslide had not occurred, rapid movement of the steep, shallow coastlines would surely have generated a smaller tsunami. It's a geographical and tectonic situation in which at least a minor tsunami is expected a priori conditional upon an earthquake, so a warning system would account for that in principle. (In practice there isn't enough time to mobilize, because the tsunami hits while the ground is still shaking.) The bay at Palu is like a somewhat larger Tomales Bay--an earthquake right there is going to make some waves. A very different situation from one far offshore.


> Yes, the probability of tsunamigenic landslides does increase during earthquakes, but it's still quite unlikely for an event of this magnitude tens of km from the continental slope; this is why a properly-calibrated tiered system would be better

There is a tiered system; it's calibrated based on a combination of magnitude and warning time for the initial alerts (updated notices are based on other measurements and observations, but if gathering and analyzing observations before an initial warning doesn't leave time to act on it, it doesn't matter how accurate the warning is).


Those can even happen in bodies of fresh water. There's evidence of one at Lake Tahoe discovered by robot submersibles.


I've been subscribing to tsunami warning system emails since the mid or late 2000s. They send the first email about the earthquake as a warning that something happened. Then, if a tsunami isn't detected, they send an email saying that. If there is a tsunami, they will send the first warning, and as soon as sensors and satellites start to track the wave they will update at intervals with a table of expected arrival times and magnitudes or heights. So, yes, they send a warning that something happened, then they send information if there is a threat.

Here is an example of the first message sent 9 minutes after 2011 Tōhoku earthquake https://imgur.com/a/1mwAKqc.


> The USGS's automated systems calculate the location and focal mechanism/moment tensor pretty much instantly from the seismic network.

According to a USGS guy on the news just now, this isn't true. They know the location, and the magnitude, but the moment tensor takes time. Therefore any ocean earthquake 7.0 or above triggers an immediate tsunami warning.


Looking at https://tsunami.gov/, it seems like they do have a 4 tier system, but they jumped straight to the highest tier in this case?


HN has good SnR generally, but I would default to trusting their automated system more than Random Internet Guy. Even if the warning gets canceled after measurements become available.


I'm a Random Internet Guy who is a professional in the field (earthquake hazards, not tsunamis in particular).


You definitely sound like it. But man, I've met some convincing liars online so I try to be cautious when someone makes claims and I have no proof that they are who they claim to be (especially when they didn't make that claim explicitly, and just sound very intelligent).

It's a complication that will never happen, but sometimes I think it would be cool if HN had a way of authenticating experts and giving them flair. So many legit smart people here.


Based on the travel time map, and that the earthquake event was just 45 mi SW of Eureka, CA (and potentially closer to the coastline elsewhere), it seems that jumping straight to Tsunami Warning is the most appropriate messaging, given that the expected time to impact is quite short?

https://www.tsunami.gov/events/PAAQ/2024/12/05/so1aq0/1/WEAK...

(Some of my job requires me to deal with natural disaster public warnings; but not tsunami specifically)

(I'm late to the party. The warning has since been cancelled: https://www.nbcbayarea.com/news/california-earthquakes-tsuna...)


It's a three-tier system, I was confused when I was looking at it. The fourth item is "threat" which you would think is higher than "warning", but "threat" is only used outside of the US.


The system is training me to ignore it already. I’m in SF and we had a flash flood emergency alert. I never heard of or saw any floods. I could believe a street or two might have had a few inches of water at most. But honestly I’d bet against even that.

And then there’s this tsunami alert today.


> I never heard of or saw any floods.

There was a ton of flooding on flat roads and highways during the last week+ long storm session. I saw several lanes impassable on 101, and several spots in SF where a car could easily have gotten flooded.

All the alerts I got were basically "please don't drive" and not "you're gonna die!", which I think is totally reasonable.


SF topography means some places like the Mission and Dogpatch can have severe floods and the rest be fine.


[flagged]


Google some tsunami simulations of the west coast. Prepare to be surprised.


I really doubt you know what you're talking about.


Flash floods depend on your elevation.

I've gotten the warning and my street is perfectly fine... and then I look at social media and cars on the street are half-submerged just 20 blocks away.

You might not even be aware of elevation differences when they're gradual.


It is also unclear to me how someone is supposed to differentiate a real emergency from an "Extreme threat/danger" and what authority they should look to, besides their common sense.

I guess people can go on twitter and read some random posts.


Flash flood alerts are one of the few that I don't get annoyed about seeing. A big rain up in the mountains can result in a huge chunk of water somewhere downstream a couple of hours later. This significant displacement of time and space between cause and effect warrants caution and notification.


One superfluous tsunami warning after an outlier 7.0 earthquake, and already "the system is training [you] to ignore it"?


This is similar to the "severe weather alert" I just received on my phone when the temperature will range from 47°F to 67°F (8°C to 19°C) in Los Angeles today, December 5, with clear, sunny skies and no noticeable winds.

Of course, when I tap on the notification and open the app, I see that it's actually driven by an air quality alert because the AQI will be 112 (which isn't even that high.)

Come on guys - the dictionary defines weather as, "the state of the atmosphere with respect to heat or cold, wetness or dryness, calm or storm, clearness or cloudiness."


Also confusing when the SF Fire captain is on the radio telling people to evacuate to 100 ft above sea level right after a CalTech seismologist says it is unlikely to cause much of a tsunami due to being a strike-slip earthquake.


Totally off topic, but is your name a reference to the Cossatot River in Arkansas? If so, fantastic choice.


Yes, it's a big one in my formative years as a kayaker. I've used the handle in a lot of places and I think you're the first person that's recognized it.


It's space in the SV context, i.e. the competitive arena or market. Americans dominate the space space.


What, the Vietnam war?


There is a trade-off between the half-life and the intensity of radiation (i.e., the number of particle emissions per unit time), correct? So even if waste products are radioactive for thousands of years, they can be more easily handled than materials with a faster decay rate, even if they need to be stored for longer.
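That trade-off follows directly from the decay law: for a fixed number of atoms, activity A = λN with λ = ln(2)/T½, so activity is inversely proportional to half-life. A short Python sketch comparing two well-known waste nuclides (half-lives are standard published values):

```python
import math

AVOGADRO = 6.02214076e23

def activity_bq(mass_g, molar_mass_g, half_life_s):
    """Specific activity: A = lambda * N, with lambda = ln(2) / T_half."""
    n_atoms = mass_g / molar_mass_g * AVOGADRO
    return math.log(2) / half_life_s * n_atoms

YEAR = 3.15576e7  # seconds per year

# Cs-137 half-life ~30.1 years; Pu-239 half-life ~24,100 years
samples = [
    ("Cs-137", 137, 30.1 * YEAR),
    ("Pu-239", 239, 24100 * YEAR),
]
for name, molar, t_half in samples:
    a = activity_bq(1.0, molar, t_half)
    print(f"1 g of {name}: {a:.2e} Bq")
```

A gram of Cs-137 comes out around a thousand times more active than a gram of Pu-239, precisely because its half-life is about a thousand times shorter; the longer-lived material emits far less per second but must be sequestered far longer.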


Not if total flux at the beginning is still higher - and some elements will get very high flux.


The actual journal article is here (open access): https://www.cambridge.org/core/journals/antiquity/article/ru...

Based on the images, I think that the largest structure is about here: 18.891548°N, 89.323622°W. But you can't see anything in Google Earth (otherwise why would he have had to traverse 16 pages of Google search results).


It was assumed (correctly) by not long after Cowper's time that the strata were of different ages because of the different fossil assemblages in them, but of course no one had any data-derived numbers to put on the different eras.

Furthermore, there is abundant evidence for erosion between many of the layers in the Grand Canyon, and they don't look anything like flood deposits, which are generally chaotic (unsorted, discontinuous bedding, etc.) because of the high energy in the environment during deposition. Paraconformities indicate a cessation of deposition, which is often accompanied by erosion. They are 'para' conformities not because of the gap in time between the layers, but because there wasn't major deformation of the Earth's crust (i.e., substantial tectonic activity) during that time, which would have caused regional tilting of the lower (older) rocks. Throughout much of the middle of the country, there are young sediments deposited in a paraconformable relationship on top of rocks that are 400 million years old (making up the surficial bedrock of the region), because there hasn't been major tectonic activity in the region since those 400 million year old rocks were deposited (and indeed, for close to a billion years before that in much of the midcontinent).


Doyne Farmer's group at Oxford does 'agent-based' economics simulations in this vein. He has a new book called 'Making Sense of Chaos' that describes it.

