
When we're discussing relatively small changes in temperature over very long periods of time, I wonder how confident we are in those measurements? A very regular trend like this is a pretty strong signal that you're at least not overwhelmed by random noise, but I find it very hard to believe that our ability to manufacture very precise and accurate thermometers (measuring hundredths of degrees) has been so consistently good over a span of decades. Am I wrong? I often wonder how much this impacts climate change studies as well. Ocean acidification just "feels" like much sounder science to me because the effects are so much more visible, but I'm not certain about our ability to measure pH so precisely for decades either.


Due to the way statistics work, you don't need to be able to measure hundredths of a degree to detect an average change of a hundredth of a degree.

The errors will distribute around the true value, so even with sampling error you can still extract a signal, even a very weak one, if you have many samples.
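A minimal sketch of that idea (simulated data, made-up numbers): a mean shift far smaller than any single reading's error can still be recovered by averaging enough readings.

```python
# Rough sketch: averaging many noisy readings recovers a mean shift
# far smaller than any single thermometer's error.
import numpy as np

rng = np.random.default_rng(0)

true_then, true_now = 37.00, 36.98   # a 0.02 C shift, hypothetical numbers
sigma = 0.5                          # each reading is only good to ~0.5 C
n = 100_000                          # readings per period

then = rng.normal(true_then, sigma, n)
now = rng.normal(true_now, sigma, n)

# standard error of each mean ~ sigma / sqrt(n) ~ 0.0016 C,
# so a 0.02 C difference in means stands out clearly
print(then.mean() - now.mean())      # ~0.02, despite 0.5 C per-reading noise
```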


It depends on the error. It could be that older thermometers were systematically biased.


Could be.

But surely some of those thermometers survived, or we could manufacture replicas using the methods of the time, to check.

Also, physics was in a pretty good state at the time, at least regarding temperature/length measurement. Surely the scientists of the time measured the bias.


Measuring the bias would require knowing the true temperature.


They accounted for that possibility:

> One possible reason for the lower temperature estimates today than in the past is the difference in thermometers or methods of obtaining temperature. To minimize these biases, we examined changes in body temperature by birth decade within each cohort under the assumption that the method of thermometry would not be biased on birth year.

I think that the assumption that the method of thermometry is not biased based upon birth year is probably correct.


> I wonder how confident we are in those measurements?

Very.

Thermometer calibration occurs around known values, like the melting point of ice and the boiling point of water at STP. It’s possible for individual measurements to be off significantly, but long-term trends like this cover a huge range of manufacturers, so the calibration points themselves would need to be shifting around.
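As a rough illustration (hypothetical readings, not from the article), two-point calibration against those fixed references looks something like this:

```python
# Hypothetical two-point calibration: correct raw readings using the known
# ice point (0 C) and boiling point of water at standard pressure (100 C).
def calibrate(raw_at_ice, raw_at_boil):
    """Return a function mapping raw readings to corrected temperatures."""
    scale = 100.0 / (raw_at_boil - raw_at_ice)
    return lambda raw: (raw - raw_at_ice) * scale

# e.g. a thermometer that reads 0.3 in ice water and 99.1 in boiling water
correct = calibrate(0.3, 99.1)
print(correct(37.2))   # corrected body-temperature reading
```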


If you measure the temperature of boiling water, it can be noticeably different from 100 degrees depending on the weather and location. I'm still not seeing where the confidence comes from.

I'll buy that equipment exists that can measure temperature extremely accurately; maybe we even base it on the Planck constant now too. But I don't buy for a minute that most thermometers, or anything close to that, are even on the same playing field. Never mind thermometers from decades ago.


He said “STP”, or “standard temperature and pressure”, which controls for exactly the conditions that would otherwise change the measurement.


Even if the effects were not properly accounted for in the calibration of thermometers (which I doubt, in general), it would not likely cause a systematic error that shifts slowly over decades. The authors of the report have identified and explored other, more plausible sources of such systematic errors, such as in how the measurements are performed.


Do you not own a barometer? Pressure changes by the hour.


Indeed it does, but I cannot see how that could possibly refute the point I am making. How could that cause a systematic error that shifts slowly over decades?


FWIW quartz thermometers achieved accuracies better than 0.05 K in the 60s, with resolution better than 1 mK.


Boiling water is at 100 C at standard pressure, full stop. It is not affected by weather (humidity or air temperature). It is not affected by location unless your change of location involves a change in altitude with an accompanying pressure change.


Air pressure changes with weather. It's easily visible in old school barometers like http://www.4physics.com/phy_demo/pressure/baro_exp1.jpg

It varies in a range of about +5% to -5% of average pressure.
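For a sense of scale (my own back-of-the-envelope numbers, not from the thread), a Clausius–Clapeyron estimate of how far a ±5% pressure swing moves the boiling point:

```python
# Rough Clausius-Clapeyron estimate of the boiling point of water at a
# pressure +/-5% away from standard; constants are approximate textbook values.
import math

L = 40_650.0                 # J/mol, heat of vaporization of water (approx.)
R = 8.314                    # J/(mol K)
T0, P0 = 373.15, 101_325.0   # boiling point at standard pressure

def boil_temp(P):
    # 1/T = 1/T0 - (R/L) * ln(P/P0)
    return 1.0 / (1.0 / T0 - (R / L) * math.log(P / P0))

for f in (0.95, 1.00, 1.05):
    print(f, round(boil_temp(f * P0) - 273.15, 2))
# roughly 98.5, 100.0, and 101.4 C -- about +/-1.4 C over a +/-5% pressure swing
```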


Which was known at the time. The ideal gas law, for example, dates back to 1834, 27 years before the oldest measurements in this dataset. But even that was not needed: barometers date back to 1643 and vacuum pumps to 1650, which allowed for pressure calibration and made the current weather unimportant.

Anyway, I think people are vastly underestimating just how carefully scientists were calibrating things back then. Clockmaking, for example, cares deeply about the current temperature.


>Ocean acidification just "feels" like much sounder science to me because the effects are so much more visible, but I'm not certain about our ability to measure pH so precisely for decades either.

One additional caveat specific to measuring properties of the ocean (or other large-scale systems) is that, because of the scale, your data can easily be biased because your sensors are not uniformly or even globally deployed. That's increasingly less of a problem in earth science, but for geologic data, both in time and space, we are quite liberal in interpolating/extrapolating based on very impactful and sometimes shaky assumptions. This is dangerous because it is very easy for bias from dogma to sneak in unintentionally and undetectably and produce results that may be far removed from the truth.
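A toy illustration of that deployment bias (all numbers invented): the "global" mean you compute depends on where the sensors happen to sit.

```python
# Toy example of deployment bias: a quantity that differs by region,
# estimated once with uniform sampling and once with sensors clustered
# in one region. All values are made up.
import numpy as np

rng = np.random.default_rng(1)

# two regions of equal area with different true values (pH-like numbers)
region_a, region_b = 8.05, 8.15
true_global_mean = (region_a + region_b) / 2          # 8.10

def estimate(n_a, n_b, noise=0.02):
    a = rng.normal(region_a, noise, n_a)
    b = rng.normal(region_b, noise, n_b)
    return np.concatenate([a, b]).mean()

print(estimate(500, 500))   # ~8.10, sensors deployed evenly
print(estimate(900, 100))   # ~8.06, most sensors sit in region A -> biased
```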


also because sensor deployment density per region varies in time, so measurements across time aren't readily comparable


They do a few checks for this in the article. For example, one of their subsamples is Union Army veterans, which used the same instrumentation, and they detect the same trend within that cohort (0.03 C lower per decade) as when comparing across the subsamples (e.g., Union Army vs today).


I don't think there is any claim that the same instrumentation is used for the Union Army cohort -- even basic details like whether the temperature was taken orally or in the axilla are unknown: "Whether the temperatures were taken orally or in the axilla is unknown; both methods were employed in the 19th century although oral temperature was more common".


That's true, but they do make the more abstract assumption that any bias isn't associated with birth date within cohort. I think the idea is that if there were bias due to instrumentation, it would have to be somehow systematically related to birth date within the cohort.

This is possible if there were some systematic shift in instrumentation with measurement year, but it would be blunted by the extent to which different ages were being sampled at each year.
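A crude simulation of that blunting (parameters entirely invented): give the instrument a drift with measurement year, keep true temperature flat, and compare the within-cohort birth-year slope when everyone is measured at the same age versus when measurement years are spread out regardless of birth year.

```python
# Crude simulation (invented parameters): instrument bias drifts by
# +0.005 C per measurement year; true body temperature is flat.
# How much of that drift leaks into a within-cohort birth-year slope?
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
birth = rng.uniform(1820, 1845, n)           # birth years within a cohort
drift = 0.005                                # C per measurement year (made up)

def birth_year_slope(meas_year):
    temp = 37.0 + drift * (meas_year - 1860) + rng.normal(0, 0.4, n)
    return np.polyfit(birth, temp, 1)[0]     # C per birth year

# everyone measured at age 40: measurement year tracks birth year exactly,
# so the full drift shows up as a birth-year trend
print(birth_year_slope(birth + 40))          # ~0.005

# measurement years spread over 1862-1930 regardless of birth year:
# the drift barely correlates with birth year, so the slope is blunted
print(birth_year_slope(rng.uniform(1862, 1930, n)))   # ~0.000
```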

I agree though that it would be nice to have some information on instrumentation over time and how it relates to temperature measurement.

It's worth noting, though, that the results have practical implications regardless of the explanation of the observed trends: it's commonly assumed that normal body temperature has certain characteristics, without recognizing that those might have shifted in significant ways over time.

It's interesting to me personally because I've often observed that my body temp when I'm feeling fine is usually just above 36 C. That doesn't sound like much, but it becomes a bigger deal when deciding whether or not I have a significant fever. Some of that might just be individual variation, but this is suggesting there are cohort trends within that too.


They mention that their data analysis can detect higher temperatures later in the day, varying by 0.02 or 0.01 C in two of their three data sets, and that this was an expected effect. My intuition is that if the data is good enough to detect small patterns you expect, that's an indication it's reliable enough when it tells you about unexpected patterns.


https://en.m.wikipedia.org/wiki/Accuracy_and_precision

You can have a very precise thermometer (repeated measures are tightly clustered) that isn’t accurate (deviates from true value).
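A tiny made-up illustration: a thermometer can be tightly repeatable and still consistently wrong, and averaging fixes only the noise, not the offset.

```python
# Made-up readings of a 37.0 C reference from two hypothetical thermometers.
import numpy as np

rng = np.random.default_rng(3)
true_temp = 37.0

precise_but_biased = rng.normal(37.3, 0.02, 1000)   # tight cluster, offset
accurate_but_noisy = rng.normal(37.0, 0.30, 1000)   # scattered, centered

for name, x in [("precise/biased", precise_but_biased),
                ("accurate/noisy", accurate_but_noisy)]:
    print(name, round(x.mean() - true_temp, 3), round(x.std(), 3))
# averaging reduces the noise but cannot remove the 0.3 C bias
```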



