
Summarized answer from the article.

> For JPL's highest accuracy calculations, which are for interplanetary navigation, we use 3.141592653589793

> by cutting pi off at the 15th decimal point… our calculated circumference of the 25 billion mile diameter circle would be wrong by 1.5 inches.

The author also has a fun explanation that you don’t need many more digits to reduce the error to the width of a hydrogen atom… at the scale of the visible universe!
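The back-of-the-envelope number is easy to reproduce with the stdlib `decimal` module. A sketch (pi hardcoded to 36 known decimals; the result lands in the same ballpark as the article's 1.5 inches, with the exact figure depending on rounding and the radius/diameter convention used):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40
PI_36 = Decimal("3.141592653589793238462643383279502884")  # pi to 36 decimals
PI_15 = Decimal("3.141592653589793")                       # pi cut off at the 15th decimal

diameter_m = Decimal(25_000_000_000) * Decimal("1609.344")  # 25 billion miles in metres
error_m = diameter_m * (PI_36 - PI_15)   # circumference error caused by the truncation
error_in = error_m / Decimal("0.0254")   # metres to inches
print(error_m, error_in)                 # a fraction of an inch; same ballpark as 1.5 in
```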



It's a good metric to determine how advanced a civilization is. Would be cool to just compare pis with the aliens, and then whoever has the longest pi takes over, rather than fighting to extinction.


Is this some kind of a pi-nus measuring contest joke?

Yeah, yeah, go ahead and downvote this one to death. I know we don't like jokes 'round these parts, especially low-effort immature ones. :~(


That was neither low-effort nor immature. The elusive high quality, mature penis joke is well appreciated.


HN pinoia aside, it was a pretty good one.

However, I doubt that a civilization that has survived long enough to invent some (locally) successful sort of space travel would budge.

There are two extremes that would bring balance: mutually assured destruction (but the power comparison must allow for a delicate balance of terror to be believable on both sides); or a mutually beneficial alliance (which can work with a well-meaning advanced civ encountering a less progressed one - the "there, there, little one" case).

That being said, I’m not convinced that searching for universal others isn’t a dead end. But even if it is, it sure can stretch our understanding a bit.

E.g. look at the Kardashev scale, with which we can sort of stretch imagination and think of Dyson spheres instead of solar panels.

I mean, even if we don’t find aliens, with a roadmap like Kardashev’s it won’t be (too) long before we become the aliens many of us hope for/aspire to.


The joke was good, but your fear of downvotes and attempt to prevent them by adding the disclaimer is :|


This is why we can't have nice things (because of me). Sorry!


My inner 12 year old laffed.


Well done for a 10 year old!


The joke was great. The disclaimer at the end ruins it.


Well, it's just the way HN PTSD plays out.


Don’t worry about your internet points


I don't, and totally agree; they're absolutely worthless. I do however get annoyed when decent comments get murdered and become invisible to most of the population.


Your fear is irrational...


It's not how long your pi is, it's the circumference that counts.


... and the emcee asks for the fifty zillionth hex digit, but only Team HOOMINZ has the formula for an arbitrary hex digit in isolation of pi !


Calculating the circumference of a circle isn't the only thing pi is used for. And small errors at the start of a calculation can become big errors at the end. So I don't find this argument very convincing.


That's why they are using 15 decimal places, which in reality is complete overkill. No instrument I am aware of is capable of measuring with such accuracy; top of the line is usually around 9 decimal places. This is a scale at which relativistic and quantum effects have to be considered.

That pi is 6 orders of magnitude more precise. The nice thing about having 6 and not just 1 or 2 (that would be sufficient) is that you don't have to worry too much about the exponential effect of compound error.

So really 15 decimal places is enough not to worry about pi not adding significant imprecision to your calculation, but not so ridiculous as to waste most of your time processing what is essentially random digits.

That it roughly corresponds to the precision of IEEE754 double precision floating-point numbers is probably no coincidence. This is maths that standard hardware can do really well. More than that requires software emulation (slow) or specialized hardware (expensive).


I love the fact that some random dude on HN is telling NASA that their reasoning about space calculations is not very convincing. The internet can be a beautiful place.


Debate shouldn't be discouraged purely out of deference to the agreed upon authority. Sometimes the random voice in the crowd can say something important.

Probably not in most cases, but this isn't the sort of place we shout people down just for disagreeing. If you disagree with them, present your reasoning, and not just "they're NASA so they must be right!".


There is a difference between "NASA/JPL are doing something wrong" and "NASA/JPL's explanation to a middle-schooler of why they are doing something has an error".


The argument saying we don’t need more than N decimals of pi because we could compute the radius of the universe down to a hydrogen atom is indeed not very convincing.

This is also unrelated to NASA’s past or present activities


But the question was answered by NASA and how many decimal places THEY need for THEIR highest level calculations.

If an HN user requires more, for example because they are planning to travel further than Voyager 1, then you’re absolutely right, it’s not very convincing to narrow it down the same way NASA did.


> So I don't find this argument very convincing.

You think the NASA JPL is mistaken about how accurate they need Pi to be?


Saying "I don't find the argument the authority gives convincing" is a different statement than "I distrust the authority". And in fact there are many experts, from craftsmanship over engineering even all the way to science that demonstratebly know method to achieve success, but are missing methods to verify their explanatory models or don't really need to care. E.g. for bakeries the microbiological background is often much less relevant than getting the process right.

In the case of this explanation by JPL, they are giving a very dumbed-down explanation to visualize the extreme precision of floats to a layperson. By necessity it is very incomplete and fails to transmit a deeper understanding to those of us who have at least a passing understanding of numerical analysis. For me that means I want to know more, as there is certainly important nuance missing, and I'd want to hear it from the same experts at JPL exactly because I trust their expertise.


Hopefully JPL will finally be able to accomplish something once they start taking advice from internet randos.


The impact of a higher precision in pi depends on the rest of the calculations or simulations; factors like the (roundoff) errors caused by the size of your floats and your other constants, the precision of your calculations (like your sine), or (roundoff) errors in your differential equations and timestep accumulations. And finally, you have uncertainties in the measurements of the world (starting conditions) you use for your simulations. I guess in NASA's case, a higher precision in pi doesn't add to the overall performance of their calculations, or at least not to a relevant one.


But all measurements of weight, length, and position/speed have errors multiple orders of magnitude larger. Errors from second and third approximations will dominate. Let alone unpredictable (unknown) physics playing a role.


Well then the problem isn’t Pi and any number of decimal places.


This calculation demonstrates how our current 64-bit FP operations are wide enough for almost all physical world needs. But to make the point even clearer: in one 2^-64th of a second, an object moving at the speed of light would not cross the diameter of a hydrogen atom.

c × 2^-64 s ≈ 1.625 × 10^-11 m; width of a hydrogen atom: 2.50 × 10^-11 m
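The arithmetic is easy to sanity-check in a couple of lines (stdlib only; the 2.50 × 10^-11 m hydrogen width is the figure quoted above):

```python
c = 299_792_458.0     # speed of light in m/s, exact by definition of the metre
tick = 2.0 ** -64     # one 2^-64th of a second
dist = c * tick       # distance light covers in that time
print(dist)           # about 1.6e-11 m, below the ~2.5e-11 m hydrogen width
```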


if memory serves, the problem with ieee754 fp representation isn't the relative sizes of its largest and smallest possible values, but its uneven representation of the values between


That's an inevitability of the word size, not a fault. Try finding a representation with a fixed length that doesn't.

Edit: that's not quite right, for a limited scale, fixed point will do, but if you need wider range than can be directly represented as fixed point, something has to give. Machine floats aren't pretty things, we have to live with it.


256 bit integers measuring Planck units. Assuming the universe itself is actually distributed evenly and doesn't blur possible 4-positions in some regions.


That's a very interesting point and perhaps worth expanding on (although what's a '4-position'?) but tangential to mine which was purely about general value representations that have to be constrained to finite.


4-position is a location in both space (3) and time (1). I don't understand the maths of general relativity well enough to give a deeper description than that, but it seems like the kind of topic that might break the assumption of space/time being evenly distributed everywhere.


The Planck units are not good natural units, but any good system of natural units will result in very large and very small values (i.e. in the range 10^10 to 10^50 or their reciprocal values) for most physical quantities describing properties of things close in size to a human.

Therefore double precision, which accepts values even over 10^300, is good enough to store any values measured with natural units, while single precision (range only up to around 10^38) would be overflowed by many values measured with natural units, and overflow would be even more likely in intermediate values of computations, e.g. products or ratios.

For those not familiar with the term, a system of natural units for the physical quantities is one that attempts to eliminate as many as possible of the so-called universal constants, which appear in the relationships between the physical quantities only as a consequence of choosing arbitrary units to measure some of them.

While the Planck units form one of the most notorious systems of natural units, the Planck units are the worst imaginable system of units and they will never be useful for anything. The reason is that the Newtonian constant of gravity can be measured only with an extremely poor uncertainty in comparison with any other kind of precise measurement.

Because of that, if the Newtonian constant of gravity is forced to have the exact value 1, as it is done in the system of Planck units, then the uncertainty of its measurement becomes an absolute uncertainty of all other measured values, for any physical quantities.

The result is that when the Planck units are used, the only precise values are the ratios of values of the same physical quantity, e.g. the ratio between the lengths of 2 objects, but the absolute values of any physical quantity, e.g. the length of an object, have an unacceptably high uncertainty.

There are many other possible choices that lead to natural systems of units, which, unlike the Planck units, can simplify symbolic theoretical work or improve the accuracy of numeric simulations, but the International System of Units is too entrenched to be replaced in most applications.

All the good choices for natural units have 2 remaining "universal constants", which must be measured experimentally. One such "universal constant" must describe the strength of the gravitational interaction, i.e. it must be either the Newtonian constant of gravity, or another constant equivalent to it.

The second "universal constant" must describe the strength of the electromagnetic interaction. There are many possible choices for that "universal constant", depending on which relationships from electromagnetism are desired to not contain any constant. The possible choices are partitioned in 2 groups, in one group the velocity of light in vacuum is chosen to be exactly one (or another constant related to the velocity of light is defined to be 1), which results in a natural system of units more similar to the International System of units, while in the second group of choices some constant related to the Coulomb electrostatic constant is chosen to be exactly 1, in which case the velocity of light in vacuum becomes an experimentally measured constant that describes the strength of the electromagnetic interaction (and the unit of velocity is e.g. the speed of an electron in the fundamental state of a hydrogenoid atom).

I have experimented with several systems of natural units and, in my opinion, the best for practical applications, i.e. which lead to the simplest formulas for the more important physical relationships, are those in which the Coulomb law does not include "universal constants" and the speed of light is a constant measured experimentally, i.e. the opposite choice to the choice made in the International System of Units.

The Planck units are always suggested only by people who have never tried to use them.

The choice from the International System of Units, to have the speed of light as a defined constant while many other "universal constants" must be measured, was not determined by any reasons having anything to do with what is more appropriate for modern technology.

This choice is a consequence of a controversy from the 19th century, between physicists who supported the use of the so called "electrostatic units" and the physicists who supported the use of the so called "electromagnetic units". Eventually the latter prevailed (which caused the ampere to be a base unit in the older versions of the SI, instead of the coulomb), because with the technology of the 19th century it was easier to compare a weight with the force between 2 conductors passing a fixed current than to compare a weight with the force between 2 conductors carrying a fixed electrical charge. There is a long history about how SI evolved during the last century, but the original choice of the "electromagnetic units" instead of the "electrostatic units" made SI more compatible with the later successive changes in the meter definition, which eventually resulted in the speed of light being a defined constant, not a measured constant.

Nowadays that does not matter any more, but few people remember how the current system has been established and most who have grown learning the International System of Units have the wrong impression that having an exact value for the speed of light is somehow more "natural" than having for it an experimentally measured value.

The truth is that there are many systems of natural units, and each of them is exactly as natural as any other of them, because all have a single experimentally measured electromagnetic constant. When the velocity of light is removed from some equations, an equivalent "universal constant" is introduced in other equations, so which choice is best depends on which equations are more frequently used in applications.


Double precision floating point gives you 53 bits of significand.


Not to mention that your error grows with any mathematical operations you perform and all sorts of other numerical precision issues.


This happens for integer arithmetic as well, as soon as you step off the happy path of trivial computations and onto the things we use floating-point for. You cannot exactly solve most differential equations, even if your underlying arithmetic is exact. These errors (“local truncation error”) then carry through subsequent steps of the solution, and may be magnified by the stability characteristics of the problem.
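A small stdlib illustration of that point: even with exact rational arithmetic, where no rounding ever occurs, Euler's method on y' = y still misses e, purely from the method's own local truncation error:

```python
from fractions import Fraction
import math

# Euler's method on y' = y, y(0) = 1, integrated to t = 1 with exact rationals
h = Fraction(1, 100)
y = Fraction(1)
for _ in range(100):
    y = y + h * y            # every arithmetic step here is exact
err = abs(float(y) - math.e)
print(float(y), err)         # off by roughly 1e-2: truncation error, not rounding
```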


It's even more absurd than that:

Anything shorter than about 10^-43 sec is less time than light needs to cross a Planck length.


According to Wikipedia, LIGO is detecting gravitational waves as small as 10^-22 metres.


Double represents values smaller than 10^-300. 10^-22 is no problem, so long as you don’t need more than about 16 digits after the first non-zero.


64-bit IEEE 754 floating point uses 11 bits for the exponent and one bit for the sign; it's not that all 64 bits are dedicated to the significand.

If you meant that a 64-bit integer is a rather large ~20 decimal digits, indeed it is.


2^-64 has little to nothing to do with 64-bit FP operations and is represented exactly even in a 32-bit float (though not in a 16-bit one, whose smallest subnormal is 2^-24).
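A quick stdlib check: 2^-64 round-trips exactly through single precision, but underflows to zero in IEEE half precision, whose smallest subnormal is 2^-24:

```python
import struct

x = 2.0 ** -64
# round-trip through IEEE 754 single precision (binary32): exact, since
# 2^-64 is a power of two well inside float32's normal range (down to 2^-126)
x32 = struct.unpack("f", struct.pack("f", x))[0]
assert x32 == x
# half precision (binary16) underflows to zero: its smallest subnormal is 2^-24
x16 = struct.unpack("e", struct.pack("e", x))[0]
assert x16 == 0.0
```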


If pi is truly infinite wouldn’t it eventually express a sequence of information which would be self aware if expressed in binary in a programmatic system?


My understanding (which might be wrong) is that just because pi is infinite and non-repeating doesn't necessarily mean that every conceivable pattern of digits is present.

As a contrived example, consider the pattern:

01 001 0001 00001 etc.

This pattern is infinite and never repeats but we will never see two consecutive "1"s next to each other.
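For concreteness, a throwaway generator for that example (stdlib only): the expansion never repeats, yet "11" provably never appears in it:

```python
from itertools import islice

def digits_01():
    """Digits of 0.010010001...: each '1' separated by one more '0' than the last."""
    n = 1
    while True:
        yield from "0" * n
        yield "1"
        n += 1

prefix = "".join(islice(digits_01(), 60))
print(prefix)              # 010010001000010000010000001...
assert "11" not in prefix  # non-repeating forever, yet never two consecutive 1s
```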


Yes, it doesn't necessarily follow, but it is indeed conjectured that pi is a normal number, meaning all digits appear with the same frequency, but it is not known yet. https://en.wikipedia.org/wiki/Normal_number


The same frequency does not imply every subsequence appears. Consider the modification which rewrites every sequence of 123 to 132. All digits will have the same frequency but 123 will never appear.


If pi is shown to be normal in every base then every finite sequence must appear in it.


You haven't read the link you posted, though. Every digit appearing with the same frequency means a number is only simply normal, and that is not enough to get you what you want in this case (as pointed out by the sibling comment). A normal number is one where every possible string of length n appears with frequency 10^(-n) (in base 10).


No, you haven't read the link he posted. https://en.wikipedia.org/wiki/Normal_number#Definitions: "A disjunctive sequence is a sequence in which every finite string appears. A normal sequence is disjunctive". If Pi is normal, then it is also disjunctive.


When I was at university, one of the senior number theory professors allegedly said during a tutorial that he accepts the normality of pi on the basis of "proof by why the hell wouldn't it be". With tongue in cheek, of course.


The difference might be:

For your example there is an algorithm to describe the sequence of digits and for Pi there isn't.

EDIT + Clarification: There is an algorithm to calculate the digits of your number without calculating all previous digits. But for pi there isn't.


>There is an algorithm to calculate the digits of your number without calculating all previous digits. But for pi there isn't.

Actually, there is: https://math.hmc.edu/funfacts/finding-the-n-th-digit-of-pi/


There is one, how do you think we compute digits of pi.


So here's a bet:

I give you the 10^100th digit of the above algorithm and you give me the 10^100th digit of pi.

Whoever fails owes the other side 10 BTC.


There is an algorithm to get the nth digit of pi. It's just that it does not run in constant time


I kind of addressed this here: https://news.ycombinator.com/item?id=31966228


If your argument is "These algorithms have differing degrees of computational complexity" then that doesn't actually demonstrate that one can't be algorithmically determined


What I meant is:

Describe the n-th digit of an irrational number without calculating all previous positions of the number.

If pi were a sequence of digits, there is no algorithm to calculate it other than by calculating pi but there is one for op's number. The very fact that he could show the algorithm for creating the sequence of numbers in his post is indicative of that.

For pi such an algorithm doesn't exist (other than calculating pi itself).

I wanted to emphasize this by talking about the "sequence of digits" in my original reply but apparently I failed at explaining this well.


Various algorithms to compute the n-th digit of pi exist, eg https://bellard.org/pi/pi_n2/pi_n2.html.
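For the curious, here is a minimal sketch of BBP-style digit extraction in Python (hexadecimal digits, not decimal; the tail uses ordinary floats, so it is only reliable for modest n):

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n >= 1), via the
    Bailey-Borwein-Plouffe formula; no earlier digits are produced."""
    def s(j):
        # head: non-negative powers of 16, kept in [0, 1) via modular exponentiation
        total = 0.0
        for k in range(n):
            total = (total + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        # tail: rapidly shrinking negative powers of 16
        k = n
        term = 16.0 ** (n - 1 - k) / (8 * k + j)
        while term > 1e-17:
            total += term
            k += 1
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
        return total % 1.0
    x = (4 * s(1) - 2 * s(4) - s(5) - s(6)) % 1.0
    return int(16 * x)

# pi = 3.243F6A88... in hexadecimal
print("".join("%X" % pi_hex_digit(n) for n in range(1, 9)))  # → 243F6A88
```

The trick is that `pow(16, e, m)` keeps the head terms tiny, so the work for digit n never touches digits 1..n-1 as such, even though the total cost still grows with n.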


I can't really tell to what extent you're not computing previous digits (or doing work that could quickly be used to come up with these previous digits) with this algorithm but O(n^2) seems quite heavy compared to O(1) (I expect) to get the n'th digit of op's number.

Maybe I should rephrase it:

My assumption is: If there is an O(1) algorithm to determine the n-th digit of an irrational number x then the number is still "of a different class" than the likes of pi and there OP might not be able to induce things from this "lesser class of irrational numbers"

However, it's just an intuition


How could it possibly be O(1)? That doesn't even give you time to read every bit of the input number.


Why, how is that related to the existence of an algorithm?


I narrowed down "algorithm" to a specific sort of algorithm in the original reply.


Ok! I'll let you know when I'm finished calculating - hope you're still alive by then


We know for a fact that pi is truly infinite, there's no "if" there. But we are not sure whether it contains every sequence of (e.g.) decimal digits.

Either way, your proposition works for "the list (or concatenation) of all positive integers in ascending order" as well. There is no deep insight in it, even if it were also true for pi.


...pi isn't infinite, though. It is a finite number; not even a particularly large one - its value is between 3.1 and 3.2.


if you accept the premise behind this question (which I wouldn't dispute) then theoretically any information at all would be self aware given the right computer


Bold claim in terms of IT (there currently exists no self-aware system in IT), but of course it contains all the info needed to build a human.

I'd rather say it contains the code to generate itself which should be much easier (= earlier) to find.


What you want is a disjunctive number, also called rich number or universe number.

It is a number whose infinite digit expansion contains every possible finite sequence of digits; therefore, such a number contains the code of a self-aware program, as well as the complete description of our own universe (hence the name "universe number") and even the simulation that runs it, if such things exist.

We don't know if pi is a disjunctive number; for all we know, though it's unlikely, the decimal representation of pi may contain only a finite number of zeroes. That means we don't have the answer to your question.


Sure, similar argument to a Boltzmann Brain.


I wonder why they don't just use the highest precision possible given whatever representation of numbers they're using? I know these extra digits would be unlikely to ever matter in practice, but why even bother truncating more than necessary by the hardware? (Or do they not use hardware to do arithmetic calculations?)


They do. This is precisely the number of accurate digits you get when you use a double (i.e. 64 bit floating point). https://float.exposed/0x400921fb54442d18
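Easy to confirm in any language whose default float is an IEEE 754 double, e.g. Python:

```python
import math

# JPL's quoted value is exactly the nearest IEEE 754 double to pi:
assert math.pi == 3.141592653589793
# feeding in 20 correct digits changes nothing; a double can't hold them:
assert float("3.14159265358979323846") == math.pi
print(math.pi)
```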


> The author also has a fun explanation that you don’t need many more digits to reduce the error to the width of a hydrogen atom… at the scale of the visible universe

How many more, though?

<Perfectionist>1.5in of error per few billion miles seems a bit sloppy, even though I'm sure it fits JPLs objectives just fine.</>


> our calculated circumference of the 25 billion mile diameter circle would be wrong by 1.5 inches

JPL uses imperial units?


JPL uses metric for calculations.

It’s an education article, and the author mentions he first got the question from (presumably American) students so it makes sense he would answer in imperial units that an American middle schooler could understand.


  > It’s an education article
Then it should use the actual units that the students will use for engineering and scientific calculations. Saying "it's education" is not an excuse to not teach.


  - hey space nerds, check out my new result
  - oh yeah what ya got math kid
  - new digits of Pi. Such fast, very precision!
  - not this shit again
  - it's so cool, *look at it*
  - tl;dr
  - but it's the key to the universe
  - ok ok, look we have to do actual space stuff
  - laugh now fools, while I grasp ultimate power

