Hacker News

> Mathematical notation presumably mostly makes perfect sense to the kind of people who deal with mathematical notation all day long.

Maybe the overuse of opaque names leads to self-selection of who becomes a mathematician?

Single-letter, non-descriptive variable and function names would “make sense” to programmers who used them all day long too — but that alone doesn’t make it a good idea.



If your job consisted of writing out sequence after sequence of slightly changed expressions by hand, with no copy-paste available, you'd probably feel differently. For example, here's a fairly roundabout derivation of the change of base for logarithms (pretending we forget that log(a^x) = x log(a) in an arbitrary base):

We're trying to derive that a^x = b^(x * log_b(a)).

Or in verbose descriptive terms, oldBase `exponentiate` oldExponent = newBase `exponentiate` ( oldExponent `multiply` inverseExponentiationInNewBase (oldBase) ).

We could maybe argue about the verbosity and clarity of each statement, but I think math's notation shines when you compare derivations instead of statements:

      a^x
    = b^( log_b(a^x)       )
    = b^(    ln(a^x)/ln(b) )
    = b^(  x * ln(a)/ln(b) )
    = b^(  x * log_b(a)    )
or with descriptive names (apologies to readers on mobile):

      oldBase `exponentiate` oldExponent
    = newBase `exponentiate` ( inverseExponentiationInNewBase(oldBase `exponentiate` oldExponent)                                                )
    = newBase `exponentiate` (   inverseNaturalExponentiation(oldBase `exponentiate` oldExponent) `divide` inverseNaturalExponentiation(newBase) )
    = newBase `exponentiate` ( oldExponent  `multiply`  inverseNaturalExponentiation   (oldBase)  `divide` inverseNaturalExponentiation(newBase) )
    = newBase `exponentiate` ( oldExponent  `multiply`  inverseExponentiationInNewBase (oldBase)                                                 )
If I were doing it by hand, I know I'd screw it up with long stuff to copy. Remember accidentally dropping negative signs in school? Hell, I'm still not really sure I got the second version right. By hand, I'd also get super frustrated and impatient writing sooo much.
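For what it's worth, the identity itself is easy to spot-check numerically. A quick Python sketch (the values 3, 7, and 2.5 are arbitrary):

```python
import math

# Numeric spot-check of a^x = b^(x * log_b(a)) at arbitrary values.
a, b, x = 3.0, 7.0, 2.5

lhs = a ** x
rhs = b ** (x * math.log(a, b))  # math.log(a, b) computes log_b(a)

# The two sides agree to floating-point precision.
assert math.isclose(lhs, rhs)
```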

This is a ridiculous, over-the-top example, since I'm avoiding everyday symbols like *, /, ^, and log. But for working mathematicians and applied mathematicians, their notation is just as familiar, and I'm sure it would be as annoying for them to write it out descriptively as it is for us to spell out basic math symbols.

Programming notations are to be written and read. Math notation is to be manipulated.


Every domain — every language — has its basic jargon, and algebra is no exception. I’m not suggesting that mathematicians simply replace every symbol with a word that hints at meaning; that’s too literal an interpretation of what I wrote. “exponentiate” is no more descriptive than “^”.

But there are alternative ways of describing that derivation that are not as symbol-manipulation heavy; you would certainly not communicate this proof in words to another mathematician — or to a non-mathematician — by simply reading your derivation (or your alternate, verbose derivation) symbol by symbol. Instead you would more likely rely on the meanings of the symbols.

Even in your verbose derivation, however, it’s worth noting that your new names capture the fact that b^x and log_b(x) are inverses, a key piece that someone unfamiliar with logarithms and their relationship to exponentiation now has a hope of understanding.


I think most people who wanted to communicate this proof in person would say 'let's find a whiteboard' or 'do you have some paper?', and then proceed to write down the first version of the proof.

Because it is better communicated in notation than in words.


I'm both a programmer and a mathematician. Mathematics is written in English (or another human language) plus added symbols; the symbols are what we refer to as the notation. That's not comparable to a programming language, where the words themselves are the symbols.

Also, no better notation could help you understand most modern mathematics. There's no better choice of symbols or names that will help you understand Galois theory. You simply need to understand high-school algebra, then group theory, then Galois theory.


> There's no better choice of symbols or names that will help you understand Galois theory. You simply need to understand high-school algebra, then group theory, then Galois theory.

This probably isn't completely true—as will be clear to anyone, even a subject expert, who tries to go back and read the original papers. At least part of this is due to fads in notation—in my field, I can easily read the papers of people who do similar work to mine, and struggle with the papers of those who don't, even when they're talking about exactly the same thing—but some of it must be due to genuine improvement.


Yes, you're right. I should have said: there's no better choice of symbols that will remove the huge set of prerequisites you need to understand first. There isn't as large a tower of abstractions behind most codebases.


But put it the other way: can you contrive some extremely bad notation that makes Galois theory harder to understand?


Opaque names have the advantage that they don't trigger any potentially misleading associations and they emphasize the abstract nature of what is being discussed. A variable named x could be anything, even something that the mathematician/programmer didn't anticipate.


I agree with this. Additionally, I think a lot of detractors of single-letter variable names don't realize that it is often the case that the structure of the equation/expression is what's interesting, not what any individual variable represents. That structure is most visible and apparent when the variables don't take up any more space than the operators.

Being able to recognize things like "oh, that's a polynomial!" is extremely useful, especially in the context of learning new math rather than a more applied context. And I think it would be extremely hard to spot an opportunity to do integration by parts when working with Java-style verbose variable names.
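A toy illustration of that point (the verbose names here are invented for the sketch): the same computation written both ways.

```python
# Math-style names: the pattern b^2 - 4ac is recognizable at a glance
# as the discriminant of a quadratic.
def disc(a, b, c):
    return b**2 - 4 * a * c

# The identical computation with "self-documenting" names: every token
# explains itself, but the familiar shape has disappeared.
def quadratic_discriminant(leadingCoefficient, linearCoefficient, constantTerm):
    return linearCoefficient**2 - 4 * leadingCoefficient * constantTerm

assert disc(1, -3, 2) == quadratic_discriminant(1, -3, 2) == 1
```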


>Single-letter non-descriptive variable and functions names would “make sense” to programmers who use it all day long too — but that alone doesn’t make it a good idea.

Imagine writing code on paper by hand and having to repeat some long variable or function name over and over. Even with IDEs, programming languages favor shorter keywords like "var", "const", and "int" over the full words.

Since you're against single-letter symbols, tell me your suggestions for pi, or for x (in polynomials or equations), and show me a screenshot of how solving some real-world physics equation would look with camelCase symbol names.


Go back a few centuries and you get long-form Latin instead. Or go back much further and you'll find the quadratic formula described as taking the number that the second term is increased by, diminishing it by half, and then... Basically an essay just to say -q/2.

Sure, you could write

-CoefficientInFrontOfXToTheOne/2

But then each row of your calculation will need line breaks and will be hard to understand. It will take forever to write and it will take forever to read. No thank you.


Mathematicians will never agree to write dblEulerConstant instead of e.


And, issues about the length of the name aside, we shouldn't! `e` isn't a double or any other standard numeric type. It is an infinite-precision number.
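A small Python sketch of the distinction (the 20-term cutoff is arbitrary): e can be approached through exact rational partial sums of the series for e, sum of 1/k!, with precision limited only by how many terms you take, whereas math.e is a fixed 53-bit double.

```python
from fractions import Fraction
import math

# e built as exact rational partial sums of sum(1/k! for k >= 0),
# with no fixed precision anywhere, unlike the 53-bit double math.e.
def e_partial_sum(terms):
    total, factorial = Fraction(0), 1
    for k in range(terms):
        total += Fraction(1, factorial)  # add 1/k! exactly
        factorial *= k + 1
    return total

approx = e_partial_sum(20)  # exact rational; error below 2/20!
assert math.isclose(float(approx), math.e)
```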


I know what you mean, and a number that is not infinitely precise is really an interval (i.e. a set of numbers). e is a number that is quite hard to pin down on the real number line, because the chance of hitting it with a pin is virtually zero. But the same argument can be made about the numbers 2 or 3. These are all just notations for abstract ideas, even though it's easier to form analogies for some of them.

Now, formalising the above sentence, even with symbols, is quite difficult.



