Hacker News

Notation is what it is because it serves our (i.e., mathematicians') interests so well. If every one of us started writing "Integral of Quadratic Polynomial = Cubic Polynomial plus constant", maybe we would increase the size of the audience a tad...but the downside would be, we'd move at a glacial pace & never make progress.

Things like direct products and cyclic groups are basic...almost trivial, even.

If a non-programmer asks you, "Why do you guys say int x = 10; float y = 0.2? Why not 'x is a whole number whose value is 10 and y is a fraction whose value is one fifth'?"

you can sit down & reason with him for a while....but if he insists that everything be spelled out in such verbose detail, you will, at some point, pick up your Starbucks coffee and say, "Dude, this programming thing, it's not for you. The average program is tens of thousands of LOC, and if I start writing everything out in plain English, I'm going to get writer's cramp & file for disability insurance."

Trust me, math gets immensely complicated very, very fast. The only way to have even a fighting chance of keeping up is terse notation (and frequent breaks).

One reason for this schism is the lack of rigor. E.g., when a programmer says "function", he is an order of magnitude less rigorous than what a mathematician means by that word. You ask a programmer what probability is, and he will say "you know, whether something will happen or not, how likely it is to happen, so if it doesn't happen we say 0, if it is sure to happen we say 1, otherwise it's some number between 0 & 1. Then you have Bayes' rule, random variables, distributions, blah blah...I can google it :))"

You ask a mathematician what probability is...even the most basic definition would be something like "a function from the sample space to the closed interval [0,1]". Note how incredibly precise that is. By the word "function", the mathematician has told you that if you take the cross product of the domain, i.e., the set of unique outcomes of your experiment, with the range, which is the closed interval [0,1], you'll get a shit-ton of tuples, and if you then filter those tuples so that every outcome from the domain has exactly one image in the range, then that is what we call "probability". And this is just the beginning...the more advanced the mathematician is, the more precise he'll get.

I've seen hardcore hackers who've designed major systems that use numerical libraries walk out of a measure theory class on day 1, simply because they overestimated how much they knew. Calling APIs is very, very different from doing math. The professor is like the compiler - he isn't going to care whether or not you know what a measure is or what a topological space is...it's a given that you've done the work laid out in the pre-reqs, and if you haven't, go write an API or something, don't bother the mathematician....at least that's the general attitude in most American universities I've seen. If you tell him "describe its purpose and give a full description", he will look at you as if you are from Mars, and then tell you to enroll in the undergraduate section of Real Analysis 101 :)
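The "cross product then filter" reading of "function" can be sketched in a few lines of Python (a toy sketch only, assuming a fair six-sided die as the sample space; for finite spaces this gives a probability mass function, not the full measure-theoretic story):

```python
from fractions import Fraction

# Sample space: the set of unique outcomes of one roll of a fair die.
sample_space = {1, 2, 3, 4, 5, 6}

# The "function" as a set of tuples: every outcome from the domain
# has exactly one image in the closed interval [0, 1].
p = {outcome: Fraction(1, 6) for outcome in sample_space}
tuples = set(p.items())

# The properties the terse definition packs in:
assert all(0 <= v <= 1 for _, v in tuples)          # images land in [0, 1]
assert len({o for o, _ in tuples}) == len(sample_space)  # exactly one image each
assert sum(p.values()) == 1                         # total probability is 1
```

The point is how much of this a mathematician communicates with the single word "function"; the code spells out what the notation compresses.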



If a non-programmer asks you, "Why do you guys say int x = 10; float y = 0.2? Why not 'x is a whole number whose value is 10 and y is a fraction whose value is one fifth'?"

Knowing what language we're working in is enough to know that's exactly what that code says. This is not the case in mathematical notation. C × A could be the direct product of two groups, or maybe the tensor product of two graphs, or perhaps the product of two lattices, or it could be one of who knows how many other things. The issue here is not high precision, but the opposite: heavy reliance on convention and context in order to be unambiguous.
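Programming has the same context-dependence, just resolved mechanically: the meaning of an operator depends on its operands. A toy Python sketch (all class names invented for illustration):

```python
class CyclicGroup:
    """Toy stand-in for the cyclic group Z_n."""
    def __init__(self, n):
        self.n = n
    def __mul__(self, other):
        # In a group-theory context, "C x A" reads as the direct product.
        return f"direct product of Z_{self.n} and Z_{other.n}"

class Graph:
    """Toy stand-in for a graph."""
    def __init__(self, name):
        self.name = name
    def __mul__(self, other):
        # In a graph-theory context, the same "x" reads as the tensor product.
        return f"tensor product of {self.name} and {other.name}"

# Same symbol, different meaning, disambiguated by the operands' types:
print(CyclicGroup(2) * CyclicGroup(3))  # direct product of Z_2 and Z_3
print(Graph("C") * Graph("A"))          # tensor product of C and A
```

The difference is that the compiler enforces the disambiguation, whereas in a paper the reader has to carry the convention in their head.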


Knowing what language we're working in is enough to know that's exactly what that _math_ says, too.

You don't expect to understand Java after learning PHP, and you don't expect to understand a Topology paper after learning Algebra.

"C x A" is a combination of C and A. C, x, and A are defined once somewhere at the beginning of the paper/book.


"C x A" is a combination of C and A. C, x, and A are defined once somewhere at the beginning of the paper/book.

Should I take this (which doesn't actually hold in the general case) to mean that every document uses its own language? That isn't exactly good for readability. We certainly don't consider every software project to be written in its own language (and no, the semantics of C does not tell us what the dP function does).




