Hacker News | tps5's comments

People always say that math students don't learn about applications of math.

This was never the case for me. When I learned trig ratios, I always understood some basic things that trig ratios could be used for. The teacher always introduced some applications, we always had a lot of word problems, and I could fill in the gaps myself.

Same for calculus. When I learned calculus, I always understood some things that calculus could be used for.

So I understood how those things could be applied to general, everyday sorts of problems. What was missing, though, was that I had nothing to which I could apply those techniques, besides homework.

Learning math (and reading STEM papers) has become easier for me since I now have actual problems to solve. Don't get me wrong: I'm not solving particularly challenging problems or using particularly advanced math. Nothing that tens of thousands of people haven't done before me. But I do need to understand the problems, solutions, and some of the context in order to successfully implement them. This provides a motivation that was always missing before.

I suspect this general narrative is true for a lot of people: that having an actual problem to solve is almost necessary to get a student to really learn the material, instead of just coasting along for a grade.


High school trigonometry and introductory differential & integral calculus are not the kind of books being described in this discussion.

The example in the original post is books about group theory (or the group theory sections of abstract algebra books more generally). I can attest that this subject is very rarely described in textbooks with clear examples shown before definitions and theorems; usually the presentation is entirely abstract, following a pure definition–theorem–proof kind of structure. But many other areas of pure mathematics at the undergraduate level and above are presented in a similar fashion.

(I recommend Nathan Carter’s book Visual Group Theory for a lovely counter-example to the prevailing trend, which starts with the concrete, and is very accessible. http://web.bentley.edu/empl/c/ncarter/vgt/)


We used to "run through" books like that. Their reasoning was that, to prove a theorem, you only need the definitions/axioms. They really wanted us to be able to grasp the truths of a logical system from just its theorems and definitions. It was horrifically difficult. (Not all professors taught that way there.)

I feel that a lot of the blame lies with modern academia's curricula. They insist that every student graduate in X number of years with a pretty long list of courses, which leaves little time for students who need or want more time with particular topics.


A lot of trig and geometry never really clicked for me until I had to use them in shop class. For instance, planning out the dimensions and cuts that you need to make in a rafter to get the desired roof pitch for a shed of certain dimensions. Or laying out the stringers for a set of stairs.
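The rafter calculation mentioned above is straightforward right-triangle trig. As a sketch (the function name and the 6-ft / 4:12 figures are just illustrative, not from the original comment):

```python
import math

def rafter_length(run_ft, pitch_rise, pitch_run=12):
    """Length of a common rafter for a given horizontal run and roof
    pitch, with pitch expressed as rise per 12 units of run (the US
    carpentry convention). Plain right-triangle trigonometry:
    length = run / cos(roof angle)."""
    angle = math.atan2(pitch_rise, pitch_run)
    return run_ft / math.cos(angle)

# e.g. a shed with a 6-ft horizontal run and a 4:12 pitch
length = rafter_length(6, 4)
```

The same triangle gives the plumb-cut angle at the ridge (here, `math.degrees(math.atan2(4, 12))`), which is exactly the kind of number you need before making the first cut.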


You're entitled to your opinion, but posting that thesis and saying it's "more truthful" is tremendously silly.

You should be able to understand that it's a hypothesis you find convincing without evangelizing it as "truthful."


More: You should recognize that it's a hypothesis that many (most?) historians do not find convincing. (Which does not mean that CriticalSection's hypothesis is any more convincing; they both sound like revisionist history to me, just written by apologists for different monsters.)


I don't see any evidence that this is a "facial recognition system."

It's likely hard to legislate against software that attempts to detect if there is a person, what their expression is, and guesses at their gender.

You could imagine that job being done by a person (just noting how many people stopped at the advertisement, and what their expression was). I don't think there's really a way to make that illegal.

I suppose I think it's something that people should be aware of, though.


On some level it's hard to believe that people say things like this, especially out loud and with women present.

(But I do believe it)


I don't understand this attitude. Atom's look/feel/functionality/default keybindings are based on Sublime Text. Sublime Text is a lot faster and more reliable. Atom has a large community building add-ons for it.

Take your pick. I used atom until I got annoyed by how it choked on large files and crashed semi-frequently.


There's nothing wrong with using multiple editors for different tasks. I'm currently using vscode after a stint with Atom for the majority of my editing. But I use Sublime Text for scratch space, for quickly opening large files, or for one-off text files.


I feel the same way. I wanted to support the Atom project, but Sublime is super stable and has a ton of useful plugins.


> I also don't know how an ISP would get your actual internet history if the website uses HTTPS.

Your ISP can (and likely does) monitor your DNS queries, which (as far as I know) are not encrypted.
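To make the point concrete, here is a minimal sketch of what a standard DNS query looks like on the wire (per RFC 1035). The hostname used is just illustrative; the point is that the domain labels travel as readable ASCII, so anyone on the path, including the ISP, can read them:

```python
def dns_query_bytes(hostname, query_id=0x1234):
    """Build the wire-format bytes of a standard DNS A-record query
    (RFC 1035). There is no encryption anywhere: the hostname's
    labels appear on the wire as plain ASCII."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/AR counts=0
    header = (query_id.to_bytes(2, 'big') + b'\x01\x00' +
              b'\x00\x01' + b'\x00\x00' * 3)
    # Question: length-prefixed labels, then a zero terminator
    question = b''
    for label in hostname.split('.'):
        question += bytes([len(label)]) + label.encode('ascii')
    question += b'\x00' + b'\x00\x01' + b'\x00\x01'  # QTYPE=A, QCLASS=IN
    return header + question

pkt = dns_query_bytes('news.ycombinator.com')
# The domain is sitting right there in the packet, unencrypted.
```

(DNS over HTTPS/TLS changes this picture, but plain UDP port-53 DNS, which is what most ISP-assigned resolvers speak, does not.)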

Personally I think the net neutrality stuff is a tad overblown. I'd vote for maintaining it, but I've never been particularly convinced by the whole "surveillance state/beyond-orwellian/ISP censoring your speech" arguments that get thrown around on HN, among other places.

I think the problems with ISPs are more practical: they overcharge, provide shitty service, have no incentive to upgrade their infrastructure, and clearly collude with one another. Therefore they need to be regulated.


> I think the problems with ISPs are more practical: they overcharge, provide shitty service, have no incentive to upgrade their infrastructure, and clearly collude with one another. Therefore they need to be regulated.

Agreed. Though I would prefer that we do whatever we can to identify and implement mechanisms to increase competition. I want new ISP options, and several of them, rather than just marginally better behavior from the one or two ISPs in my neighborhood. I'd prefer regulation that increases competition (even if that hurts the incumbents) over regulation that assumes the incumbents are fixed and therefore just manages how they conduct their business. The former is designed to create new ISP options; the latter tends to decrease the rate at which new options appear.

I've always been a voracious Internet consumer. For all of its faults, I really enjoyed the regulatory framework of the Telecommunications Act of 1996 that allowed competitive ISPs to lease physical wires.


How about forcing ISPs to lease the last mile to help bolster competition? They did something similar in the UK [1], though I'm not quite sure how that worked out for them.

[1] https://en.wikipedia.org/wiki/Local-loop_unbundling#United_K...


This was a requirement of EU law, and is presumably a reason why there are more ISPs in the EU than in the US (apart from the density issue, of course).


Yes, I think that would be great!


> > I also don't know how an ISP would get your actual internet history if the website uses HTTPS.

> Your ISP can (and likely does) monitor your DNS queries, which (as far as I know) are not encrypted.

HTTPS does expose the domain name in plain-text through SNI. Yes, DNS is not encrypted.


> I don't think its any coincidence that TV and movies show the murderous and thieving side of piracy as to condition us to be appalled by them, in general.

Is this really true? There's a long history of sympathetic portrayals of drug culture, from Trainspotting (novel and movie) to A Scanner Darkly (novel and movie) to Jesus' Son (novel and movie) to Pineapple Express (mainstream movie), and a whole lot more.


Trainspotting's characters may be sympathetic, but it certainly doesn't portray drugs in a positive light. (Though of course heroin is a far cry from weed.)


I'm not sure A Scanner Darkly portrays drugs in a friendly light. The main characters are taken straight out of a stoner comedy but they're set against a backdrop of a country that's descended into rampant drug abuse, gang violence and poverty. Not only that, but without wanting to spoil the film, the ending doesn't paint a pretty picture of long-term use.


This seems right to me.

The analogy I'd use is driving a car versus sitting in the passenger seat. When I'm a passenger, I never remember the route we took; I retain almost no information about lefts and rights and landmarks. But if I'm driving, I really only have to drive somewhere once to remember the way.


It seems like automation and magic keep eating away at the driving experience: most cars are sold with automatic transmissions, power steering, and a host of other features that most drivers are blissfully unaware of, with the final outcome being self-driving vehicles.


Not sure if insightful or just taking the metaphor too far.


Your argument sounds a lot like "criminals will always find a way, no point in gun control." The thing is, that argument can be applied to all laws. Why even have laws? Criminals will always find a way.

Look, I agree that anonymity is important. I agree that the world is better if some things are kept absolutely private, from everyone, to whatever extent is possible. But let's not pretend that there aren't trade-offs.


Of course there are tradeoffs. But the price of a free society is that some people will abuse these freedoms.

I wish these values were more popular.


> "criminals will always find a way, no point in gun control"

You could partially refute this argument by claiming that gun control is hard to ignore or bypass (although not that hard, speaking as someone who knows a fair amount about fabrication and guns).

On the other hand, this argument ported over to Tor doesn't make any sense; if Tor intentionally cripples its functionality, criminals will move over to non-crippled solutions like I2P. The best you can hope for is to mildly and temporarily inconvenience criminals while really hurting innocent people who need Tor.


Do you have any references to recommend on this subject?


I've been out of it for a while, but the Wikipedia page on Z-order is probably a good place to start: https://en.wikipedia.org/wiki/Z-order_curve. It discusses the basic idea of using a space-filling curve to map data to 1-d, and then using a conventional search structure. Hilbert curves should preserve proximity better.
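The core trick is tiny: interleave the bits of a point's coordinates to get a single integer (the Morton code), then store those integers in any ordinary 1-d structure such as a B-tree or a sorted array. A minimal Python sketch (the function name is my own, not from the papers cited below):

```python
def morton_code(x, y, bits=16):
    """Z-order (Morton) code of a 2-D point: interleave the bits of
    x and y so that nearby points tend to get nearby codes, letting
    a conventional 1-d index answer spatial queries."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # x bit goes to even position
        z |= ((y >> i) & 1) << (2 * i + 1)  # y bit goes to odd position
    return z

# Points of a 4x4 grid, keyed by Morton code and kept in a plain
# sorted list -- any standard ordered index would do the same job.
points = [(x, y) for x in range(4) for y in range(4)]
index = sorted((morton_code(x, y), (x, y)) for x, y in points)
```

A range query then becomes a handful of 1-d interval scans over the codes, which is why the approach composes so well with existing index structures.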

Sorry to toot my own horn, but I wrote some papers that should be useful to someone learning the topic:

1) Algorithms based on space-filling curves for spatial join, a generalization of range search, points in a polygon, etc.: Orenstein & Manola, IEEE Transactions on Software Engineering, Volume 14.

2) How to tune the index for non-point spatial objects: Orenstein, ACM SIGMOD 1989.

You might also find my software, which implements these ideas, useful: https://github.com/geophile

There have been many papers elaborating on the idea, but I haven't kept up with the area. Given that you can get such good results using space-filling curves and standard data structures, I think it's nuts to do anything else. I mean look at Postgres and MySQL. The spatial indexes in these systems have never been fully integrated because they are distinct index structures, always lagging behind progress with the main index structures.


In postgres not much has happened around geospatial in core, because there's a lot happening in postgis.

FWIW, you can define additional types of index access methods in Postgres (from scratch), or you can use the GiST / GIN / SP-GiST access methods, where you need to care about a lot less. All are used in the wild.

