Landau's problems are pretty simple in their statement. I believe Goldbach's conjecture is the oldest, dating to 1742. So I wouldn't exactly call them approachable in the sense of easy to solve, but the statements are quite simple. The full list of Landau's problems, from the Wikipedia page ( https://en.wikipedia.org/wiki/Landau's_problems ), is:
1. Goldbach's conjecture: Can every even integer greater than 2 be written as the sum of two primes?
2. Twin prime conjecture: Are there infinitely many primes p such that p + 2 is prime?
3. Legendre's conjecture: Does there always exist at least one prime between consecutive perfect squares?
4. Are there infinitely many primes p such that p − 1 is a perfect square? In other words: Are there infinitely many primes of the form n^2 + 1?
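For anyone who wants to poke at these, the first one is at least easy to check empirically. A minimal sketch (trial-division primality only, fine for small numbers; this of course verifies finitely many cases and proves nothing about the conjecture):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return a pair of primes (p, q) with p + q == n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 up to 1000 has at least one decomposition.
assert all(goldbach_pair(n) is not None for n in range(4, 1001, 2))
```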
I don't think any of them has a million dollar prize, but tenure at a decent university seems like a fairly reasonable expectation for solving one of these.
I have heard Stroustrup speak, and he mentioned multiple times how important it is to maintain compatibility with all existing code between language versions. Say what you want about C++, but I think they have the right approach. If you are the steward of such a fundamental project, it is definitely in your users' (and therefore your project's) best interest to keep everything that exists working. It's just not worth the pain to make trivial changes in the name of aesthetics or "usability" that break every single hello world program (making print a function instead of a statement), or to change how the division operator works for your millions of existing users to save some initial confusion for hypothetical potential users.
C++ may have become a bit unwieldy as a result of the combination of this policy and the focus on adding new features, but developers working on existing code bases can continue to develop happily, ignoring as much of C++11 as they like, without the concern that the plug will be pulled on their platform any time soon. As they decide to adopt the new features, they can, based on the merits of each feature, not because they are forced to by the platform's developers.
There are starting to be some compelling features in py3k, but none of them required the massive language breakage that has happened between Python 2.x and these releases. All of the compelling features could have been added piecemeal through the normal backward-compatible deprecation and release process. And the devs who were so bothered by print and exec being statements, or by other stupid things, could continue to complain on the mailing list while the rest of us get our work done.
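For concreteness, here is what the two headline breakages look like from the Python 3 side; under Python 2 the first line was written as a statement, and the bare `/` on integers returned 0:

```python
# In Python 2, ``print "hello"`` was a statement; Python 3 made print
# a function, so every old script needs at least this edit:
print("hello")

# In Python 2, ``1 / 2`` on integers was floor division and returned 0.
# Python 3 changed ``/`` to true division and kept ``//`` for the old
# behaviour:
assert 1 / 2 == 0.5   # true division in Python 3
assert 1 // 2 == 0    # floor division, the old ``/``
```

Neither change is hard to fix in isolation; the cost is that every existing script needs the fix at once.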
I tried very hard to be in the python3 camp. I'm among those who found the changes to be mostly good ones, and I've talked myself into biting the bullet a couple of different times, but it always turns out the same.
Last month I gave up and reverted the project I'm working on now back to python2 after spending an afternoon looking for unofficial ports of a large library only to find that someone on Github had forked it, done the hard work of fixing 2200 broken bits, and had his pull request ignored without comment for a full year.
Apparently it's sort of unmaintained in general, but I was trying to use it for something that I wasn't happy with scikit-learn for, and at the time I wasn't aware of how long it had been since it was last updated.
So in hindsight, it wasn't the best example, but I had already had to deal with tracking down a random person's networkx branch as well, so that was enough to make me finally just fix the few things needed to make it work in 2.7.
I think you'll have an increasingly hard time finding good examples. Most packages that aren't ready for Python 3 by now are just unmaintained. At best they're so bogged down in their own complexity that you should be wary of using them for anything new.
Sometimes you need to use unmaintained or legacy code, and that sucks. But there are lots of programmers who don't have such a burden, and they shouldn't be discouraged from Python 3.
So it is. I'm not sure why I couldn't find that six months ago while I was looking.
Also, I don't mean to imply that people should be discouraged from using Python 3. Like I said, I want to use it myself, and would be if I hadn't run into problems.
Actually, under the GPL, Red Hat is only obligated to make sources available to its customers. What they have done is make the sources available to everyone on the Internet for free [1]. So CentOS would have to pay for RHEL were it not for Red Hat's openness. Probably not a big deal. However, Red Hat is under no obligation to make its non-GPL packages (e.g., python, ruby, apache, postgresql, ssh) available to anyone in source form, including their customers. These, too, are available free of charge to the general public. Nor is Red Hat obligated to release the source of its own internally developed projects (e.g., package management, the OS installer, and all of the other projects that differentiate the distribution from a software perspective) under an open source license, but they do (admittedly, this was not always the case). Finally, Red Hat employs many developers who work full time on critical projects (kernel, gcc, gnome, etc.). They are pretty model open source citizens, unlike IBM and Oracle, whose business model is to use open source as a gateway to their own proprietary products.
If they wanted to shut down CentOS, it would be very easy to stop distributing the source of their own projects and of permissive license packages. Hopefully sponsoring CentOS is not just a play to exert influence on the project and retard its progress, but I am willing to give Red Hat the benefit of the doubt here.
> Actually, under the GPL, Red Hat is only obligated to make sources available to its customers.
That is true, but the GPL allows their customers to freely distribute the GPL source they receive. Not saying Red Hat doesn't help or isn't doing good here, but it's not quite as altruistic as you make it out to be. I thank the GPL for that.
Note that I'm a huge fan of Red Hat and have preferred their distribution since the RH 4 days; I don't want to denigrate Red Hat in any way. I'm a huge fan of CentOS as well.
You must also add the condition that your scalars are real numbers. There are many finite dimensional vector spaces that are nothing like Euclidean space: any finite extension of a finite field, for example. This comes from Galois theory, but is not just abstract nonsense. One application to computing is if your scalars are only 0 or 1 and all operations are done mod 2, then you have a framework for doing error correcting codes, among other things. We call this set of scalars either the finite field of size 2 or the Galois field of size 2, aka GF(2).
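To make the GF(2) application concrete, here is a toy sketch (the function names are my own): vector addition over GF(2) is componentwise XOR, and a single parity bit, the simplest error-detecting code, is just a linear map over GF(2).

```python
def add_gf2(u, v):
    """Vector addition over GF(2): componentwise addition mod 2 (XOR)."""
    return [(a + b) % 2 for a, b in zip(u, v)]

def parity_encode(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_check(word):
    """A valid codeword sums to 0 mod 2; any single bit flip breaks it."""
    return sum(word) % 2 == 0

msg = [1, 0, 1, 1]
code = parity_encode(msg)                  # [1, 0, 1, 1, 1]
assert parity_check(code)

flipped = add_gf2(code, [0, 0, 1, 0, 0])   # flip one bit: an error vector
assert not parity_check(flipped)
```

Real error-correcting codes (Hamming, Reed-Solomon, etc.) are the same idea scaled up: linear algebra over finite fields, not over the reals.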
Yes, what I should have said is: if k is an algebraically closed field then n-dimensional vector spaces over k are isomorphic to k^n. Unless I've had too much to drink this evening.
I'm going through the Coursera machine learning class right now, and I have to say that the professor glosses over several details and often makes comments like "if you're not familiar with calculus..." and "if you're not familiar with statistics...", which caught me off guard at first. I really doubt that actual Stanford students enrolled in a machine learning course would be confused by the incredibly basic operations (e.g., taking the partial derivative of a polynomial function) he is using.
Also, there has been no acknowledgement of how contrived the exercises are. For instance: exercise one gives a data set of the profitability of a company's existing stores versus the population of the city in which each store is located (in units of $10,000 and 10,000 people, respectively). The range of the population data is 5-23, with most of it concentrated below 10. We fit a straight line to the data using least squares, then use that line to predict the profitability of two new locations, in cities of populations 35 and 75. I understand that this is an intro course, but there is not a word about how ridiculous this is.
I don't mean to be overly negative. I am enjoying the course, but I am surprised a bit by how basic it is. Let me say that I do like the course's approach to ML, which is to formulate a parameterized cost function and then minimize it by some general method, rather than the typical statistics-course approach of solving ordinary least squares directly, which gives an "exact" solution (given the data) but does not extend to more general models.
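To illustrate the distinction with made-up data: the course's approach writes down a squared-error cost for the line y = a*x + b and minimizes it by gradient descent, while the statistics approach solves OLS in closed form. For a straight line the two converge to the same answer, but only the first framework carries over to other models:

```python
# Made-up data, roughly y = 2x with noise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.2, 5.9, 8.1, 9.8]

def grad(a, b):
    """Gradient of the mean squared error with respect to (a, b)."""
    n = len(xs)
    da = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
    return da, db

# Gradient descent: the course's general-purpose recipe.
a, b = 0.0, 0.0
for _ in range(5000):
    da, db = grad(a, b)
    a -= 0.01 * da
    b -= 0.01 * db

# Closed-form ordinary least squares for comparison.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a_ols = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
b_ols = my - a_ols * mx

# The two methods agree on this problem.
assert abs(a - a_ols) < 1e-3 and abs(b - b_ols) < 1e-3
```

The point is that swapping in a different hypothesis or cost function leaves the gradient-descent loop unchanged, whereas the closed-form solution is specific to linear least squares.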
I know this is foundational material and, overall, I am impressed by the approach of the course, but I would expect more comments on the weaknesses of the naïve methods we are employing at this early stage and how they will eventually be improved. I find it very helpful when professors at least reference more advanced methods or provide references for further reading by the interested student. Admittedly, that is more frequently a feature of graduate courses, but encouraging students to go beyond the material is an important aspect of good teaching. I have watched the videos for several other online courses, and I do appreciate the fact that Coursera is allowing me to hand in assignments for grading, which vastly increases my engagement with the material. This, in fact, is the most valuable resource offered by the program. The lectures themselves are fine, if a bit dry, but a good book or a set of well-prepared notes (not slides) would probably suffice just as well if accompanied by the assignment grader.
All in all, this is great. The more people who know about machine learning (and have access to higher education in general), the better.
I'm pretty rusty on my math, so I guess it happens to hit the sweet spot for me at the moment. Once I get back up to speed on calculus I might feel different about it.
IIRC from a Stanford student's comment on HN, Stanford offers two versions of machine learning, one that is more math focused and a more applied one designed for all majors. The ML course offered through Udacity is the latter one.
It is basically correct. They had ongoing revenue from sales, but also had ongoing costs related to producing those sales, in addition to overhead from salaries, facilities, etc. What you would need to look at is their profits, because that is what adds to capital. Apple was actually losing money at the time (lost $800 million in 1996 and $1 billion in 1997) [1], so the problem may have been worse than the simplistic "three months away from bankruptcy." It is difficult to tell for sure without doing a far more in-depth analysis, but having 90 days of working capital at an unprofitable company makes the "three months away from bankruptcy" assertion quite reasonable.
If you were to offer a self-hosted version that I could install in my own datacenter, that'd be of serious interest to me (my company). Allowing anyone else to host the sort of documents that end up in latex (very sensitive research papers) is simply not an option, but collaboration among researchers would be very convenient on a platform like this if we could control it. Sure, lots of latex is university or public research that ends up getting published anyway, so security is not a concern there. A lot of it, however, comes from industrial research labs where information security is of paramount importance. This seems to be an overlooked market among many people who just want to make web programs. I urge you to consider the possibility that you have many potential customers who are not interested in letting you host their data.
Very cool site, though. Google docs for latex is a very useful offering. Thanks!