
I've found the Mode Analytics course to be quite useful: https://community.modeanalytics.com/sql/tutorial/introductio...

The HackerRank SQL challenges were also helpful in getting some extra practice: https://www.hackerrank.com/domains/sql/

Finally, this Quora post will also point you to some useful resources and has some great tips that I'm working through now: https://www.quora.com/How-do-I-learn-SQL


This is specific to the codebase I was interviewing candidates for (heavy OO). Not all of these concepts are necessary for a candidate to understand, but a few go a long way towards a great developer experience when applied correctly (two of them are sketched in code after the list):

- SOLID principles

- The Expression Problem

- DRY

- Dependency Injection

- Composition vs Inheritance vs Delegation

- Connascence

- Law of Demeter

- Invariance, Contravariance, and Covariance
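
As a rough illustration of how a couple of these show up in practice, here is a minimal Python sketch of dependency injection combined with composition over inheritance. The names (Notifier, OrderService, place_order) are hypothetical, not taken from any particular codebase:

    from typing import Protocol

    class Notifier(Protocol):
        """The abstraction the service depends on (SOLID's dependency inversion)."""
        def send(self, message: str) -> None: ...

    class EmailNotifier:
        def send(self, message: str) -> None:
            print(f"email: {message}")

    class SmsNotifier:
        def send(self, message: str) -> None:
            print(f"sms: {message}")

    class OrderService:
        """Composes a Notifier (rather than inheriting from a concrete one)
        and receives it via constructor injection."""
        def __init__(self, notifier: Notifier) -> None:
            self._notifier = notifier  # injected dependency, easy to swap in tests

        def place_order(self, order_id: str) -> None:
            # ... business logic would go here ...
            self._notifier.send(f"order {order_id} placed")

    # Swapping implementations requires no change to OrderService:
    OrderService(EmailNotifier()).place_order("42")
    OrderService(SmsNotifier()).place_order("42")

The point isn't the toy example itself but that the service stays open to new notifier implementations (and unit-testable with a fake) without ever being modified.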


For Firefox there is (was?) IETab, which lets you open websites in IE tabs inside Firefox. I think there was even an option to specify that certain websites (e.g. the intranet, certain banks) should always open in IETab.

Haven't used it for years but back when I was a Windows sysadmin it was the final proof that FF was better than IE: in addition to being Firefox it could also be IE : )


As good as that is, see this: http://dlmf.nist.gov/, the online companion to the truly epic NIST Handbook of Mathematical Functions, itself the modern successor to Abramowitz and Stegun.

First of all, let me say that what you all have put together is truly excellent. It covers the sort of things that toy treatments and papers leave out or gloss over, the details you need if you actually want to get something working in the real world.

That said, I strongly disagree with your disagreement. There was a recent paper whose abstract I read that made me think of homotopies between convolutional networks. Unfortunately I lost the paper in some stream and never got to read it properly. In that context, I realized that the search over convnet designs will likely soon be highly automatable, obsoleting much of the work that many DLers are doing now.

What will be future proof is understanding information theory, so that loss functions become less magical. Information theory is also needed to understand which aspects of current approaches to reinforcement learning are likely dead ends (typically to do with getting a good exploration strategy, which is also related to creativity). Concentration of measure is vital to understanding so many of the properties we find in optimization, dimensionality reduction, and learning. Understanding learning stability and the ideal properties of a learner/convergence means being comfortable with concepts like Jacobians and positive semi-definiteness, for a start.
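
As a small sketch of that last point (my example, not from the parent comment): one way positive semi-definiteness becomes concrete is by checking the eigenvalues of a symmetric Hessian approximation.

    import numpy as np

    def is_positive_semidefinite(H, tol=1e-10):
        """A symmetric matrix is PSD iff all of its eigenvalues are >= 0."""
        if not np.allclose(H, H.T):
            return False
        return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

    # The Hessian of a convex quadratic is PSD; flipping the sign breaks it.
    H = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    print(is_positive_semidefinite(H))    # True
    print(is_positive_semidefinite(-H))   # False

At a stationary point, a PSD Hessian is what separates a local minimum from a saddle, which is why the concept keeps reappearing in discussions of optimization and learning stability.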

Probability theory is needed for the newer variational methods, whether in the context of autoencoders or a library like Edward (the likes of which I think are the future). Functionals and the calculus of variations are becoming more important, both in deep learning and for understanding the brain. There's lots of work on the game theory of dynamical systems (think evolutionary game theory) that can help contextualize GANs as a special case of a broader category of strategies.
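
To make the variational point a bit more concrete, here is a hedged sketch (assuming a standard diagonal-Gaussian VAE, nothing specific to Edward) of the negative ELBO as a reconstruction term plus a closed-form KL term:

    import numpy as np

    def gaussian_kl_to_standard_normal(mu, log_var):
        """KL( N(mu, diag(exp(log_var))) || N(0, I) ), the regularizer in the ELBO."""
        return float(-0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var)))

    def negative_elbo(x, x_reconstructed, mu, log_var):
        """Negative ELBO = reconstruction error (squared error here) + KL divergence."""
        reconstruction = float(np.sum((x - x_reconstructed) ** 2))
        return reconstruction + gaussian_kl_to_standard_normal(mu, log_var)

Minimizing this trades off reconstruction quality against keeping the approximate posterior close to the prior, which is where the probability theory earns its keep.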

Much to the contrary, the topics I mentioned are both the future of deep learning and future proof in general. This blog post by Ferenc captures my sentiment on the matter: http://www.inference.vc/deep-learning-is-easy/


For those who work inside Google, it's well worth it to look at Jeff & Sanjay's commit history and code review dashboard. They aren't actually all that much more productive in terms of code written than a decent SWE3 who knows his codebase.

The reason they have a reputation as rockstars is that they can apply this productivity to things that really matter; they're able to pick out the really important parts of the problem and focus their efforts there, so that the end result is much more impactful than what the SWE3 wrote. The SWE3 may spend his time writing a bunch of unit tests that catch bugs that wouldn't really have happened anyway, or migrating from one system to another that isn't really a large improvement, or going down an architectural dead end that'll just have to be rewritten later. Jeff or Sanjay (or any of the other folks operating at that level) will spend their time running a proposed API by clients to ensure it meets their needs, or measuring the performance of subsystems so they fully understand their building blocks, or mentally simulating the operation of the system before building it so they can rapidly test out alternatives. They don't actually write more code than a junior developer (oftentimes they write less), but the code they do write gives them more information, which helps ensure they write the right code.

I feel like this point needs to be stressed a whole lot more than it is, as there's a whole mythology that's grown up around 10x developers that's not all that helpful. In particular, people need to realize that these developers rapidly become 1x developers (or worse) if you don't let them make their own architectural choices: the reason they're excellent in the first place is that they know how to determine whether certain work is going to be useless and avoid doing it. If you dictate that they do it anyway, they're going to be just as slow as any other developer.

