Thanks! Still working on it, but it will embed the ELMo model in the .magnitude file, wrap the ELMo model interface code so that it is standardized with Magnitude's features (querying, concatenating, iterating, etc.) and will use ELMo's OOV method natively.
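To give a flavor of what a standardized interface with deterministic OOV handling can look like, here's a toy sketch — the `ToyMagnitude` class and the hash-seeded OOV scheme are illustrative stand-ins, not Magnitude's actual code:

```python
import hashlib

import numpy as np


class ToyMagnitude:
    """Illustrative stand-in for a Magnitude-style vector store:
    in-vocabulary lookups plus deterministic vectors for OOV words."""

    def __init__(self, table, dim):
        self.table = table  # word -> np.ndarray
        self.dim = dim

    def query(self, word):
        if word in self.table:
            return self.table[word]
        # Deterministic pseudo-random OOV vector: seed an RNG with a
        # hash of the word, so the same OOV word always maps to the
        # same vector (the general spirit of Magnitude's OOV handling,
        # simplified here).
        seed = int(hashlib.md5(word.encode()).hexdigest(), 16) % (2 ** 32)
        rng = np.random.RandomState(seed)
        return rng.uniform(-1, 1, self.dim)


vectors = ToyMagnitude({"cat": np.ones(4)}, dim=4)
in_vocab = vectors.query("cat")     # exact stored vector
oov = vectors.query("zzyzx")        # stable made-up vector
```

The nice property is that OOV queries are repeatable across sessions without storing anything extra, which is what makes a key-value file format workable for words it has never seen.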
fast.ai is probably the best MOOC I've ever followed. As its name says, it is "for coders" and 100% applied. It is perfect for getting started, quickly.
Please read this blog post, in particular regarding the comment on "Hacker News contributors regularly give such awful advice on machine learning".
I don't think (some) people understand; a slick data annotation tool like this is vastly more useful than the 20th GAN variant that DeepMind produces :)
Totally, I think people have this weird sense of entitlement when it comes to high-quality datasets without the commensurate respect for how they're created or the level of effort that goes into them.
Fei-Fei Li gives a good sense for this in her history of ImageNet [1][2].
WriteLab | ML Engineer | Berkeley, CA | ONSITE, SALARY: 100K-130K
We at WriteLab (writelab.com) are building ML tools to give immediate writing feedback to students and English language learners. There is plenty of room to impact the product by designing and implementing new features, usually starting with data collection. We use all the good stuff in deep learning and NLP, including spaCy, scikit-learn, and TensorFlow.
A strong background in machine learning and experience deploying ML models in production are a must. NLP and DL experience is a strong plus.
Interview process:
- initial video call with an NLP engineer
- onsite interview to discuss previous experience and work through an NLP/ML problem
- lunch with the CEO
How many people do you have on your ML team so far? I'm doing ML research (NLP for determining writing quality and similarity, amusingly) for my company, and it's getting a bit lonely.
We have
- 1 ML/NLP engineer (me)
- 1 CEO w/ a linguistics background
- 1 founder who is an English professor at Berkeley
- 2 with previous experience teaching English
- 1 Berkeley PhD in Deep Learning / NLP advising us
- 1 Berkeley PhD in English helping us categorize writing issues
We're working on assessing writing quality too. Get in touch!
Hi jo_
I'd love to talk with you about Alexa's NLP and ML research team in Cambridge, MA. Send me a note at ebbounty@amazon.com! We have a robust team of senior and principal engineers and scientists to learn from.
I think spaCy uses a perceptron (essentially a shallow neural network), so it should be faster. Accuracy is pretty similar to SyntaxNet's, at least on the training data, but I'm guessing SyntaxNet works better on long-range dependencies.
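For anyone curious what such a shallow model looks like, here's a toy averaged perceptron for binary classification — a common flavor of this kind of linear model, illustrative only and not spaCy's code:

```python
import numpy as np


def averaged_perceptron(X, y, epochs=10):
    """Train a binary averaged perceptron (labels in {-1, +1}).
    Averaging the weights over all updates is what makes the shallow
    model surprisingly accurate while staying very fast at test time."""
    w = np.zeros(X.shape[1])
    w_sum = np.zeros_like(w)  # running sum of weights for averaging
    steps = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # mistake-driven update
                w = w + yi * xi
            w_sum += w
            steps += 1
    return w_sum / steps  # averaged weights


# Linearly separable toy data: the label is the sign of feature 0.
X = np.array([[1.0, 0.2], [2.0, -0.1], [-1.5, 0.3], [-0.5, -0.4]])
y = np.array([1, 1, -1, -1])
w = averaged_perceptron(X, y)
preds = np.sign(X @ w)
```

Prediction is a single dot product, which is why a model like this can be so much faster than a deep parser.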
The current update uses the linear model. I've also been working on neural network models, and more generally, better integration into deep learning workflows. That'll be the 2.0 release.
I've learned a lot while doing the neural network models, though. The 1.7 model takes advantage of this by having a more sophisticated optimizer. Specifically, I use an online L1 penalty and the Adam optimizer with averaged parameters. The L1 penalty allows control of size/accuracy trade-off.
This means we're finally shipping a small model: 50MB in total, compared to the current 1GB. The small model makes about 15-20% more errors.
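A toy sketch of that optimizer recipe — Adam updates, a proximal (soft-thresholding) L1 step, and a running parameter average — applied to a simple quadratic loss; the function names and toy loss are mine, not spaCy's code:

```python
import numpy as np


def adam_l1_averaged(grad_fn, w0, steps=1000, lr=0.01, l1=0.001,
                     beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam with an online L1 penalty applied as a soft-thresholding
    (proximal) step, plus a running average of the parameters. The L1
    step drives small weights to exactly zero, which is what lets you
    trade a little accuracy for a much smaller model."""
    w = w0.copy()
    m = np.zeros_like(w)      # first-moment estimate
    v = np.zeros_like(w)      # second-moment estimate
    w_avg = np.zeros_like(w)  # running mean of parameters
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias correction
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        # Proximal L1: shrink toward zero, clipping at exactly zero.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
        w_avg += (w - w_avg) / t
    return w, w_avg


# Toy loss ||w - target||^2 with a sparse target: the zero coordinates
# should stay exactly zero thanks to the proximal step.
target = np.array([1.0, 0.0, -2.0, 0.0])
grad = lambda w: 2 * (w - target)
w, w_avg = adam_l1_averaged(grad, np.zeros(4))
```

The key point is that gradient-descent-style updates alone leave weights merely *small*; the thresholding step makes them *exactly* zero, so they can be dropped from the serialized model.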
I don't understand the math completely, but it looks like dropout can be derived from a Gaussian prior (approximating the Bernoulli) in a Bayesian context.
One useful tidbit is that you can get prediction intervals from a deep learning model by running it forward N times with dropout enabled and taking the mean and variance of that distribution (plus another precision term).
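A minimal sketch of that MC-dropout trick using a toy random network — the weights and shapes are made up, and the precision term is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed "network": one hidden tanh layer. In practice this
# would be a trained model with dropout left on at inference time.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))


def predict_with_dropout(x, p=0.5):
    """One stochastic forward pass: drop each hidden unit with prob p."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) >= p  # keep a unit with prob 1 - p
    h = h * mask / (1 - p)           # inverted-dropout scaling
    return (h @ W2).item()


def mc_dropout_interval(x, n=200):
    """Run the model forward n times with dropout active; the sample
    mean is the prediction and the sample std is an uncertainty
    estimate (the Bayesian derivation adds a precision term on top)."""
    samples = np.array([predict_with_dropout(x) for _ in range(n)])
    return samples.mean(), samples.std()


mean, std = mc_dropout_interval(np.array([[0.5]]))
```

Each pass samples a different thinned sub-network, so the spread of the N predictions reflects the model's uncertainty at that input.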
I actually like the keyboard. I tend to glide over the keyboard to find the right keys, minimizing wrist movement. The combination of feel, sound, and flatness is somehow very satisfying.
But! The butterfly switches are stiffer on the smaller keys, so ironically(?) the Fn keys and arrow keys are hard to find and press. I do actually use the arrow keys, so I find myself reaching for the Ctrl shortcuts more often to move the cursor around.
I feel like a touch screen on the trackpad would've made more sense. Since it's so large, there's room to be creative with on-screen shortcuts, dragging sliders, choosing an emoji :), etc. You could still keep all the keyboard shortcuts you need and wouldn't have to look at 3 things at once (screen, keyboard/trackpad, touch bar).
How would ELMo work if a neural network needs to be run?