On a somewhat unrelated note, if you want to handle very large Bloom filters (billions of entries with low false-positive rates), there is an open-source Java library that can help with that: https://github.com/nixer-io/nixer-spring-plugin/tree/master/....
It is possible to run everything on CPU, even with fastai 1.0. Training, however, can be ~100 times slower than on GPU. Even for toy exercises involving image processing and real deep networks (30-150 layers), that means hours or days of training.
My actual use case some time ago was running test suites for some tricky and convoluted data-wrangling code that, by necessity (it did some funky tiled image loading and segmentation), used fastai dataframes and such. I needed to run those tests locally on CPU to debug them... no training, and not even inference really, just a throwaway micro-training session at the end as a sanity check that data was loaded in a usable way and things didn't break when you tried to train.
But in fastai 1.0 it was all bundled together in one big ball of yarn, with everything ultimately depending on some data-loading classes that depended on the GPU driver etc.
Anyway, the codebase I was working on had really bad architecture and dev practices, and the tested behavior would probably not have matched production 100%... I don't blame fastai much for not helping with a broken workflow, but I prefer more barebones and less opinionated frameworks, i.e. using TF or PyTorch directly, since sometimes you really need to get that "broken" thing running in production before you work on a refactored version of it :P Fastai seems very research-oriented and opinionated.
The second part of the course (https://course.fast.ai/part2) builds stuff bottom-up, starting from matrix multiplication and going all the way up to ResNets. It is a great resource even if you only want to use PyTorch.
Technically, in Java it is possible to create an exception object once and throw it multiple times, making throws much cheaper, since the stack trace is filled in only during the object's construction.
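A sketch of the technique (class and method names are mine, purely illustrative):

```java
// Sketch of reusing one pre-built exception object. The stack trace is
// captured once, at construction time, so later throws skip the expensive
// fillInStackTrace() work -- at the cost of the trace pointing at the
// construction site instead of the throw site.
public class ReusedExceptionDemo {
    // Created once; its stack trace reflects this initializer.
    private static final RuntimeException CACHED =
            new RuntimeException("cached failure");

    static void failFast() {
        throw CACHED; // no new stack trace is recorded here
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            try {
                failFast();
            } catch (RuntimeException e) {
                System.out.println(e == CACHED); // prints "true" each time
            }
        }
    }
}
```

The obvious trade-off: the cached trace is misleading for debugging, and a single shared exception object should be treated as immutable if multiple threads can throw it.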
According to my unscientific tests, it is over 100 times faster. On my laptop I can throw one million exceptions in 15 ms when reusing the same exception object, compared to 2 seconds when creating a new exception object every time with a fairly shallow stack.
When a new exception object is created, the slowest operation is filling in the stack trace information. It is possible to override Exception's fillInStackTrace() with an empty implementation. In that case, throwing new exception objects is only slightly slower than reusing one exception object (17 ms vs 15 ms for 1M throws).
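A minimal version of that override (the class name is illustrative):

```java
// An exception that skips stack-trace capture: fillInStackTrace() is
// overridden to a no-op, so construction is cheap and getStackTrace()
// returns an empty array.
public class StacklessException extends RuntimeException {
    public StacklessException(String message) {
        super(message);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // skip the native stack-walking call
    }

    public static void main(String[] args) {
        StacklessException e = new StacklessException("fast failure");
        System.out.println(e.getStackTrace().length); // prints 0
    }
}
```

Since Java 7 there is also a protected Throwable constructor taking a writableStackTrace flag, which achieves the same thing without the override.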
Deeper stacks make the difference even bigger. Adding 100 nested invocations to the stack slows the classic approach (a new exception object created just before each throw) down to 9 seconds, while the alternative approaches are unaffected.
That is not correct. 15 is the total maximum number of repeats, including the first one. Even the diagram on https://ihateregex.io/expr/username correctly says the loop can be taken between 2 and 14 times.
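To illustrate the counting with a simpler pattern (not the regex from the linked page): in a bounded quantifier like a{1,15}, the 15 is the total number of repetitions, so after the first character matches, the loop back-edge can be taken at most 14 more times.

```java
import java.util.regex.Pattern;

// Bounded quantifiers count total repetitions, including the first one.
public class QuantifierDemo {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("^a{1,15}$");
        System.out.println(p.matcher("a".repeat(15)).matches()); // true: 15 total is allowed
        System.out.println(p.matcher("a".repeat(16)).matches()); // false: 16 exceeds the max
    }
}
```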
Apparently Amazon initially paid authors per e-book downloaded, but some authors abused this model, e.g. by splitting a 300-page book into six 50-page books. It also enabled plain fraud, where fraudsters, paid by unscrupulous authors, created accounts to download large numbers of those authors' books from Kindle Unlimited.
It will be very exciting to learn how life on Mars differs from or resembles life on Earth. It would teach us which building blocks of life might be fundamental, and which are optional or can be "implemented" differently. It might bring us closer to understanding the beginnings of life.
Even if we find that life on Mars shares common roots with life on Earth, it would give us a new perspective on early life forms, plus we would learn that lifeforms can survive interplanetary trips without special protection.
This competition allowed submissions to include extra data files for the model to use. The cheaters added a file with data from another website that seemed innocent but secretly encoded extra information (the perfect answers) in IDs. For 10% of predictions, the code retrieved this information via a set of obfuscated operations and presented it as the answer.