I mean, yeah, some absurd fraction of NYC (25%? 50%?) got it before we knew how to treat it.
But it's less than a tenth of the total population loss, so it's probably not worth trying to figure out whether the actual toll was higher than the official death toll (maybe by a factor of two?), or whether some of those lost in the first wave would have died of other causes by now.
For context, there are only around 20 humans above 3200 rating in the world. During the contest, there were only 21 successful submissions from 25k participants for that problem.
It doesn't code like a human, so you would expect it to be better at some kinds of tasks. It brute forces the problems by generating a million solutions and then tries to trim that down; a few problems might be vulnerable to that style of approach.
Are you sure? "brute forces the problems by generating a million solutions and then tries to trim that down" isn't how I would describe the way an LLM works.
The original AlphaCode paper (published in Science) explains the approach: they generate many potential solutions with the LLM and do a lot of post-processing to select candidates. Here's where the probabilistic nature of LLMs hurts, I think.
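For what it's worth, the pipeline the paper describes is roughly: sample a huge pool of candidate programs, filter them against the public example tests, cluster the survivors by their behaviour on extra generated inputs, and submit one representative per large cluster. A minimal sketch of that idea (the callables `sample_program`, `run_program` and `generate_inputs` are hypothetical stand-ins, not DeepMind's actual API):

```python
# Sketch of the AlphaCode-style generate-and-filter pipeline.
# All the callables passed in are hypothetical stand-ins.

def select_submissions(statement, example_tests, sample_program, run_program,
                       generate_inputs, n_samples=100_000, n_submissions=10):
    # 1. Sample a very large pool of candidate programs from the model.
    candidates = [sample_program(statement) for _ in range(n_samples)]

    # 2. Filter: keep only candidates that pass the public example tests.
    passing = [c for c in candidates
               if all(run_program(c, x) == y for x, y in example_tests)]

    # 3. Cluster survivors by their behaviour on extra generated inputs,
    #    so semantically equivalent programs land in the same bucket.
    extra_inputs = generate_inputs(statement)
    clusters = {}
    for c in passing:
        key = tuple(run_program(c, x) for x in extra_inputs)
        clusters.setdefault(key, []).append(c)

    # 4. Submit one representative from each of the largest clusters.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ranked[:n_submissions]]
```

The LLM does the generating; the filtering and clustering on top is where the "brute force" framing comes from.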
When people talk about existential risk fears, this is without a doubt my fear. All it would take is a small group that is ethnically supremacist and technically capable. How many edge-lords joke that the problem with the world is that there are too many people? Imagine you could drastically reduce overall population while maintaining your own ethnicity. And it could be done slowly, a harsh flu now and again that affects the general population at a much higher percentage while sparing your own kin. It's terrifying to think this could be justified in the minds of some.
Though that's a chilling thought, I am more worried about someone trying to produce such a bioweapon, but just doing an amateur job and killing us all with their buggy alpha release.
Including in 2022 with the self-created programming language the post is about, which is just amazing, and lives somewhere in my head near FlaSh's 2020 decision to switch to playing pro StarCraft Brood War tournaments as the Random race -- requiring him to become world-class at three races (nine race matchups) while his opponents only have to be world-class at one race (three race matchups). FlaSh came third in the largest tournament that year.
From the post:
> I think I predicted that requiring myself to use only Noulith on Advent of Code would make my median leaderboard performance better but my worst-case and average performances significantly worse. I don’t think my median performance improved, but my worst-case performance definitely got worse. Somehow it still didn’t matter and I placed top of the leaderboard anyway. (I will note that 2021’s second to fourth place all didn’t do 2022.)
It seems crazy that 2nd to 4th in 2021 didn't do 2022 at all! It's an annual ritual for me, I couldn't imagine being so heavily into it one year and then not competing at all the next. Was there a reason?
I heard something about a large competitive programming tournament (i.e. a commercial one) happening around the same time as AoC; I think that was the reason for at least one person.
(I also don't think it's unimaginable; lives change, and everyone's going to have a point where they played one year and not the next: most obviously illness, but also life changes like marriage, kids, a stressful new job, etc.)
All it takes is someone convincing you to take a travel vacation around Christmas so that you are too busy to be competitive. (And maybe you don't bother to play if you can't be competitive at it.) Nothing LIVES CHANGE-sized has to happen...
PyPy is pretty well stress-tested by the competitive programming community.
https://codeforces.com/contests has around 20-30k participants per contest, with contests happening roughly twice a week. I would say around 10% of them use python, with the vast majority choosing pypy over cpython.
I would guesstimate at least 100k lines of pypy is written per week just from these contests. This covers virtually every textbook algorithm you can think of, and submissions are automatically graded for correctness/speed/memory. Note that there's no special time multiplier for choosing a slower language, so if you're not within 2x the speed of the equivalent C++, your solution won't pass! (hence the popularity of pypy over cpython)
An update since that previous comment: there's now a Legendary Grandmaster (Elo rating > 3000, ranked 33rd out of hundreds of thousands) who almost exclusively uses pypy: https://codeforces.com/submissions/conqueror_of_tourist
PyPy is easily 10x faster than CPython at numeric stuff, which is 99% of these contest problems.
For example, using CPython, if you try to make an array of a million ints, you won't get an `int[1000000]`-style memory layout. Each int is actually a full heap object, which is huge and inefficient to reference (something like 24+ bytes each, on top of the 8-byte pointer the list stores).
PyPy, on the other hand, works as expected.
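You can see the overhead the parent comment is describing directly (numbers below are approximate and assume a 64-bit CPython 3.x):

```python
# Rough numbers, 64-bit CPython 3.x.
import sys
from array import array

xs = list(range(1_000_000))

print(sys.getsizeof(1))    # ~28 bytes: even a small int is a full heap object
print(sys.getsizeof(xs))   # ~8 MB: and that's only the list's array of pointers
# The real footprint is the pointers *plus* the boxed int objects they point to,
# several times what a C-style int[1000000] would occupy.

print(sys.getsizeof(array('q', xs)))  # ~8 MB total: unboxed 64-bit ints,
                                      # roughly the layout you'd expect from C
```

(PyPy's list strategies store homogeneous ints unboxed automatically, which is a big part of why it behaves "as expected" here.)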
I think the more important point is that Python written like C code, run on PyPy, can actually get within 2x of the performance of the equivalent C code. If it were any slower, Python wouldn't be a viable language in competitive programming at all.
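To give a flavour of what "written like C code" means in practice, here's a sketch in the typical contest style (flat arrays, plain integer loops, batch I/O); the within-2x-of-C figure is the commenter's rule of thumb, not a benchmark result:

```python
# Illustrative sketch of C-style Python, not a benchmark.
import sys

def sieve(limit):
    # bytearray gives one unboxed byte per entry, much like a C char array;
    # PyPy's JIT turns these plain loops into tight machine code.
    is_composite = bytearray(limit + 1)
    primes = []
    for i in range(2, limit + 1):
        if not is_composite[i]:
            primes.append(i)
            for j in range(i * i, limit + 1, i):
                is_composite[j] = 1
    return primes

def main():
    # Contest-style fast I/O: read the whole input at once as bytes.
    data = sys.stdin.buffer.read().split()
    n = int(data[0]) if data else 10**6
    print(len(sieve(n)))

main()
```

No numpy and no vectorisation tricks: just the kind of loop-heavy code a C programmer would write, which is exactly what PyPy's JIT handles well.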
(CPython is sometimes still used on other platforms like atcoder.jp, but only because they allow third party libraries like numba and numpy which can fill the same role pypy does)
Right now, other than a handful of people who figured out how to make numba's jit work, only pypy is viable for competitive programming. I wonder if you can do better than pypy?
There are also a few red coders on codeforces.com who mostly use pypy (cpython is completely unviable there because numpy and numba are not installed)
While we could certainly go in this direction, we're not planning to, because in our experience optimizations for different workloads are largely distinct, and this use case is already handled well by PyPy.
Isn't this use case the scientific computing use case? That's a fairly large part of the ecosystem to give up on!
I still think it's a relatively low-effort way (you'd just need to write a scraper) to create a benchmark over a diverse set of algorithmic tasks with clear-cut AC/TLE/WA (accepted / time limit exceeded / wrong answer) criteria. PyPy is often 10x faster than cpython on these problems (and only about 2x slower than the equivalent C++ solution), so it would make for a much nicer headline too if you could achieve similar performance!
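For concreteness, the harness side would only be a few dozen lines; a minimal sketch (the directory layout, file extensions, and the plain token-by-token output check are my assumptions; real judges often use custom checkers):

```python
# Minimal sketch of an AC/TLE/WA harness for scraped problems.
# Assumes tests/ contains foo.in / foo.out pairs (a made-up layout).
import subprocess
from pathlib import Path

def judge(interpreter, solution, test_dir, time_limit=2.0):
    verdicts = []
    for input_file in sorted(Path(test_dir).glob("*.in")):
        expected = input_file.with_suffix(".out").read_text().split()
        with input_file.open("rb") as stdin:
            try:
                run = subprocess.run([interpreter, str(solution)],
                                     stdin=stdin, capture_output=True,
                                     timeout=time_limit)
            except subprocess.TimeoutExpired:
                verdicts.append("TLE")
                continue
        got = run.stdout.decode().split()
        verdicts.append("AC" if got == expected else "WA")
    return verdicts

# e.g. compare interpreters on the same submission:
# judge("pypy3", "sol.py", "tests/") vs judge("python3", "sol.py", "tests/")
```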
Though I can also see how it could be completely irrelevant for server workloads. PyPy's unicode handling is so slow that some people on codeforces still use pypy2 over pypy3 just to avoid it. And C extension support is so bad on PyPy that you can often get better performance on CPython if you need to use numpy.
This is just a comment on my personal use of Python for competitive programming: I've never used numpy for competitive programming or thought that it would be a good tool for that. PyPy seems like a great solution for the highly-numerical algorithms that these contests tend to lead to.
So I would not call this "scientific computing". Personally I consider competitive programming to be its own use case.
And as much as we want to improve scientific computing in Python, it's very hard since the work is done in C. Our current hope is to help mixed workloads, such as doing a decent amount of data-preprocessing in Python before handing off to C code.