I've seen this argument elsewhere, so I think I know what he means. The idea is that if you throw money around too easily, companies that would otherwise die (due to poor management or an inability to compete) stick around. Too much subsidy means weak incentives to compete, which means fewer breakthroughs and slower technological progress.
All a publication indicates is that a white/grey hat researcher has discovered the vulnerability. There is no way to know if or how many times the same flaw has been exploited by less scrupulous parties in the interim.
And information leak exploits are less likely to be detected than arbitrary code execution. If somebody is exploiting a buffer overflow, they need to get it exactly right, or they'll probably crash the process, which can be logged and noticed. The only sign of somebody attempting Downfall or similar attacks is increased CPU use, which has many benign causes.
Since it is in a class of other well known vulnerabilities, I'm going to assume that there has been quite a bit of active research by state-operated and state-sponsored labs. I think it's more likely than not that this has been exploited.
Earlier this week I was reading about Hassan-i Sabbah and stumbled upon an interesting, relevant factoid.
Hassan-i Sabbah's militant group was known for taking drugs (hashish) as well as for the targeted murder (assassination) of key figures during the Crusades. So they were referred to as Hashishins (Arabic for "hashish-smokers"), which eventually became "assassins", and that's where the term comes from.
As explained in the etymology section of the dedicated Wikipedia page, the name of the sect was "Asāsiyyūn" (meaning something like "men of principle"). The name Hashishin ("hashish smokers") was a derogatory misnomer used by their enemies.
The whole story, and the legends surrounding it, captured the imagination of modern scholars with an orientalist bias.
Among today's scholars the question is settled: while the sect and the assassinations are historical facts, they are surrounded by lots of myths.
Before people get too ahead of themselves - the CAR-T (Chimeric Antigen Receptor T-cell) technology used here is super hot right now and works by reprogramming immune cells (by genetically engineering a new recognition domain into them) so that they recognize and kill cancer cells. All great; the problem is that most cancers don't have a single "hey, look, I'm cancer" biomarker - except blood cancers. This therapy works because the cancer they're targeting is immune cell cancer, and immune cells DO have very specific receptors on their surface that allow this therapy to be extremely targeted (and extremely effective).
Not to discount the work, which is amazing and surely great news for sufferers of myeloma, but when you see mention of technology like this, don't get TOO excited that we're on the cusp of a cure for cancer UNTIL you see it happening in various solid tumors.
As a naive layperson, I'd think we can't be very far from getting a sample of cancer from a person, doing a diff against a regular cell, and then creating a specific vector carrying whatever can kill those particular cells. How far are we really?
1. To be truly safe, you can't just sample one healthy cell; you have to sample lots, from the many different types and subtypes everywhere in your body. Imagine training a swarm of robots to attack anybody wearing a mask and wielding a knife... and then they enter a hospital surgical ward.
2. Sometimes the problem with cancers is that they are behaviorally wrong in ways that aren't easily expressed at the cell boundary. (Your body has MHC-I proteins that try to offer a debug window into what the cell has been doing, but that's its own complicated topic.)
That's the right approach. But, as the other commenter pointed out, there are a number of complications, which mostly center on the inability to distinguish cancer from self. Many of the mutations that make cells cancerous are not expressed at the cell surface, and the only way to probe the interior is through MHC molecules, which act as little windows into the cell. More specifically, during normal metabolism the cell chops up the proteins it's making and displays the fragments on its surface for immune surveillance. The immune system can use these because it's been trained against EVERYTHING SELF (in the thymus), so when something goes wrong in a cell and surface expression looks different, it can usually recognize this.
Of course, by the time cancer has developed, something has gone wrong in this normal process in any number of ways.
Even if the challenge were just "recognize an MHC-presented cancer antigen", in therapy you have to a) determine what section of the mutated protein would be displayed in MHC, b) develop an antibody (or antibody-like molecule) that recognizes specifically that fragment (and not the non-mutated version, which will be present in healthy cells), then c) either genetically engineer the patient's immune cells to recognize it, or hijack the recognition process in some other manner.
Each of these steps is very very difficult and we're only just developing the computational and experimental tools to do these for any patient, let alone every cancer patient.
I won't go into it, but you also have to think about cancer as a living organism subject to evolution: if you don't hit hard and fast to wipe it out all at once, you select for mutations that evade your treatment. This is made especially challenging because cancer usually has some mechanism gone awry that leads to increased growth and mutation rates, so it's even MORE likely to evade your treatments than a generic cell. Think of it kind of like antibiotic resistance in bacteria.
As another layperson who only has a college freshman-level understanding of Biology 101, I don't think that's going to be simple.
Immune cells look for -expressions- on the surface of a cell to tell them whether a cell is wonky or not. Typically, these are proteins. These proteins are encoded by DNA, yes, but it's not going to be as simple as diffing the DNA between a regular cell and a cancerous cell because a simple diff like that won't tell you what will get expressed as a key cancer cell surface protein.
DNA gets interpreted as mRNA which then acts as instructions to build long strands of amino acids. These amino acid chains then fold (in hard to calculate ways) into proteins. There's a whole set of other machinery in the cell that regulates how those proteins behave once they're constructed.
TLDR: there are multiple compilation steps to go from DNA to protein, and then a whole host of runtime monkeypatching to get proteins from A to B.
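The "compilation steps" above can be sketched as a toy Python model, using transcription and translation only, with a made-up five-codon gene and a deliberately tiny codon table. It also shows why a raw DNA diff can be misleading: a silent mutation changes the DNA but not the protein. Folding and regulation, the steps no simple program captures, are entirely ignored.

```python
# Toy sketch of DNA -> mRNA -> protein. Real biology adds splicing,
# folding, and post-translational modification; none of that is here.

def transcribe(dna: str) -> str:
    # Transcription simplified to a substitution: mRNA uses uracil (U)
    # where DNA uses thymine (T).
    return dna.upper().replace("T", "U")

# A tiny slice of the standard genetic code (codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGC": "Gly", "GGA": "Gly", "UGC": "Cys",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna) - 2, 3):        # read 3-letter codons
        aa = CODON_TABLE.get(mrna[i:i+3], "?")  # "?" = not in toy table
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

gene = "ATGTTTGGCTGCTAA"              # made-up gene: Met-Phe-Gly-Cys-Stop
print(translate(transcribe(gene)))    # ['Met', 'Phe', 'Gly', 'Cys']

# A DNA diff that doesn't matter: GGC -> GGA both encode glycine,
# so the "diff" is invisible at the protein level.
mutant = "ATGTTTGGATGCTAA"
assert translate(transcribe(mutant)) == translate(transcribe(gene))
```

This is exactly the commenter's point: two cells can differ in DNA yet present identical proteins, and (via folding and regulation) cells with identical coding DNA can behave very differently.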
One of my first thoughts when GPT-3 came out was that "curated gardens" of quality data (Wikipedia, SO) were going to become immensely more valuable because of this problem. If you pollute the source of training data, it eventually becomes worthless for training better models in the future.
I'm no expert, but probably because coding on Windows is a PITA, consumer Linux is unreliable (it relies almost entirely on open source, which is a great thing, but it doesn't have the same driving force), and Mac runs a Unix operating system.