To be honest, there are two ways to solve the problem of xkcd 2347: either put effort into the very small library, or just stop depending on it. Both solutions are fine by me, and Google apparently just chose the latter here.
If not depending on a library is an option, then you don't really have an xkcd 2347 problem. The entire point of that comic is that some undermaintained dependencies are critical, without reasonable alternatives.
If being used in a CTF counts, then running the latest Docker with no extra privileges and a non-root user on a reasonably up-to-date kernel meets the definition of secure, I think. At least from what I have seen, this kind of infrastructure is pretty common in CTF.
For Python specifically, the uuid4 function does use randomness from os.urandom, which is supposed to be cryptographically secure on most platforms.
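As a sanity check, CPython's uuid4 is essentially a one-liner over os.urandom; the sketch below is equivalent to just calling uuid.uuid4():

```python
import os
import uuid

# 16 bytes from the OS CSPRNG, with the version/variant bits
# set by the UUID constructor; this is what uuid4() boils down to.
u = uuid.UUID(bytes=os.urandom(16), version=4)

print(u)             # e.g. 1b4e28ba-2fa1-4d3b-...
print(uuid.uuid4())  # the stdlib call itself
```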
I think the problem is that some local servers are not really designed to be as secure as a public server. For example, a local server might have a stupid unauthenticated endpoint like "GET /exec?cmd=rm+-rf+/*", which is obviously exploitable, and same-origin does not prevent that: the browser will still send the cross-origin request, it only stops the page from reading the response.
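To make that concrete, here is a deliberately terrible stdlib-only sketch of such a server (the endpoint and port are made up for illustration). Any webpage you happen to visit can fire this request with an `<img>` tag or a no-cors fetch; same-origin policy blocks reading the response, not sending the request:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class ExecHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No auth and no origin check: any browser tab that can
        # reach 127.0.0.1:8000 can trigger this handler.
        url = urlparse(self.path)
        if url.path == "/exec":
            cmd = parse_qs(url.query).get("cmd", [""])[0]
            out = subprocess.run(cmd, shell=True, capture_output=True)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(out.stdout)
        else:
            self.send_response(404)
            self.end_headers()

HTTPServer(("127.0.0.1", 8000), ExecHandler).serve_forever()
```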
Yes. Passkeys help with the bad password problem. That’s a big deal but doesn’t magically solve everything.
To address other security risks more comprehensively, you need to have a tight issuance process and use something key-based in hardware. I'm working on a project where we deploy YubiKeys or similar, with an audit trail of which key is used by whom.
High-trust environments need things like enterprise attestation and a solid issuance process to meet the control needs. Back in the day, the NIST standards required a chain-of-custody log for the token - you could only use in-person delivery or registered mail to send them.
That’s overkill, but the point is the technology is only one part of the solution for these problems.
Within the larger spec, you can whitelist a set of known devices, such as only allowing YubiKeys, which would prevent the private key material from getting into your password manager.
You can, but the server can require a device attestation during registration, proving that you're actually using a YubiKey or whatever. That isn't possible with TOTP.
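Roughly, the relying party does something like this after verifying the attestation statement with a WebAuthn library (the AAGUIDs below are placeholders, not real values; vendors like Yubico publish the actual AAGUIDs for each model):

```python
from uuid import UUID

# Placeholder AAGUIDs for illustration only; look the real ones up
# in the vendor's metadata or the FIDO Metadata Service.
ALLOWED_AAGUIDS = {
    UUID("00000000-0000-0000-0000-000000000001"),  # e.g. YubiKey 5 series
    UUID("00000000-0000-0000-0000-000000000002"),  # e.g. YubiKey Bio
}

def check_authenticator(aaguid: UUID) -> None:
    """Reject registration unless the attested AAGUID is allowlisted.

    This is only meaningful if the attestation signature was verified
    against the vendor's CA first; otherwise a software authenticator
    could claim any AAGUID it likes.
    """
    if aaguid not in ALLOWED_AAGUIDS:
        raise PermissionError(f"authenticator {aaguid} is not allowed")
```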
It doesn't need it if this vulnerability is the only one you're worried about (remote websites), but it'd be nice to have it before letting it use e.g. your GitHub account. This is how VS Code extensions work, for example, and it's pretty nice.
I think many people are just not really good at dealing with "imperfect" tools. Different tools have different success probabilities; let's call that probability p here. People typically use tools with p = 100%, or at least very close to it. But an LLM is a tool that is far from that, so making use of it takes a different approach.
Imagine a probabilistic oracle that answers any yes/no question correctly with probability p. If p = 100% or p = 0%, then it is obviously very useful (at p = 0% you just flip every answer). If p = 50%, it is absolutely worthless, no better than a coin toss. In all other cases, the oracle can be utilized in different ways to get the answer we want, for example by asking repeatedly and taking a majority vote (see the sketch below), so it is still a useful thing.
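A toy simulation of that majority-vote trick; the specific numbers are arbitrary, the point is that any fixed p above 50% can be amplified by repetition:

```python
import random

def oracle(truth: bool, p: float) -> bool:
    """Answer correctly with probability p."""
    return truth if random.random() < p else not truth

def majority_vote(truth: bool, p: float, n: int) -> bool:
    """Ask the oracle n times (n odd) and return the majority answer."""
    yes_votes = sum(oracle(truth, p) for _ in range(n))
    return yes_votes > n // 2

# A single p = 0.7 query is right 70% of the time; 51 queries with
# majority vote are right ~99.9% of the time.
trials = 10_000
hits = sum(majority_vote(True, 0.7, 51) for _ in range(trials))
print(hits / trials)  # close to 1.0
```

The catch with an LLM is that its errors are not independent coin flips, so naive repetition buys you less than this suggests, but the general principle of cross-checking an unreliable source still applies.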
One of the magic things about engineering is that I can make usefulness out of unreliability. Voltage can fluctuate and I can transmit 1s and 0s, lines can fizz, machines can die, and I can reliably send video from one end to the other.
Unreliability is something we live in. It is the world. Controlling error, increasing signal over noise, extracting energy from the fluctuations. This is life, man. This is what we are.
I can use LLMs very effectively. I can use search engines very effectively. I can use computers.
Many others can’t. Imagine the sheer fortune to be born in the era where I was meant to be: tools transformative and powerful in my hands; useless in others’.
Your point reminded me of Terence Tao's point that AI has a "plausibility problem": even when it can't be accurate, it still disguises itself as accurate.
Its true success rate is by no means 100%, and is sometimes 0%, but it always tries to make you feel confident.
I've had to catch myself surrendering too much judgment to it. I worry a high school kid learning to write will have fewer qualms about surrendering judgment.
A scientific instrument that is only intermittently accurate is useless. Imagine a kitchen scale that was off by +/- 50% every 3rd time you used it. Or maybe every 5th time. Or every 2nd.
So we're trying to use tools like this to help solve deeper problems, and they aren't up to the task. We're still at the point where we need to start over and build better tools. Sharpening a bronze knife will never make it as sharp, or keep its edge as long, as a steel knife. Same basic elements, very different material.
A bad analogy doesn't make a good argument. The best analogy for LLMs is probably a librarian on LSD in a giant library. They will point you in a direction if you have a question. Sometimes they will pull up the exact page you need, sometimes they will lead you somewhere completely wrong and confidently hand you a fantasy novel, trying to convince you it's a real science book.
Their usefulness comes down entirely to your ability both to find what you need without them and to verify the information they give you. If you put that on a matrix, this makes them useful in the quadrant of information that is hard to find but easy to verify, which at least in my daily work covers a reasonable amount.
I really wonder how one can escape a container given a root shell created by `docker run --rm -it alpine:3 sh`, without using a 0day. Using the latest Docker and a reasonably up-to-date Linux kernel, of course.
With the command above it is still possible to attack network targets, but let's just ignore that here. I just wonder how it is possible to obtain code execution outside the namespace without using kernel bugs.
Couldn't screen readers apply Unicode normalization based on some heuristics, like detecting a continuous run of those special bold/italic characters? To improve accuracy, they could even check whether the normalized text resembles English words or phrases.
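Something like this could be a starting point: NFKC normalization already maps the Mathematical Alphanumeric Symbols back to plain letters, so the heuristic mostly reduces to spotting runs of characters from that block (the 50% threshold below is an arbitrary guess, just to illustrate):

```python
import unicodedata

# The Mathematical Alphanumeric Symbols block (U+1D400..U+1D7FF),
# where the fake bold/italic letters live.
MATH_START, MATH_END = 0x1D400, 0x1D7FF

def looks_styled(s: str, threshold: float = 0.5) -> bool:
    """Heuristic: a large fraction of the letters are from the math block."""
    letters = [c for c in s if c.isalpha()]
    if not letters:
        return False
    styled = sum(MATH_START <= ord(c) <= MATH_END for c in letters)
    return styled / len(letters) >= threshold

def normalize_for_speech(s: str) -> str:
    # NFKC folds the styled letters into their plain equivalents,
    # which a screen reader can then pronounce as ordinary words.
    return unicodedata.normalize("NFKC", s) if looks_styled(s) else s

print(normalize_for_speech("𝐇𝐞𝐥𝐥𝐨 𝘸𝘰𝘳𝘭𝘥"))  # -> "Hello world"
```

Checking the NFKC output against a wordlist, as suggested, would cut down false positives on text that legitimately uses those code points, such as actual math notation.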