I've been working in a non-tech role for the past couple of years hoping things would improve. Reading stuff like this, it doesn't seem to be happening, which makes it difficult to plan long term (though I haven't been actively applying).
I really liked that game; it had a lot of strategy to it and a decent story.
The multiplayer had a well-known bug that made it rough on some maps: the Atreides air drones could fly at the edge of the map without being targeted and take out the harvester carryalls and other air units.
There were also claims, mentioned on the forums much later, of a flame tank bug that let it target any unit, but I don't know if the details were ever revealed. I never figured it out, so I guess it's just theoretical.
In 2023 I had a lot of trouble finding a tech job, and even devs at local meetups told me about layoffs. I got into restaurant work at a place where I had worked several years ago in college, because they remembered me.
It's a mostly physical job. I don't have to work the fryers, which in some ways is kind of nice, and it's super low stress; it just takes a lot of energy and leaves me drained for most of my free time.
In my experience it's very hard to get back in once you've been out of tech for a while. Not because you can't code, but because it becomes harder, if not impossible, to convince someone you can, especially with the current oversupply.
Super interesting. At one point I thought Control Flow Guard + DEP/ASLR were supposed to prevent this stuff; I guess it still can't be prevented completely. Sounds like this took a lot of work to figure out, well done.
Any comment on reporting this to Microsoft, or on the motivation for this research?
So-called "post-exploit" mitigations are practically always just hardening, i.e. making subsequent attacks harder (and rarer). Ideally much harder. But if you want an absolutely, provably (within limits, e.g. the halting problem) secure system, you have to eliminate the bugs that can lead to exploitable situations in the first place. In this case, for example, that would mean no code path exists that could cause a buffer overflow at all. Memory-safe languages help with that.
Obviously this is hard, so post-exploit mitigations will likely continue to make things harder for attackers for quite a while at least.
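For illustration only (not from the research being discussed, just a textbook case): the kind of bug that post-exploit mitigations can only make harder to abuse, whereas a memory-safe language would refuse the out-of-bounds write outright.

    /* Hypothetical C example: a classic stack buffer overflow.
     * DEP/ASLR/CFG make exploiting this harder after the fact;
     * eliminating the bug (e.g. via a bounds-checked language or an
     * explicit length check) removes the exploitable situation itself. */
    #include <string.h>

    void copy_name(const char *input) {
        char buf[16];
        strcpy(buf, input); /* no length check: any input longer than 15
                               characters overwrites adjacent stack memory */
    }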
Capabilities are a better security model, but don't protect you from kernel bugs. Provably correct kernels (such as seL4) do.
Having said that, being a microkernel, seL4 ends up pushing a bunch of potentially buggy code to user space. There are real benefits to that, but if you can exploit the page table server, the system is pretty much yours.
The problem is that lack of money catches up with you eventually if you can't figure out some way to get it, and the longer you're without a job, the harder it is to get one. I think a lot of people struggled because of the economy and the job numbers. Supply and demand.
I had contract work, then couldn't get a tech job a couple of years ago despite a lot of applications. Completely broke. Drove down the road and felt kind of foolish seeing places paying less, but still decent money, for non-tech work. Got a fast food / hospitality job. Low stress, physical work. Can't imagine where I'd be if I hadn't.
I kind of get it though: once you start doing something else, you don't have much time or energy to improve at what you actually want to do (such as tech), which is a recipe for getting trapped, unless you can save money and somehow reclaim time to improve, or the supply/demand balance shifts.
Anyone can fail at anything; all I know for certain is that the worst thing a person can do is nothing.
One solution, if you can, is to take part-time work. Then you can reduce your expenses to that level and take time off for living without having to spend down your savings (or only very slowly). That makes it sustainable.
I remember there was a bug with the hang glider where if you pitched down and back up you gained momentum.
Back in the day, Far Cry 1 had ultra-realistic graphics.
The game was fun to play. The most annoying thing is that if you throw a rock or grenade where the enemies can see it, your stealth meter instantly goes to zero and they start shooting at you.
The game is riddled with small bugs. For example, you can knock the phone off the rock in the first level's bunker, and the cutscene will still show him picking it up from there.
Having the Trigens fight the human enemies throughout the game was fun, and so was the stealth aspect.
It's funny how the game's emphasis seemed to be on Realistic mode, but it's like it was only tested on Medium: the Trigens would die very quickly and easily on Realistic if the guards shot them. If I had to guess, the bullet damage is probably turned up in that mode.
It's an interesting and legitimate question. But just as it takes businesses time to pick up on new technologies and adapt, it might take institutions time as well. I've seen Reddit posts of people getting their writing flagged as AI-written when it wasn't. The question is, how long until they start embracing it as a tool to write with?
Many of these institutions have billions of dollars put away. Harvard, for example, has around $53 billion according to Google. So I don't think they are just going to go away; they will adapt somehow.
It has a complex history, but a lot of it started with Prohibition: when alcohol sales ended, some restaurants started allowing tips, and the practice became more commonplace.
It always seemed to me that the basics of a messenger app would be the easy part to accomplish. It's the marketing that would be hard, seeing as there are already existing solutions such as Slack and Discord.
If you're doing inference on a neural network, each weight has to be read at least once per token. This means you're going to read at least the size of the entire model, per token, during inference.
If your model is 60GB and you're reading it from disk, then your minimum inference time per token is limited by your disk read throughput. MacBook SSDs have ~4GB/s sequential read speed, which means your inference time per token will be at least 15 seconds.
If your model is in RAM, then (according to Apple's advertising) your memory bandwidth is 400GB/s, which is 100x your disk speed, so memory throughput is much less of a bottleneck.
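Just to spell out the arithmetic (the 60GB model, ~4GB/s SSD, and 400GB/s memory figures are the assumed numbers from above, not measurements):

    /* Bandwidth-bound estimate: if every weight is read once per token,
     * time per token >= model size / read bandwidth. */
    #include <stdio.h>

    int main(void) {
        double model_gb = 60.0;   /* assumed model size */
        double ssd_gbps = 4.0;    /* ~MacBook sequential SSD read speed */
        double ram_gbps = 400.0;  /* Apple's advertised memory bandwidth */

        printf("from SSD: >= %.1f s per token\n", model_gb / ssd_gbps);   /* 15.0 */
        printf("from RAM: >= %.2f s per token (~%.1f tok/s ceiling)\n",
               model_gb / ram_gbps, ram_gbps / model_gb);                 /* 0.15, ~6.7 */
        return 0;
    }

So even with everything in unified memory, pure weight streaming caps a 60GB model at roughly 6-7 tokens per second on that advertised bandwidth.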
There will be LLM-specific chips coming to market soon, specialized for the task.
Tesla has already been creating AI chips for the FSD features in their vehicles. Over the next few years, everyone will be racing to be the first to put out LLM-specific chips, with AI-specific hardware devices following.
What exactly is the ideal hardware for running and training large models? Do you basically just need a high-end version of everything?