> It would require half of one distribution center of a major retailer.
That also meets the specifications of a clean room, and is actively maintained as one?
If OpenAI bought 40% of the annual capacity of finished memory, with the goal of using it in their server farms ASAP, that's one thing.
But unfinished wafers that still need to be protected to finish the manufacturing process, that OpenAI itself does not have any capability to do?
That, to me, looks like a preemptive strike against competitors in an attempt to develop a monopolistic position, one that also affects every other industry that requires RAM.
I can't see how this isn't a massive national security issue for any country (pretty much all of them...) that needs RAM-equipped devices, both for new systems and for maintaining existing ones, to manage critical infrastructure, national defence, public and social services, and so on.
Instead of anti-cheat on the client side (even 'kernel-level' anti-cheat still gets beaten), why not look at a wholly server-side statistical analysis of player stats?
Make the anti-cheat follow one of the key tenets of cybersecurity: never trust user input.
Should work reasonably well for first-person shooters.
You're ranked based on the game's inner metrics, calculated on the server, rather than player-visible ones:
- time from a target becoming observable to the crosshair making contact with it (incl. standard deviation and standard error)
- % of time successfully striking specific hit boxes
and so on (depending on the game).
These inner metrics don't affect your public rankings, but they do affect who you're match-made with in games that don't have private server options. Eventually you'll only be match-made with other cheaters, or with players so good they give cheaters a genuine run for their money.
For games that do have private server options it could be configurable to:
- flag the player to admins as 'suspicious' for investigation
- start ignoring hits that would otherwise have been successful when the statistical analysis shows a significant disparity between the observed behaviour and known human-like reaction times (see the sketch below).
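As a rough illustration of what that server-side check could look like, here's a minimal sketch. Everything in it is an assumption: the baseline numbers, the 30-sample minimum, and the thresholds would all need tuning per game from real player data.

```python
import statistics

# Assumed human baseline for "time from target observable to crosshair
# contact", measured entirely server-side. Numbers are illustrative only.
HUMAN_MEAN_MS = 250.0
HUMAN_STDEV_MS = 60.0

def suspicion(reaction_times_ms: list[float]) -> dict:
    """Compare a player's server-measured reaction times to the baseline.

    Reports how many baseline standard deviations faster than a human the
    player's mean is, plus the player's own spread (inhuman consistency is
    itself a signal).
    """
    if len(reaction_times_ms) < 30:            # not enough evidence yet
        return {"z_faster": 0.0, "own_stdev_ms": None}
    mean = statistics.fmean(reaction_times_ms)
    return {
        "z_faster": (HUMAN_MEAN_MS - mean) / HUMAN_STDEV_MS,
        "own_stdev_ms": statistics.stdev(reaction_times_ms),
    }

def action(reaction_times_ms: list[float]) -> str:
    """Map the stats onto the responses above (placeholder thresholds)."""
    s = suspicion(reaction_times_ms)
    if s["own_stdev_ms"] is None:
        return "ok"
    if s["z_faster"] > 3.0 or s["own_stdev_ms"] < 10.0:
        return "flag-to-admins-or-shadow-pool"
    return "ok"
```

The same shape would work for per-hitbox accuracy or any other metric the server can compute itself, so there's nothing on the client for a cheat to tamper with.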
Part of the uptime solution is keeping as much of your app and infrastructure as possible within your control, rather than at the behest of mega-providers, as we've witnessed in the past month with Cloudflare and AWS.
Probably:
- a couple of tower servers, running Linux or FreeBSD, backed up by a UPS and an auto-start generator with 24 hours' worth of diesel (maybe 72 hours, depending on where you are and the local area's propensity for natural disasters);
- Caddy for a reverse proxy, Apache for the web server, PostgreSQL for the database;
- behind a router with sensible security settings that can also load-balance between the two servers (for availability rather than scaling);
- on static WAN IPs,
- with dual redundant (different ISPs/network provider) WAN connections,
- a regular and strictly followed patch and hardware maintenance cycle,
- located in an area resistant to wildfire, civil unrest, and riverine or coastal flooding.
I'd say that'd get you close to five 9s (no more than ~5 minutes of downtime per year), and I'd pretty much guarantee five 9s (maybe even six 9s - no more than ~32 seconds of downtime per year) if the two machines were physically separated by a few hundred kilometres, each with its own supporting infrastructure as above, sans the load balancing (see below), and reachable through two separate network routes.
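Those downtime budgets fall straight out of the availability percentages; a quick back-of-the-envelope check:

```python
# Downtime budget implied by "N nines" of availability.
minutes_per_year = 365.25 * 24 * 60

for nines in (5, 6):
    availability = 1 - 10 ** -nines            # 0.99999 for five 9s, etc.
    downtime_min = minutes_per_year * (1 - availability)
    print(f"{nines} nines: {downtime_min:.2f} min/year"
          f" (~{downtime_min * 60:.0f} seconds)")

# 5 nines: 5.26 min/year (~316 seconds)
# 6 nines: 0.53 min/year (~32 seconds)
```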
Load balancing would become human-driven in this 'physically separate' example (cheaper, less complex): if your-site-1.com fails, simply re-point your browser to your-site-2.com, which routes to the other redundant server on a different network.
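If you wanted to script that 'which site do I point at' check rather than doing it purely by eye, a minimal sketch could look like this; the /healthz path is my assumption, and the domains are just the placeholders from above:

```python
import urllib.request

# Hypothetical health-check endpoints on the two redundant sites.
SITES = [
    "https://your-site-1.com/healthz",
    "https://your-site-2.com/healthz",
]

def first_healthy(sites: list[str], timeout: float = 3.0) -> str | None:
    """Return the first site that answers HTTP 200 within the timeout."""
    for url in sites:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # connection refused, timeout, DNS failure, ...
    return None

if __name__ == "__main__":
    healthy = first_healthy(SITES)
    print(healthy or "both down - reach for the written-down WAN IPs")
```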
The hard part now will be picking network providers that don't share the same pipes/cables or upstream dependencies (e.g. both relying on Cloudflare, or AWS...).
Keep the WAN IPs written down in case DNS fails.
PostgreSQL can do multi-master replication (not natively; you need third-party tooling for it), but I understand it's a pain to set up.
Because JavaScript programmers are cheaper/easier/whatever to hire? So everything becomes web-centric. (I'm hoping this comment is just sarcasm, but I wouldn't be surprised if it turns out not to be.)
> During our attempts to remediate, we have disabled WARP [their VPN service] access in London. Users in London trying to access the Internet via WARP will see a failure to connect.
> Posted 4 minutes ago. Nov 18, 2025 - 13:04 UTC
> We have made changes that have allowed Cloudflare Access [their 'zero-trust network access solution'] and WARP to recover. Error levels for Access and WARP users have returned to pre-incident rates.
> We have re-enabled WARP access in London.
> We are continuing to work towards restoring other services.
> Posted 12 minutes ago. Nov 18, 2025 - 13:13 UTC
Now I'm really suspicious that they were attacked...
Someone running cloudflared accidentally advertising a critical route into their WARP namespace, and somehow disrupting routes for internal Cloudflare services, doesn't seem too far-fetched.
> "In response to the demand for advanced AI tools, we introduced AI capabilities into the Microsoft 365 Personal and Family subscriptions that we offer in Australia,"
The 'demand' here is Microsoft demanding that you use Copilot and pay for it, unless you 'opt out' by threatening to cancel.
"In hindsight" my arse. They knew what they were doing, and are only now 'apologising' after being taken to court over it.
Yes, this makes me so mad. I already pay for a subscription to Claude. Why on earth would I want to pay Microsoft for their shitty wrapper around ChatGPT?
A key part of secure systems is availability...
It really looks like vibe-coding.