
> but decided to ignore because they were rolling out a security fix.

A key part of secure systems is availability...

It really looks like vibe-coding.


> It would require half of one distribution center of a major retailer.

That also meets the specifications of a clean room, and is actively maintained as one?

If OpenAI bought 40% of the annual capacity of finished memory, with the goal of using it in their server farms ASAP, that's one thing.

But unfinished wafers that still need to be protected until the manufacturing process is complete, which OpenAI itself has no capability to do?

That to me looks like a preemptive strike against competitors, which also affects any other industry that requires RAM, in an attempt to develop a monopolistic position.

I can't see how this isn't a massive national security issue for any country that needs devices requiring RAM (pretty much all of them...) for new systems and for maintenance of existing ones, to manage critical infrastructure, national defence, public and social services, and so on.


You do not need clean room specifications for storing wafers.

Instead of client-side anti-cheat, which still gets beaten even at 'kernel level', why not look at wholly server-side statistical analysis of player stats?

Make the anti-cheat follow one of the key tenets of cybersecurity: never trust user input.

Should work reasonably well for first-person shooters.

You're ranked based on the game's internal metrics, calculated on the server, rather than player-visible ones:

- time from observation to target contact with crosshairs (inc. standard deviation, standard error)

- % of time successfully striking specific hit boxes

and so on (depending on the game).

These internal rankings don't affect your public rankings, but they do affect who you're matchmade with in games that don't have private server options. Eventually you'd only be matchmade with other cheaters, or with players so good they give cheaters a genuine run for their money.

For games that do have private server options it could be configurable to:

- flag the player to admins as 'suspicious' for investigation

- start ignoring hits that would otherwise have been successful, for statistical-outlier behaviour (where the analysis shows a significant disparity between observed reaction times/behaviours and known human-like ones).

Even if cheats were still used, their statistical footprint would show up in the server-side metrics anyway.
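As a very rough sketch of the kind of outlier test the server could run (all baseline numbers, thresholds, and names here are made up purely for illustration):

  from statistics import mean

  # Reaction times (seconds) from "enemy becomes visible" to "crosshair on
  # target", measured entirely from server-side game state -- never taken
  # from anything the client reports about itself.
  HUMAN_MEAN = 0.25    # assumed baseline for skilled human players
  HUMAN_STDEV = 0.08   # assumed spread of that baseline

  def suspicion_score(reaction_times):
      """How many standard deviations faster than the human baseline
      this player's average reaction time is."""
      if len(reaction_times) < 30:   # too few samples to judge
          return 0.0
      return (HUMAN_MEAN - mean(reaction_times)) / HUMAN_STDEV

  def classify(reaction_times):
      score = suspicion_score(reaction_times)
      if score > 4.0:
          return "shadow-pool"            # only matchmake with other outliers
      if score > 2.5:
          return "flag-for-admin-review"
      return "clean"

None of it relies on trusting the client, which is the whole point.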


Part of the uptime solution is keeping as much of your app and infrastructure as possible within your control, rather than at the behest of mega-providers, as we've witnessed in the past month with Cloudflare and AWS.

Probably:

- a couple of tower servers, running Linux or FreeBSD, backed up by a UPS and an auto-start generator with 24 hours' worth of diesel (depending on where you are and the local area's propensity for natural disasters, maybe 72 hours),

- Caddy for a reverse proxy, Apache for the web server, PostgreSQL for the database;

- behind a router with sensible security settings that can also load-balance between the two servers (for availability rather than scaling);

- on static WAN IPs,

- with dual redundant (different ISPs/network provider) WAN connections,

- a regular and strictly followed patch and hardware maintenance cycle,

- located in an area resistant to wildfire, civil unrest, and riverine or coastal flooding.

I'd say that'd get you close to five 9s (no more than ~5 minutes downtime per year), though I'd pretty much guarantee five 9s (maybe even six 9s - no more than 32 seconds downtime per year) if the two machines were physically separated from each other by a few hundred kilometres, each with their own supporting infrastructure above, sans the load balancing (see below), through two separate network routes.
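For what it's worth, the arithmetic behind that claim, assuming each site only manages three 9s on its own and the two sites fail independently (the independence is the hard part to actually achieve):

  # Availability of two independent sites, each at three 9s on its own
  single_site = 0.999
  combined = 1 - (1 - single_site) ** 2          # both down at the same time
  minutes_per_year = 365 * 24 * 60

  print(combined)                                # 0.999999 -> six 9s
  print((1 - combined) * minutes_per_year * 60)  # ~31.5 seconds of downtime/year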

Load balancing would become human-driven in this 'physically separate' example (cheaper, less complex): if your-site-1.com fails, simply re-point your browser to your-site-2.com which routes to the other redundant server on a different network.

The hard part now will be picking network providers that don't rely on the same pipes/cables or upstream providers, e.g. both going through Cloudflare or AWS...

Keep the WAN IPs written down in case DNS fails.
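The 'human-driven' failover could even be a tiny script. A sketch with made-up hostnames and documentation-range IPs (the bare IPs being the written-down fallback for when DNS is the thing that broke):

  import urllib.request

  # Hypothetical endpoints; the bare IPs are the written-down fallback for
  # when DNS itself is down (plain HTTP because a cert won't match a bare IP).
  CANDIDATES = [
      "https://your-site-1.com/health",
      "https://your-site-2.com/health",
      "http://203.0.113.10/health",     # site 1 static WAN IP
      "http://198.51.100.20/health",    # site 2 static WAN IP
  ]

  def first_alive(urls, timeout=5):
      """Return the first endpoint answering HTTP 200, else None."""
      for url in urls:
          try:
              with urllib.request.urlopen(url, timeout=timeout) as resp:
                  if resp.status == 200:
                      return url
          except OSError:
              continue   # refused, timed out, DNS failure, TLS error, ...
      return None

  print(first_alive(CANDIDATES) or "everything is down; go start the generator")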

PostgreSQL can do master-master replication (via extensions like BDR or Bucardo rather than natively), but I understand it's a pain to set up.


If they do use Cloudflare... why in the everlasting name of Hell did they connect a railway control and signalling system to the Internet?!!!


Because JavaScript programmers are cheaper/easier/whatever to hire? So everything becomes web-centric. (I'm hoping this comment is sarcastic, but I wouldn't be surprised if it turns out not to be.)


Don't say that, you might hurt the JS devs' feelings.


> During our attempts to remediate, we have disabled WARP [their VPN service] access in London. Users in London trying to access the Internet via WARP will see a failure to connect. Posted 4 minutes ago. Nov 18, 2025 - 13:04 UTC

Is Cloudflare being attacked...?


This line also gave me that vibe


> We have made changes that have allowed Cloudflare Access [their 'zero-trust network access solution'] and WARP to recover. Error levels for Access and WARP users have returned to pre-incident rates.

> We have re-enabled WARP access in London.

> We are continuing to work towards restoring other services.

> Posted 12 minutes ago. Nov 18, 2025 - 13:13 UTC

Now I'm really suspicious that they were attacked...


I will bet it's a routing misconfig.


It always is.


Someone running cloudflared accidentally advertising a critical route into their WARP namespace and somehow disrupting routes for internal Cloudflare services doesn't seem too far-fetched.

We vibe coded a tool to mass disconnect Cloudflare Warp for incident responders: https://github.com/aberoham/unwarp

To go along with the shenanigans around dealing with MITM traffic inspection https://github.com/aberoham/fuwarp


The whole diff in that (pointless?) commit...:

    * Note: Due to design limitations in both libpq and PostgreSQL's
  - * architecture, this implementation does not support the memory-bounded
  - * streaming behavior available with MySQL and Oracle.
  + * architecture, this implementation does not support the superior
  + * memory-bounded streaming behavior we use with MySQL and Oracle.


> I thought SPARK was a paid (not free) license. Am I mistaken?

Similar model to Qt: a permissively licensed open-source version, with a commercial 'Pro' offering.

https://en.wikipedia.org/wiki/SPARK_(programming_language)

https://alire.ada.dev/transition_from_gnat_community.html


> "In response to the demand for advanced AI tools, we introduced AI capabilities into the Microsoft 365 Personal and Family subscriptions that we offer in Australia,"

The 'demand' here is Microsoft demanding that you use Copilot and pay for it, unless you 'opt out' by threatening to cancel.

"In hindsight" my arse. They knew what they were doing, and are only now 'apologising' after being taken to court over it.


Yes, this makes me so mad. I already pay for a subscription to Claude. Why on earth would I want to pay Microsoft for their shitty wrapper around ChatGPT?


Awesome that the ACCC, Australia's consumer watchdog, is taking this up.

It's really shitty that companies believe they can pull these stunts and get away with it.


The ACCC is actually quite switched on to any misleading conduct.

They have gone after Airbnb / Airlines / Hotel Booking / Concert Tickets - for misleading conduct.

Especially businesses that use drip pricing (adding compulsory hidden fees later) or misleading prices, as in the Microsoft case.

Anything sneaky - they're normally right on to it.


I only recently learned this, but the reason Steam offers 2-hour, no-questions-asked full refunds is partly a lawsuit by the ACCC.

