Hacker News | drysart's comments

CSS isn't the security issue here. IFrames are the security issue; and have been since the first day they were added to a browser.

"Fail open" state would have been improper here, as the system being impacted was a security-critical system: firewall rules.

It is absolutely the wrong approach to "fail open" when you can't run security-critical operations.
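As a toy sketch of the distinction (hypothetical code, not Cloudflare's actual system): failing closed means treating a crashed rules engine as a deny, so a fault in the security-critical path can never silently wave traffic through.

```python
# Toy fail-closed firewall decision (hypothetical names, for illustration only).
def evaluate_request(request, rules_engine):
    """Return True to allow the request through the firewall."""
    try:
        return rules_engine(request)
    except Exception:
        # Fail closed: if the security-critical rules engine can't run,
        # deny the request rather than silently allowing it.
        return False

def broken_engine(request):
    raise RuntimeError("rules engine crashed")

# A crashed engine results in a deny, not an allow.
assert evaluate_request({"path": "/"}, broken_engine) is False
```

Failing open here would be the one-line change `return True` in the except branch, which is exactly the wrong default for a firewall.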


Cloudflare is supposed to protect me from occasional ddos, not take my business offline entirely.

This can be architected in such a way that if one rules engine crashes, other systems are not impacted and other rules, cached rules, heuristics, global policies, etc. continue to function and provide shielding.

You can't ask for Cloudflare to turn on a dime and implement this in this manner. Their infra is probably very sensibly architected by great engineers. But there are always holes, especially when moving fast, migrating systems, etc. And there's probably room for more resiliency.
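The layered shielding described above can be sketched in a few lines (all names hypothetical, nothing to do with Cloudflare's real architecture): try the live rules engine, fall back to cached last-known-good rules, and only then apply a conservative global policy.

```python
# Sketch of layered fallback: live rules engine -> cached rules -> global policy.
# Hypothetical names; illustrates the resiliency pattern, not any real system.
def decide(request, live_engine, cached_rules, default_allow=False):
    try:
        return live_engine(request)            # freshest rules, preferred
    except Exception:
        pass                                   # engine crashed; keep shielding
    verdict = cached_rules.get(request["path"])
    if verdict is not None:
        return verdict                         # last-known-good cached rule
    return default_allow                       # conservative global policy

def crashed_engine(request):
    raise RuntimeError("rules engine is down")

# Even with the live engine down, cached rules still apply.
assert decide({"path": "/login"}, crashed_engine, {"/login": True}) is True
assert decide({"path": "/new"}, crashed_engine, {}) is False
```

The point is that a crash in one layer degrades protection gracefully instead of taking everything offline.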


Then even in the worst case scenario, they were addressing this issue two days after it was publicly disclosed. So this wasn't a "rush to fix the zero day ASAP" scenario, which makes it harder to justify ignoring errors that started occurring in a small-scale rollout.

> The only thing I'm not sure about is why they decided to reopen it.

It's almost certainly due to the PDF Association adding JPEG XL as a supported image format to the ISO standard for PDFs, considering Google's 180 on JPEG XL support came just a few days after the PDF Association's announcement.


That would make sense, since they would then need support for JXL for the embedded PDF viewer anyway. Unless they want it to choke on valid PDFs that include JXL images.

I see! Thanks for pointing out this very interesting correlation. So we got something better only because someone else equally influential forced their hand. Otherwise the users be damned, for all they care, it seems.

Notarization doesn't involve any sort of editorial control. It's just a virus scanner that's run up front and then stapling an attestation to your application that it passed the scan. It does not involve looking at the content of your app and making any value judgements about it; it's purely an automated static analysis system checking your application for known malicious code.


This is just factually incorrect. See: https://9to5mac.com/2024/06/09/apple-blocks-pc-emulator-utm-...

UTM wasn't denied notarization because some virus scanner found that it was a virus, but because it violated App Store guidelines. That's editorial control.


You're talking about notarization on macOS. Notarization on iOS is vastly different. On iOS, notarization is more or less App Store review but with fewer rules.


Honestly, iOS notarization really muddied the waters. IMO, because Apple decided to name them the same and thus presumably considers them the same, we should be just as critical of and worried about notarization on the Mac as we are of notarization on iOS.


Azure Trusted Signing is a crapshoot. If you can get it running, it's easy and fast and great. But if you run into any problems at all during the setup process (and you very well might, since their onboarding process is held together with duct tape and twine), you're basically left for dead; unless you're on an enterprise support plan, you're not going to get any help from them at all.


The length of time notarization takes depends primarily upon how large and complicated your app is, and how different it is from previous versions of the same application you've previously notarized. The system seems to recognize large blocks of code that it's already analyzed and cleared, and doesn't need to re-analyze them. How much your binary churns between builds can greatly influence how fast your subsequent notarizations are.

A brand new developer account submitting a brand new application for notarization for the first time can expect the process to take a few days; and it's widely believed that first-time notarizations require human confirmation, because they definitely take longer if submitted on a weekend or on a holiday. This is true even for extremely small, trivial applications. (Though I can tell you from personal experience that whatever human confirmation they're doing isn't very deep, because I've had first-time notarizations on brand new developer accounts get approved even when notarizing a broken binary that doesn't actually launch.)

And of course sometimes their servers just go to shit and notarizations across the board all take significantly longer than normal, and it's not your fault at all. Apple's developer tooling support is kinda garbage.


> I've had first time notarizations on brand new developer accounts get approved even when notarizing a broken binary that doesn't actually launch

https://developer.apple.com/documentation/security/notarizin... (emphasis added):

“Notarize your macOS software to give users more confidence that the Developer ID-signed software you distribute has been checked by Apple for malicious components. *Notarization of macOS software is not App Review.* The Apple notary service is an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly.”

⇒ It seems notarization is static analysis, so they don’t need to launch the process.

Also, in some sense a program that doesn’t launch should pass notarization because, even though it may contain malware, that’s harmless because it won’t run.
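A toy illustration of why a broken binary can still pass: a static scan inspects the bytes of a file without ever executing it, so whether the program launches is irrelevant to the verdict (the signature list below is made up; real scanners are vastly more sophisticated).

```python
# Toy static malware scan: search for known-bad byte signatures without
# running the file. The signature is invented purely for illustration.
KNOWN_BAD_SIGNATURES = [b"\xde\xad\xbe\xef"]


def static_scan(binary: bytes) -> bool:
    """Return True if the binary passes (no known-bad signature found)."""
    return not any(sig in binary for sig in KNOWN_BAD_SIGNATURES)


# A binary that would crash on launch still scans clean, because the
# scanner never executes it.
assert static_scan(b"\x00broken-but-clean\x00") is True
assert static_scan(b"prefix\xde\xad\xbe\xefsuffix") is False
```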


Notarization doesn't blanket block all access to private APIs; but the notarization process may look for and block certain known accesses in certain cases. This is because notarization is not intended to be an Apple policy enforcement mechanism. It's intended to block malicious software.

So in other words, using private APIs in and of itself isn't an issue. Neither is it an issue if your application is one that serves up adult content, or is an alternate App Store, or anything else that Apple might reject from its own App Store for policy reasons. It's basically doing what you might expect a virus scanner to do.


Yeah, don't disagree with any of that, but I'm looking for explicit evidence that that is true (right now it sounds like it's just an assumption)? E.g., either examples of apps failing notarization due to API calls, or Apple explicitly saying that they analyze API calls. Without that it sounds like we're just guessing?


There's a significant difference between a site being useless because it just doesn't have the breadth yet to cover the topic you're looking for (as in early Wikipedia); versus a site being useless by not actually having facts about the topic you're looking for, yet spouting out authoritative-looking nonsense anyway.


> versus a site being useless by not actually having facts about the topic you're looking for, yet spouting out authoritative-looking nonsense anyway.

You just described Wikipedia early on, before it had much content or rules around weasel words, original research, etc.


Wikipedia early on wasn't competing against Wikipedia, it was competing against hardcover encyclopedias. There was clear value-add from being able to draw from a wider range of human expertise and update on a quicker cadence.

In a world where Wikipedia already exists, there's no similar value-add to Grokipedia. Not only is it useless today, there is nothing about the fundamental design of the site that would lead me to believe that it has any path to being more authoritative or accurate than Wikipedia in the future - ever.


PROD isn't capitalized because it's an initialism. It's capitalized because the machine is screaming at you that this is production, be careful. ;)

