Hacker News | mmsc's comments

Instagram blocks me from sending Facebook.com in DMs to people. No idea why and support doesn't help.

Orwell's Down and Out in Paris and London documented some of the swear words of his time [0].

It's interesting reading them as a native speaker; there are so few whose meaning I could even begin to guess.

[0]: https://www.telelib.com/authors/O/OrwellGeorge/prose/Downand...


No. HTTPS certificates are being abused for non-HTTPS purposes. CAs want to sell certificates for everything under the sun, and want to force those in the ecosystem to support their business, even though HTTPS certificates are not designed to be used for other things (mail servers, for example).

If CAs don't want hostility from browser companies for using HTTPS certificates for non-HTTP/browser applications, they should build their own thing.


They weren't "HTTPS certificates" originally, just certificates. They may be "HTTPS certificates" today if you listen to some people. However there was never a line drawn where one day they weren't "HTTPS certificates" and the next day they were. The ecosystem was just gradually pushed in that direction because of the dominance of the browser vendors and the popularity of the web.

I put "HTTPS certificates" in quotes in this comment because it is not a technical term defined anywhere, just a concept that "these certificates should only be used for HTTPS". The core specifications talk about "TLS servers" and "TLS clients".


The CAB is only concerned with the WebPKI. This means HTTPS.

There's loads of non-web, non-HTTPS TLS use cases; it's just that the CAB doesn't care about those (why should it?).


This is technically true, and nobody contested the CABF's focus on HTTPS TLS.

However, eventually, the CABF started imposing restrictions on the public CA operators regarding the issuance of non-HTTPS certificates. Nominally, the CAs are still offering "TLS certificates", but due to the pressure from the CABF, the allowed certificates are getting more and more limited, with the removal of SRVname a few years ago, and the removal of clientAuth that this thread is about.

I can understand the CABF position of "just make your own PKI" to a degree, but in practice that would require a LetsEncrypt level of effort for something that is already perfectly provided by LetsEncrypt, if it weren't for the CABF lobbying.
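
The clientAuth restriction under discussion shows up concretely in a certificate's Extended Key Usage (EKU) extension. Here is a minimal sketch (mine, not from the thread) of how a relying party might model the stricter policy; the OIDs are the real id-kp-serverAuth/id-kp-clientAuth values, the function name is made up:

```python
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"  # id-kp-serverAuth (RFC 5280)
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"  # id-kp-clientAuth (RFC 5280)

def acceptable_for_webpki(eku_oids):
    """Model of the stricter proposed policy:
    serverAuth required, clientAuth forbidden."""
    return SERVER_AUTH in eku_oids and CLIENT_AUTH not in eku_oids

print(acceptable_for_webpki([SERVER_AUTH]))               # True
print(acceptable_for_webpki([SERVER_AUTH, CLIENT_AUTH]))  # False
```

Under the current BRs clientAuth is a MAY, so the second certificate shape is still issuable today; the change this thread is about would make it a MUST NOT.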


> CABF started imposing restrictions on the public CA operators regarding the issuance of non-HTTPS certificates.

The restriction is on signing non web certificates with the same root/intermediate as is part of the WebPKI.

There's no rule (that I'm aware of?) that says the CAs can't have different signing roots for whatever use case, roots that are then trusted by the people who need that use case.


> The CAB is only concerned with the WebPKI. This means HTTPS.

[citation needed]

The title of their current (2.2.2) standard is "Baseline Requirements for the Issuance and Management of Publicly‐Trusted TLS Server Certificates":

* https://cabforum.org/working-groups/server/baseline-requirem...

§1.3, "PKI Participants", states:

> The CA/Browser Forum is a voluntary organization of Certification Authorities and suppliers of Internet browser and other relying‐party software applications.

IMHO "other relying-party software applications" can include XMPP servers (also perhaps SMTP, IMAP, FTPS, NNTP, etc).


> [citation needed]

My citation is the membership of the CAB.

> IMHO "other relying-party software applications" can include XMPP servers (also perhaps SMTP, IMAP, FTPS, NNTP, etc).

This may be your opinion, but what's the representation of XMPP etc. software maintainers at the CAB?


>> [citation needed]

> My citation is the membership of the CAB.

It is a single member of the CAB that is insisting on changing the MAY to a MUST NOT for clientAuth. Why does that single member, Google-Chrome, get to dictate this?

Has Mozilla insisted on changing the meaning of §1.3 to basically remove "other relying‐party software applications"? Apple-Safari? Or any other of the "Certificate Consumers":

* https://cabforum.org/working-groups/server/#certificate-cons...

The membership of the CAB collectively agrees to the requirements/restrictions they place on themselves, and those requirements (a) cover both browser and non-browser use cases, and (b) explicitly allow clientAuth usage as a MAY; see §7.1.2.10.6, §7.1.2.7.10:

* https://cabforum.org/working-groups/server/baseline-requirem...


> CAs want to sell certificates for everything under the sun

A serious problem with traditional CAs, which was partly solved by Let's Encrypt just giving them away. Everyone gradually realized that the "tying to real identity" function was both very expensive and of little value compared to what people actually want, which is "encryption, with reasonable certainty that it's not suddenly MITM'd".


No. These are just certificates that happen to be used predominantly in an HTTPS context, and Google is trying to tie them exclusively to that context.

Where did you get that idea? These certs have always been intended for any TLS connection of any application. They are also in no way specific or "designed for" HTTPS. Neither the industry body formed from the CAs and software vendors, nor the big CAs themselves are against non-HTTPS use.

From https://cabforum.org/

> Welcome to the CA/Browser Forum
>
> The Certification Authority Browser Forum (CA/Browser Forum) is a voluntary gathering of Certificate Issuers and suppliers of Internet browser software and other applications that use certificates (Certificate Consumers).

From https://letsencrypt.org/docs/faq/

> Does Let’s Encrypt issue certificates for anything other than SSL/TLS for websites?
>
> Let’s Encrypt certificates are standard Domain Validation certificates, so you can use them for any server that uses a domain name, like web servers, mail servers, FTP servers, and many more.


You’re like, so wrong.

Are we really at an age where people don’t remember that SSL was intended for many protocols, including MAIL?!

Do you think email works on web technology because you use a web-client to access your mailbox?

Jesus christ, formal education needs to come quickly to our industry.


PKI certificates weren't even intended for SSL; they predate it.

X.509 was published on November 25, 1988; version 3 added support for "the web" as it was known at the time. One obvious use was for X.400 e-mail systems in the 1980s. Novell NetWare adopted X.509.

It was originally intended for use with X.511, the "Directory Access Protocol", which LDAP was based on. You can still find X.500 heritage in Microsoft Exchange and Active Directory, although it's diminishing over time; e.g. Entra ID only has some affordances for backward compatibility.


> Jesus christ, formal education needs to come quickly to our industry.

It just went away, upset. It might never come back.


Every single Ivanti product (including their SSL-VPN) should be considered a critical threat. The fact that this company is allowed to continue to sell their malware dressed-up as "security solutions" is a disaster. How they haven't been sued into bankruptcy is something I'll never understand.

The purpose of cybersecurity products and companies is not to sell security. It's to sell the illusion of security to (often incompetent) execs - which is perfectly fine because the market doesn't actually punish security breaches so an illusion is all that's needed. It is an insanely lucrative industry selling luxury-grade snake oil.

Actual cybersecurity isn't something you can just buy off the shelf; it requires skill and getting every single person in the org to give a shit about it, which is already hard to achieve, and even more so when you've spent years paying them as little as you can get away with.


Actually, there is a significant push toward more effective products coming from the reinsurance companies that underwrite cyber risks. Most of them come with a checklist of things you need to have before they'll sign you at any reasonable price. The more government regulation we get imposing fines for breaches etc., the more this trend will accelerate.

The thing is that real security isn't something that a checklist can guarantee. You have to build it into the product architecture and mindset of every engineer that works on the project. At every single stage, you have to be thinking "How do I minimize this attack surface? What inputs might come in that I don't expect? What are the ways that this code might be exploited that I haven't thought about? What privileges does it have that it doesn't need?"

I can almost guarantee you that your ordinary feature developer working on a deadline is not thinking about that. They're thinking about how they can ship on time with the features that the salesguy has promised the client. Inverting that - and thinking about what "features" you're shipping that you haven't promised the client - costs a lot of money that isn't necessary for making the sale.

So when the reinsurance company mandates a checklist, they get a checklist, with all the boxes dutifully checked off. Any suitably diligent attacker will still be able to get in, but now there's a very strong incentive to not report data breaches and have your insurance premiums go up or government regulation come down. The ecosystem settles into an equilibrium of parasites (hackers, who have silently pwned a wide variety of computer systems and can use that to setup systems for their advantage) and blowhards (executives who claim their software has security guarantees that it doesn't really).


> but now there's a very strong incentive to not report data breaches and have your insurance premiums go up or government regulation come down

I would argue the opposite is true. Insurance doesn’t pay out if you don’t self-report in time. Big data breaches usually get discovered when the hacker tries to peddle the data on a darknet marketplace, so not reporting is gambling that this won’t happen.


Curious how the compromised company can report if the compromise hasn't even been detected.

There need to be much more powerful automated tools. And they need to meet critical systems where they are.

Not very long ago, actual security existed basically nowhere (except air-gapping, most of the time ;)). And today it still mostly doesn't, because we can't properly isolate software and system resources (and we're very far from routinely proving actual security). Mobile is much better by default, but limited in other ways.

Heck, I could be infected with something nasty and never know about it: the surface to surveil is far too large and constantly changing. Gave up configuring SELinux years ago because it was too time-consuming.

I'll admit that much has changed since then and I want to give it a go again, maybe with a simpler solution to start with (e.g. never granting full filesystem and network access to anything).

We must gain sufficiently powerful (and comfortable...) tools for this. The script in question should never have had the kind of access it did.


> The thing is that real security isn't something that a checklist can guarantee.

I've taken this even further. You cannot do security with a checklist. Trying to do so will inevitably lead to bad outcomes.

Couple of years back I finally figured out how to dress this in a suitably snarky soundbite: doing security with a spreadsheet is like trying to estimate the health of a leper colony by their number of remaining limbs.


You are asserting that security has to be hand-crafted. That is a very strong claim, if you think about it.

Is it not possible to have secure software components that only work when assembled in secure ways? Why not?

Conversely, what security claims about a component can one rely upon, without verifying it oneself?

How would a non-professional verify claims of security professionals, who have a strong interest in people depending upon their work and not challenging its utility?


Not the person you are responding to, but: I would agree that at the stage of full maturity of cybersecurity tooling and corporate deployment, configuration would be canonical and painless, and robust and independent verification of security would be possible by less-than-expert auditors. At such a stage of maturity, checklist-style approaches make perfect sense.

I do not think we're at that stage of maturity. I think it would be hubris to imitate the practices of that stage of maturity, enshrining those practices in the eyes of insurance underwriters.


Corporate security is beyond merely making sure software itself is secure.

Phishing for example requires no security vulnerabilities, and is one of the primary initial attack vectors into a company.

You need proper training and the right incentives for people to actually care and think before they act.


You’re making many assumptions which fit your worldview.

I can assure you that insurers don’t work like that.

If underwriting were as sloppy as you think it is, insurance as a business model wouldn’t work.


Err, cybersecurity insurance as a business model has not worked. I have seen analyst reports showing multiple large claims that are each individually larger than all premiums ever collected industry-wide. Those same reports indicated that all the large cybersecurity insurance vendors had basically stopped issuing policies with significant coverage, capping out in the few-million-dollar range. Cybersecurity insurance is picking up pennies in front of a steamroller: you wonder why no one else is picking up this free money off the ground, until you get crushed.

Note, that is not to say that cybersecurity insurance is fundamentally impossible, just that the current cost structure and risk-mitigation structure is untenable and should not be pointed to as evidence of function.


The financial sector is famously sloppy and it’s still doing just fine.

Holy hell, those checklists are the bane of my existence. For example, demanding 2FA for email, which is impossible if you self-host, unless you force everyone to use Roundcube, but then you have to answer to the CEO why he can’t get email on his iPhone in the Mail app.

Or just loads of other stuff that really only applies to Fortune-500-sized companies. My small startups certainly don’t have a network engineer on staff who has created a network topology graph and various policies pertaining to it, etc. The list goes on; I could name hundreds of absurd requirements these insurance companies want that don’t actually add any level of security to the organization and absolutely do not apply to small-scale shops.


And... this is why the hyperscale cloud is such a compelling choice, even though it costs 10x what running your own servers would cost.

Adding the security feature(s) you need is just a +$100/m checkbox, and they generally have sane defaults or templates that will position you better than some 3rd party vendor with confusing documentation and infrequent updates that require downtime windows to apply.


Why is 2FA impossible if you self host?

IMAP is ancient and on its own does not support 2FA. You can do it with webmail clients, but you can’t do it with plain ol’ IMAP. I have seen some attempts at it where the password is concatenated with the TOTP, but mail clients' frequent polling means users would be constantly hammered with requests to reauthenticate. There is an RFC for OAuth 2.0 bearer-token support and there are even some servers which support it (e.g. Stalwart, IIRC); however, there are practically zero clients which support it (AFAIK). You especially can’t use any of the top-10 email clients that most people use; there may be some small, obscure mail client that supports it, but even Thunderbird lacks support.
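
The password-plus-TOTP concatenation mentioned above can be sketched server-side in a few lines. This is a hypothetical illustration (the function names, demo secrets, and the six-digit split are my assumptions), implementing RFC 6238 TOTP with only the standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP (HMAC-SHA1) for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def check_login(submitted, real_password, secret_b32, digits=6):
    """Split '<password><6-digit code>' and verify both halves."""
    password, code = submitted[:-digits], submitted[-digits:]
    return (hmac.compare_digest(password, real_password)
            and hmac.compare_digest(code, totp(secret_b32)))

SECRET = "JBSWY3DPEHPK3PXP"  # made-up demo secret
print(check_login("hunter2" + totp(SECRET), "hunter2", SECRET))  # True
```

The reauthentication pain described above follows directly from this design: every IMAP poll resubmits the stored "password", whose trailing code goes stale after one 30-second step.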

I'm mostly with you (see my other comment) but MFA on email really is table stakes and your CEO will be the first to be phished without it.

I like to implement independent mail systems. No SSO BS. IT enters the password into the mail client while setting up the laptop and phone. The boss can't be phished if he doesn't know his password (or if the password has no use on the internet).

I also like to put everything behind a VPN (again no SSO). But the bigger the company gets, sooner or later this will come to an end. Because it's not "best practice" to not be phishable. Apparently what is needed are layers and layers of BS "security" products that can be tricked by a kid that has heard of JS. https://browser.security


Those checklists are frequently answered like this:

"Hey it says we need to do mobile management and can't just let people manage their own phones. Looks like we'll buy Avanti mobile manager". Same conversation I've seen play out with generally secure routers being replaced with Fortigates that have major vulnerabilities every week because the checklist says you must be doing SSL interception.


I think, to add to the comment, the whole raison d'être of zero-days is that an (exploitable) bug has been found that the producer of the software is not aware of and has not produced a patch for.

It's fine to say "look, this is bad, don't do it" and "a patch was issued for this, you are responsible", but when some set of circumstances arises that has not been thought about before and causes a problem, there's nothing that could have been done to stop it.

Note that the entire QA industry is explicitly geared toward looking at software being produced in ways nobody else has thought to, in order to find out whether that software still behaves "correctly", and <some colour of hat> hackers are an extension of that: people looking at software in ways that developers and QA did not think of, etc.


Defense in depth and multiple layers of security should ideally protect against zero-days; see the Swiss cheese model of accidents: most aviation accidents are caused not by a single factor but by an improbable combination of factors.
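
A toy calculation (my numbers, purely illustrative) of why layering helps, assuming the layers fail independently:

```python
# If each independent defensive layer fails to stop an attack with
# probability p, then n layers all fail together with probability p**n.
def breach_probability(p_per_layer: float, layers: int) -> float:
    return p_per_layer ** layers

for n in (1, 2, 3):
    print(f"{n} layer(s): {breach_probability(0.1, n):.3f}")
# 1 layer(s): 0.100
# 2 layer(s): 0.010
# 3 layer(s): 0.001
```

The independence assumption is the catch: layers bought off the same checklist tend to fail together, which is exactly the improbable-combination point above.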

This is why I also think “zero trust” and internet-accessible SaaS have done so much damage to the industry. Before, if your version control server had a vuln, attackers still needed to get on your VPN to even be able to scan for it. Now, your version control server is on the internet and/or is a SaaS, and all it takes is an exploit or a set of phished credentials for anyone anywhere in the world to get in.


> Defense in depth and multiple layers of security should ideally protect against zero-days

Absolutely agree, and that's why instant security in a can (just add water!) cannot work (as you have been saying)


It's also selling box checks for various certifications.

So true. Can't wait for NIS2 to be implemented in my location (EU); the new directive allows authorities to hold board members and CEOs personally responsible for cybersec fails (although only as a last resort, after trying other means).

If CrowdStrike is any indicator, expect Ivanti stock to go up now. Seems to be the MO for security companies: fuck up, get paid.

There is no bad publicity? I take it few had heard of them before, so this is free marketing putting the name in public. Or there is some broken LLM-based sentiment-analysis bot that automatically buys companies in the news...

Well, next week there will be a similar vulnerability in Fortinet and everyone will momentarily forget about Ivanti again :-)

Yes. These companies should be shut down in the name of national security, seriously.

> How they haven't been sued into bankruptcy is something I'll never understand.

Isn't most off-the-shelf software effectively always supplied without any kind of warranty? What grounds would the lawsuit have?


Suing for negligence and friends is what happens to car companies when it's found out they've built something highly unsafe or dangerously broken. I don't see the difference.

In most cases, you can't evade liability for negligence that results in personal injury. You can usually disclaim away liability for other types of damage caused by negligence.

That sounds scarily like you're describing FaultyGate. Is there any company in this space that doesn't sell crap products?

If the political messages said "gas the Jews", "exterminate the Ukrainians and give Ukraine to Russia", and "Taiwan has and always will be a province of china", you probably wouldn't use notepad++.


> Taiwan has and always will be a province of china

You know that's the official position of 99% of countries in the world, including all superpowers and every NATO member?


Only officially, because it's a requirement for retaining trade relationships with China, and China makes everything.

Everyone, including the 99% of the world's politicians who don't have their heads up their asses (even the ones who wrote the official positions that Taiwan is not a country), knows Taiwan is a country.


No, it's not, and if you do believe that, you are taking an overly reductionist viewpoint.

99% of countries, as they say, "acknowledge China's viewpoint".


~120 countries fully endorse the One China policy; ~60 merely acknowledge it; ~10 recognize the ROC.



Yes, that comports with my numbers.

>A majority of countries (119 or 62 per cent of UN member states) have endorsed Beijing’s one-China principle, which entails that Taiwan is an inalienable part of the People’s Republic of China.

I was being generous bucketing the 20 mixed signallers with the 40 status-quoists. 120 agree TW is an inalienable part of China, as in TW can never be independent from the one-China construct (PRC's position). 20 agree it's part of China but not necessarily inalienable, i.e. TW/ROC should have a pathway to independence, but until they formalize it, it's still part of China. AKA 75% is in the recognize tier.


>If the political messages said "gas the Jews", "exterminate the Ukrainians and give Ukraine to Russia", and "Taiwan has and always will be a province of china", you probably wouldn't use notepad++.

As one should, I avoid stuff that has a very loud fascist author/owner. So we should be happy for these people to show what they believe in; this way we can decide not to help fascists (and others can decide to support them, and not to help one of the other sides).


mocp is all you need


fully agree!


Nothing? Don't forget when their security team sent pigs' heads to people and terrorized them :)


I think I missed this…



Wow! That was a read that kept on escalating.


The article was clearly written by an LLM. It would make no sense to use https for a challenge like that, indeed.


(2024).

There are other vulnerabilities in that library too. I reported some (with some PRs): https://github.com/indutny/elliptic/pull/338, https://github.com/indutny/elliptic/pull/337, https://github.com/indutny/elliptic/issues/339. But I assume they'll never get fixed.

The library is dead and should be marked as vulnerable on npmjs tbh.


Metropolis becoming public domain in 2026 couldn't be more perfect, since the film is set in 2026.

It is eerily similar to our times, too, unfortunately.

