
He should have sat on the password. He should have watched for PRs, pushed his updates immediately after they received approval, and then merged them. Instead, he panicked and kicked out all the maintainers, who noticed the intrusion only 10 minutes after he gained access. And all he did was add `rm -rf /*` to build scripts, and the N word to the readme.

The malicious commits:

https://github.com/gentoo/gentoo/commit/e6db0eb4

https://github.com/gentoo/gentoo/commit/afcdc03b

https://github.com/gentoo/gentoo/commit/49464b73

https://github.com/gentoo/gentoo/commit/fdd8da2e

https://github.com/gentoo/gentoo/commit/c46d8bbf

https://github.com/gentoo/gentoo/commit/50e3544d



He was likely not someone trying to do serious damage so much as someone trying to have fun.


Putting "rm -rf /*" in every ebuild seems like a pretty clear indication of malicious intent.

Can't really picture anyone doing that as a "just trying to have fun" thing.


Yes, but malicious intent for the purpose of having fun watching the reaction. Not malicious intent for the purpose of personal gain, long term access, or government intelligence.


So chaotic neutral? If going by DnD.


More chaotic evil. Personally I'm pretty close to chaotic neutral - varies over time, sometimes true neutral, sometimes chaotic good - and just outright attempting destruction of unknown third parties seems pretty far towards the "evil" side of things.


I would say destroying random things for laughs is chaotic neutral. You do random chaotic things not for pleasure or pain. He isn't doing it for a specific reason but because he can. That's neutral.

A chaotic evil would be destroying out of greed or hate.


Hmmm, dunno. The only evidence being reported is that they locked out the project admins then attempted to destroy everything they could.

To me, that's not a "neutral" thing. It seems like it was only luck that the rm-ing didn't work, else there would be a bunch more unhappiness.

People do run Gentoo on production systems. Though they obviously shouldn't pull straight from upstream without some real testing before deployment, in the real world it does get done.


Hmmm, thinking about this bit more:

> He isn't doing it for a specific reason but because he can. That's neutral.

If the action were just something harmless (say, echo "Couldn't have rm -rf'd you there!"), then sure, it could be seen as neutral. But it was clearly an attempt to destroy other people's stuff or work. That really doesn't seem very neutral to me. :)


Well, it's arguably still personal gain.

It's just of the transient "I had fun from it" kind, instead of something with more tangible rewards (as you mention).

If it was just someone having a lark, there are plenty of ways to do that other than literally attempting to destroy everything they can access. ;)


It's the sort of "fun" that can land the malicious actor in prison.


Almost anything can land you in prison for years, nowadays. But that doesn't mean there's no difference in impact. Gentoo can count themselves as extraordinarily lucky precisely because there is a difference between getting hit by a troll and by a white-collar criminal.


Having "fun" by ruining quite a few people's day.

Malicious is malicious.


Looks like a student joke. I just don't know how many files have been deleted in jokes like that during someone's education. :)


forgot --no-preserve-root for it to really take effect properly.


I believe rm -rf /* will work without --no-preserve-root


I believe GNU rm requires the --no-preserve-root flag these days, to prevent that command from happening by accident.


The shell expands `/*`, not rm. It expands to /bin /etc /lib and such, which is not covered by the sanity check.
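A harmless way to see this for yourself, using echo as a stand-in for rm so nothing is actually deleted:

```shell
# The shell expands the glob before the command ever runs,
# so rm would never see the literal argument "/".
args=$(echo /*)
printf 'rm -rf would receive: %s\n' "$args"

# GNU rm's --preserve-root check only matches a literal "/" argument,
# so it never fires for "rm -rf /*".
```
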


Forcing every commit to go through a PR merged by another dev would have solved it.


While I agree this is a good practice, it wouldn't have helped much. He had organization-level admin access and could've easily added a second dev account to accept them by himself.


This part of the discussion started from thinking about an attacker whose goal was for the malicious code to go undetected.


I dunno about "solved", but helped sure. I bet you could manage to get a commit in soon enough before someone else merged that they wouldn't notice the extra commit. Or even a history rewrite adding your code to the last real commit.


Do you know of any companies that do that? That seems like a big hassle.


This is standard practice where I work due to SOX compliance. (No pushing directly to master, all PRs need at least one other person's approval). In practice it's not an issue, since PRs are good practice anyway.


Yes but you can push to PRs after approval and then merge them.


You could revoke push permissions on that branch after the PR is requested though - there are probably tools that do that already.


For mine every feat/fix/refactor is a new branch which then requires 2 approvals to get merged, among other things (style guide enforced as linting, minimum test coverage threshold, etc ).

Tbh it's cool, coming from a previous job at a startup where version control meant just zipping the project from time to time.


Same here. All code goes in a Pull Request, and requires at least 1 approval.

Since Pull Request review is a high priority activity there's no "bottleneck", and you double the bus factor for free + prevent bad things from happening.


Yeah but you can still push commits to a PR that has received approval, and the approval is not revoked.
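For what it's worth, GitHub's branch protection has a setting for exactly this loophole: "dismiss stale pull request approvals when new commits are pushed". A sketch of enabling it via the REST API (OWNER/REPO and the branch name are placeholders):

```shell
# Sketch, assuming the gh CLI is authenticated with admin rights on the repo:
# protect the branch and dismiss approvals whenever new commits are pushed.
gh api --method PUT repos/OWNER/REPO/branches/master/protection \
  --input - <<'EOF'
{
  "required_status_checks": null,
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": true,
    "required_approving_review_count": 2
  },
  "restrictions": null
}
EOF
```

Of course, as noted elsewhere in the thread, an attacker with org-level admin access could simply turn this protection back off, so it guards against the quiet push-after-approval trick, not a full account takeover.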


This is what happens where I work as well.


I don't know of company-wide policies like that (though I'm sure that they exist -- and I do know of individual teams that have such policies), but I do know of many projects which have such policies (for instance, all of the Open Container Initiative projects require two approvals from maintainers other than the author).


The company I am working for does that at least for the "junior" members on their core project. There is a bot that checks the tests pass and a senior dev has to approve the review.


Google.


I don't think it's appropriate to give advice to bad actors. Certainly everyone should be aware that silent attacks can and do occur. However it seems like a bad idea to post on a public forum ideas for how to better inflict damage.


Security through obscurity never works.

We should all be talking about the worst things that can be done, so we can make sure we are protected from them.


Good security should not depend on obscurity, but it does not mean that security through obscurity never works. It's still better than complete transparency.


> It's still better than complete transparency.

I consider it worse, since it's too easy for people to become content with it.


> I consider it worse, since it's too easy for people to become content with it.

It's not just that.

For a given vulnerability, there is an amount of time before the good guys discover it and fix it, and an amount of time before the bad guys discover it and exploit it. Obscurity makes both times longer.

In the case where the good guys discover the vulnerability first, there is no real difference. In theory it gives the good guys a little longer to devise a fix, but the time required to develop a patch is typically much shorter than the time required for someone else to discover the vulnerability, so this isn't buying you much of anything.

In the case where the bad guys discover the vulnerability first, it lengthens the time before the good guys discover it and gives the bad guys more time to exploit it. That is a serious drawback.

Where obscurity has the potential to redeem itself is where it makes the vulnerability sufficiently hard to discover that no one ever discovers it, which eliminates the window in which the bad guys have it and the good guys don't.

What this means is that obscurity is net-negative for systems that need to defend against strong attackers, i.e. anything in widespread use or protecting a valuable target, because attackers will find the vulnerability regardless and then have more time to exploit it.

In theory there is a point at which it may help to defend something that hardly anybody wants to attack, but then you quickly run into the other end of that range where you're so uninteresting that nobody bothers to attack you even if finding your vulnerabilities is relatively easy.

The range where obscurity isn't net-negative is sufficiently narrow that the general advice should be don't bother.


> obscurity is net-negative for systems that need to defend against strong attackers

If that's the case, why doesn't the NSA publish Suite A algorithms?


> If that's the case, why doesn't the NSA publish Suite A algorithms?

The math on whether you find the vulnerability before somebody else does is very different when you employ as many cryptographers as the NSA.

They also have concerns other than security vulnerabilities. It's not just that they don't want someone to break their ciphers, they also don't want others to use them. For example, some of their secret algorithms are probably very high performance, which encourages widespread use, which goes against their role in signals intelligence. Which was more of a concern when the Suite A / Suite B distinction was originally created, back when people were bucking use of SSL/TLS because it used too many cycles on their contemporary servers. That's basically dead now that modern servers have AES-NI and "encrypt all the things" is the new normal, but the decision was made before all that, and bureaucracies are slow to change.

None of which really generalizes to anyone who isn't the NSA.

A lot of the Suite A algorithms have also been used for decades to encrypt information which is still secret and for which adversaries still have copies of the ciphertext. Meanwhile AES is now approved for Top Secret information and most of everything is using that now. So publishing the old algorithms has little benefit, because increasingly less is being encrypted with them that could benefit from an improvement, but has potentially high cost because if anyone breaks it now they can decrypt decades of stored ciphertext. It's a bit of a catch 22 in that you want the algorithms you use going forward to be published so you find flaws early before you use them too much, while you would prefer what you used in the past to be secret because you can't do anything about it anymore, and the arrow of time inconveniently goes in the opposite direction. But in this case the algorithms were never published originally so the government has little incentive to publish them now. Especially because they weren't publicly vetted before being comprehensively deployed, making it more likely that there are undiscovered vulnerabilities in them.


If we assume for the moment that there are no ulterior motives:

Cryptography is the keeping of secrets. Obscurity is just another layer of a defense in depth strategy. Problems occur when security is expected to arise solely from obscurity.


Because they are the adversary.


I actually agree, sometimes.

But you've got to do exercises thinking out the worst cases (what an attacker could do if they didn't make any "unforced errors") in order to think about defending against them (i.e., to think about security at all, which nearly every dev has to do).

Which is what the above was. We cannot avoid thinking through the best case for the attacker, in public, if we are to increase our security chops. It's not "advice for the attacker".


For widespread and international software like this, it's actually a step backwards not to openly discuss how security can be potentially beaten.


This "advice" is even written in the report itself:

  The attack was loud; removing all developers caused everyone to get emailed.
  Given the credential taken, its likely a quieter attack would have provided a
  longer opportunity window.


There will always be bad actors of various levels of competency. Should the public only be aware of the simplistic ones, or should we make them aware of the worst-case scenario so they can be aware of the risks of poor security?

Serious question.



