The inherent security of the technique is actually quite strong. The tooling is terrible and many other problems exist with AAM, but in general the idea of having a shared “inbox” is good for anonymity. There is no way to tell which message is intended for whom. Receiving messages is unlinked, which is obviously good for anonymity. Sending requires a different set of technologies to ensure that the message delivery is unlinked. Tor solves part of this problem.
AAM had serious limitations. Things fall down a bit because the underlying technology (newsgroups, PGP, and so on) was not designed for anonymity, fail-closed security, or ease of use (and difficulty of misuse).
A bespoke system could work, but the limiting factor is selecting an “inbox” that is widely distributed and heavily used (the anonymity is directly correlated to how many people access the inbox/inbox container.)
# Case Study: YardBird’s group (mostly) escapes arrest
A similar method for secrecy was used by a CSAM group. It was penetrated by the police when they arrested a member who turned informant to reduce his sentence. The police monitored the group from the inside for an extended period (as I recall, over a year). Despite having complete access to all the communications and technical surveillance data, and international cooperation between police forces, the majority of the group evaded arrest.
There was a set of operating rules that the group followed and everyone who did so escaped the net. I wrote about it in 2013 if anyone is interested in digging deeper into the story.
> Even minutiae should have a place in our collection, for things of a seemingly trifling nature, when enjoined with others of a more serious cast, may lead to valuable conclusion.
And you can detect when you are being ptrace()d because a process cannot be ptrace()d twice. Unless they changed Linux again.
There are also timing issues that show up, and you can do any number of anti-debugging tricks which would reveal that the environment is being manipulated. Which is an instant red flag.
In general if the attacker is running at the same privilege level, you can probably evade it or at least detect it. I’m somewhat surprised there isn’t a basic forensics tool that automates all of these tests already.
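The ptrace()-twice test is easy to automate. A minimal sketch in Python via ctypes (the helper name is mine; note the side effect described in the comment):

```python
import ctypes

PTRACE_TRACEME = 0  # request value from <sys/ptrace.h>

def being_traced() -> bool:
    """The kernel allows at most one tracer per process, so asking to be
    traced by our parent fails (returns -1, EPERM) if a debugger or
    monitoring tool is already attached.
    Side effect on success: our parent *becomes* our tracer, so do this
    early, or inside a short-lived child that just reports the result."""
    libc = ctypes.CDLL(None, use_errno=True)
    return libc.ptrace(PTRACE_TRACEME, 0, 0, 0) == -1
```

If this returns True on a box where nothing should be debugging that process, something is attached.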
“usage: sus [-h] [-v] [-V] [-o file] [-O format]

Sus tests for common indicators of compromise using generic tests for common methods of implementing userland rootkits. It will check for LD_PRELOAD, ptrace(), and inotify(), and verify that the system binaries match the upstream distribution hashsums. It can also be used to dump the file system directly (warning: slow!) for comparison against the output of `find`. See EXAMPLES for more.”
Implementation is left as an exercise for the reader.
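As a starting point for that exercise, here is a hedged sketch of just the LD_PRELOAD portion (function name and output format are my own invention, not a real tool):

```python
import os

def ld_preload_indicators() -> list:
    """Check the two standard preload injection points. A userland
    rootkit usually scrubs these from hooked libc routines, so read
    /proc/self/environ raw instead of trusting getenv()."""
    hits = []
    with open("/proc/self/environ", "rb") as f:
        env = f.read().split(b"\x00")
    if any(e.startswith(b"LD_PRELOAD=") for e in env):
        hits.append("LD_PRELOAD set in environment")
    if os.path.exists("/etc/ld.so.preload"):
        hits.append("/etc/ld.so.preload present")
    return hits
```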
There's also the TracerPid field in /proc/PID/status, which is non-zero when a process is being ptraced.
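Reading it is a one-liner away from a reusable check (a sketch; the helper name is made up):

```python
def tracer_pid(pid="self") -> int:
    """Return the PID of whatever is ptrace()ing `pid`, or 0 if nothing
    is attached, read straight from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("TracerPid:"):
                return int(line.split()[1])
    return 0
```

Running the same process under strace or gdb makes the field non-zero immediately.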
> Sus tests for common indicators of compromise
There's a lot of stuff that Linux malware tends to do that almost no legitimate program does; this can be incorporated into the tool. Just off the top of my head: some botnet clients delete their executable after launch, in addition to being statically linked, which is an almost 100% guarantee that it's malware.
Check for deleted executables: `ls -l /proc/*/task/*/exe 2>/dev/null | grep ' (deleted)$'` (the parens are literal; in grep's basic regex syntax `\(...\)` would be a group, so the escaped form never matches)
Although a malicious process can just mmap libc for giggles, and also theoretically libc can be named in a way that doesn't contain "libc". A more reliable method is parsing the ELF header in /proc/PID/exe to determine if there's an ELF interpreter defined.
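That check is only a few lines of header parsing. A sketch (ELF64 little-endian only; offsets from the ELF spec, helper name is mine):

```python
import struct

PT_INTERP = 3  # program header type for the ELF interpreter

def is_dynamic(path) -> bool:
    """Return True if the ELF at `path` declares a PT_INTERP segment,
    i.e. it is dynamically linked."""
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        if ident[4] != 2:
            raise ValueError("only ELF64 handled in this sketch")
        f.seek(0x20)                       # e_phoff
        (e_phoff,) = struct.unpack("<Q", f.read(8))
        f.seek(0x36)                       # e_phentsize, e_phnum
        e_phentsize, e_phnum = struct.unpack("<HH", f.read(4))
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            (p_type,) = struct.unpack("<I", f.read(4))
            if p_type == PT_INTERP:
                return True
    return False
```

Run against /proc/PID/exe, a statically linked process with no PT_INTERP is worth a closer look.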
You can also check for processes that trace themselves (TracerPid in status == process id), this is a common anti-debug tactic.
You can also hide the process by mounting a tmpfs on top of its proc directory; tools like ps ignore empty proc directories due to the possibility that the process has terminated but its proc directory is still around. This is obviously easy to detect by checking /proc/mounts or just listing empty directories with numeric names in /proc.
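Both detection routes can be sketched together (hypothetical helper; heuristic only):

```python
import os
import re

def hidden_pid_candidates():
    """Two cheap checks for a tmpfs mounted over /proc/<pid>:
    empty numeric directories in /proc (a live process always exposes
    status, cmdline, etc.), and mount entries whose mountpoint sits
    under a numeric /proc path."""
    empty = []
    for name in os.listdir("/proc"):
        if name.isdigit():
            try:
                if not os.listdir(os.path.join("/proc", name)):
                    empty.append(int(name))
            except OSError:
                pass  # the process exited mid-scan; not suspicious
    with open("/proc/mounts") as f:
        overmounts = [line.split()[1] for line in f
                      if re.match(r"/proc/[0-9]+(/|$)", line.split()[1])]
    return empty, overmounts
```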
Another heuristic can be checking /proc/PID/cmdline for two NUL bytes in a row. Some malware tries to change its process name and arguments by modifying the argv array; however, it is unable to change the size of cmdline, hence having multiple NUL bytes is a viable detection mechanism. Legitimate programs do this too, but it's rather uncommon.
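A sketch of that heuristic (helper names are mine):

```python
def has_nul_padding(cmdline: bytes) -> bool:
    """A normal cmdline is b'arg0\\0arg1\\0...\\0' with single NUL
    separators. A process that shortened its argv in place cannot shrink
    the kernel's idea of the region, so runs of NUL padding remain.
    Heuristic only: some legitimate daemons that call setproctitle()-style
    helpers trip it too."""
    return b"\x00\x00" in cmdline

def argv_overwritten(pid) -> bool:
    with open(f"/proc/{pid}/cmdline", "rb") as f:
        return has_nul_padding(f.read())
```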
You can obviously combine these heuristics to make a decision whether the process is malicious, as by themselves they aren't very reliable
I am reasonably sure that the intended behaviour of Linux is that a process can only be ptraced by one other process at a time.
However, a few years ago, I discovered that a process inside a container being ptraced could be ptraced by a second process running as root at the host level.[1][2] I don't know if that's been patched away since then, but my assumption at the time was that it meant that the "there can be only one" aspect of ptrace was more of an arbitrary decision, not a hard limit.
[2] I'm not sure if the "double ptrace" scenario made it into the final document, but it's the same techniques discussed in there, just attach a tracer to the containerized process from inside the container before you attach gdb or asminject.py from outside of the container.
It is not real, OP is suggesting that it be written. I was so very tempted to write it. The only downside to writing it is that once you have it in circulation, it moves the goal posts and rootkit authors with some skill will use less obvious techniques to hide their software.
The context of this post is somewhat important. It is a direct response to a post titled: Symbiote Deep-Dive: Analysis of a New, Nearly-Impossible-to-Detect Linux Threat
Userland rootkits are not “nearly-impossible-to-detect.” They are not novel, they are not impossible to detect, and they are not the pinnacle of hacker techniques.
I felt that it was worth pointing out that the history of userland rootkits goes back a long way and that they were very easy to detect because they rely on proxying all access to the system. If you bypass the hook they use to enter their proxy, then you evade them entirely.
Forensic and incident response guides used to advise using static linked binaries for exactly this reason. There are guides from the 1990s telling people to do this because userland rootkits were an issue (before kernel rootkits everyone used userland rootkits.)
Here is an example from 2013 which points out that you can’t trust any binaries/libraries on the potentially compromised machine and should use statically linked tools. [0]
LD_PRELOAD rootkits are not new and they are not nearly impossible to detect. My post listed a number of ways to detect them, all of which have been known for decades.
It’s worth noting that this has a measurable and enormous impact on system performance, because they are usually adding a bunch of strcmp() or similar to every invocation of a bunch of different libc calls.
Bringing your own static linked busybox will still evade that rootkit.
If the attacker has modified the environment to present a specific view of system state, bringing your own environment defeats it.
There are tricks which are better than modifying things to hide. For example, there is a race condition between opendir() and readdir() which you can win by using inotify(). Then you can unlink() whatever, wait a while, then link() it back in. During that time it will be deleted and thus invisible to any detection. (I saw a demo of this 12 years ago, so I might be misremembering a bit. I know it used inotify() and unlink())
Like a sibling comment mentioned, process injection can also happen. But besides that, if your busybox wasn't already on the system, then what's the value of bringing it when you suspect a rootkit? Userland or not, a memory acquisition of the system for off-box analysis (e.g. with Volatility) would be ideal and most reliable in my opinion.
Good to see that the technique is still viable after two decades.
On a related note, this sort of issue (difficulty researching the origins of techniques, and hacking history in general) is a problem that will only get worse. As a community we haven’t created an institutional memory beyond “the oldest hacker you know.”
> we haven’t created an institutional memory beyond “the oldest hacker you know.”
Which I'd wager is due to over-reliance on search engines. The net is stuffed to the brim with useless bullshit designed to steal eyeballs, so finding anything somebody published two decades ago is now impossible. Internet Archive is useful if you already know what website used to exist, not so useful if you don't.
Whatever happened to that website that was a combination of blog + archive of exploit POCs? Wasn't it called PacketStorm? I just tried to find it with two search engines and came up empty. That would've been an ideal place to track down old techniques and news.
> Good to see that the technique is still viable after two decades.
It absolutely blew my mind to learn that Debian is still shipping with Yama mitigations disabled by default (last time I checked, which was about a year ago). I think they're one of the only mainstream distros to be doing this, although I haven't done a comprehensive survey.
I think this is so users can choose what level of restriction they want using kernel.yama.ptrace_scope with sysctl, 0 being the default and 3 being the most restrictive.
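For reference, the four Yama levels, paraphrased from the kernel's Yama documentation (the file path below is illustrative):

```
# /etc/sysctl.d/10-ptrace.conf
# kernel.yama.ptrace_scope:
#   0 - classic: any process may ptrace any other process of the same uid
#   1 - restricted: only ancestors, or tracers named via prctl(PR_SET_PTRACER)
#   2 - admin-only: attaching requires CAP_SYS_PTRACE
#   3 - no attach: nobody may attach; cannot be lowered again until reboot
kernel.yama.ptrace_scope = 1
```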
Hah! I was just referencing this paper the other day when mucking with the linker for an unrelated reason. I probably should have chosen a better name for my article - I am trying to cover the cases of "you have Linux command execution, how do you run native code?" as opposed to your approach which as I understand is more: "you are running native code, how can you load a separate ELF in-process?"
Agreed about institutional memory; zines/blogs are very important; but at the end of the day I usually end up just asking in some corner of IRC.
> "you have Linux command execution, how do you run native code?"
I was going to ask you what the precise situation is in which you'd apply the ideas from the blog post as I don't know what exactly is meant by "process injection". I think the article would benefit from providing a little bit more background for us non-hackers / non-pentesters. Still, very interesting article – thank you!
PS: The article says
> you need a writable location on disk; this is not always true in e.g. read-only chroots, filesystems, containers, etc
Couldn't you create a temporary file in-memory (e.g. in /dev/shm or in some tmpfs), make it executable (+x) and then execute it?
Apologies it's a little scattered. Roughly it's about dealing with situations where you can execute a command but now want to run a native executable, and how much noise such a thing will make in the presence of monitoring.
> Couldn't you create a temporary file in-memory (e.g. in /dev/shm or in some tmpfs), make it executable (+x) and then execute it?
It all depends on how your environment is set up: whether a tmpfs or shm device is mounted and writable by your user is up to the admin. For example, on many embedded devices you often want to avoid writes to prevent any sort of filesystem wear, or because you have a write-once media like a ROM; so the whole fs will be mounted readonly. With chroots it's best practice to provide a minimal environment - unless tempfiles are needed there will usually not be a /tmp. Try `docker run --read-only -ti ubuntu bash` as another example:
```
root@9302f159e0e0:/tmp# touch a
touch: cannot touch 'a': Read-only file system
```
My apologies for the delay, Joe, I was on vacation. Now that I'm home, I gave this a try but at least on my machine writing to /dev/shm/ works as I remembered, even with --read-only:
```
$ docker run --read-only -ti ubuntu bash
root@3dfdab770505:/# echo "bar" > /dev/shm/foo
root@3dfdab770505:/# cat /dev/shm/foo
bar
```
So, again, couldn't you just write your binary to /dev/shm and execute it?
Ah, so, in 2005 I wrote about that when I implemented rexec() — remote exec() — which takes a binary and then copies it over an arbitrary text only link (like ssh) and executes it completely in memory without touching disk.
The idea was that if you have access to a box via a shell and you want to run your own binary without leaving evidence behind, you’d use rexec() to do that.
That is a really useful implementation and a good way to use gdb to "live off the land". It is interesting how ops changes like moving to containers/VMs affect pentesting techniques; over the last decade I find myself relying less and less on live-off-the-land in a lot of engagements. When you can use them though they have a lot of advantages.
Also never realized that others have implemented (and I guess patented?) syscall proxying, I have heard that idea discussed before for offensive tooling and wondered how well it works in practice.
Syscall proxying was very old even when I wrote that article. The problem with syscall proxying is that it is slow. Take any process and imagine adding network latency to every single syscall. On a local network it is incredibly slow, but over any sort of real distance it is just impossible.
That’s why I pushed everything to the target system. Run it local as much as possible.
Back then there were no containers or VMs to use. These days I think you should be bringing your environment with you. Unless there are serious reasons not to.
That makes sense, something like `grep $USER /etc/passwd` would shuffle the whole passwd file over the wire, etc; for a lot of post-exploitation stuff I could see it causing more trouble than it's worth.
Oh, it’s far far worse than that. Just the core operation would be:
open() — network round trip
fstat() — network round trip
brk() — network round trip
read() — network round trip
Shuffle data over network
read() — network round trip
Shuffle data over network
Etc etc
For a “grep root /etc/passwd” there are 88 syscalls on Debian 11. If we assume a very generous 50 ms of latency for every syscall, that means we’re waiting 4.4 s for the result.
The use case for syscall proxying is limited to when you don’t want to upload an exploit onto a target machine, but you need to run the exploit on that machine. So it could be an LPE or something.
Userland exec was a very interesting read when I came across it some years ago; thanks for publishing it!
The technique still mostly works, but on recent glibc+Linux, you also have to unregister the rseq area before cleaning out the address space (which requires computing the address first, which is a little cumbersome). Otherwise, if the rseq area is registered but unmapped, the kernel will forcefully stop the program.
(That said, nowadays memfd_create + fexecve is likely a more robust alternative in many cases.)
I didn't vote, but to answer your question, you couldn't possibly have done the "exact same thing" as a linux-specific userland exec technique and have it work as a DLL injection technique on Windows. I'm sure it could've been conceptually similar, but not identical, and maybe those differences would be interesting to discuss.
"Exact same thing" as in re-implemented a basic OS feature in user mode to bypass an intercepted feature. Obviously it can't be literally exactly the same thing, because that wouldn't be implementing something, but rather using existing source code.
I didn't vote either way until you edited your post to add the meta comment. I don't like posters discussing moderation of their posts (I find that discussion about themselves to be vain and distracting from the topic), and I feel gross and off topic myself talking about it even now.
I didn't comment about the voting itself, I asked what was wrong with my comment. There's a difference between "oh, I'm getting down-voted, this sucks" and "what's wrong with what I said?" The former is whining, while the latter invites discussion. If telling someone else what you think about their comment counts as "meta" discussion and is therefore boring then that kind of kills the purpose of a discussion forum.
Maybe one day HN will realise that giving the ability to downvote is pretty pointless and needlessly open to abuse.
That negative ability just promotes negativity itself, and it's a bloody pain when I am reading a greyed-out message because I am too stubborn to go into the settings and change the defaults; so I just steam on, thinking HN is using a relic of an ideology.
If you're part of a cult, and you're not allowed to eject someone from the cult, you can at least ignore them or walk away from them when they talk to you, if you don't agree with what they say/do. It's a way for the members of the cult to reinforce the ingroup's behaviors without having overt power over the group. For other people watching this behavior, it reinforces the need to align themselves with the larger group, to avoid the same fate.
The first rule of fight club is you don't talk about fight club.
You're just describing the behavior of every society ever.
I'm not sure where some folks got the idea that they could say whatever they wanted in whatever community they wanted and not expect any reactions, but that's clearly an inhuman expectation. I'm guessing that attitude is a byproduct of internet commenting.
I guess you could call society an "ingroup" or a "cult" like you're doing. It technically fits the definition. It also gives you a convenient victim complex to wave about whenever you feel like people aren't agreeing with you enough. But let's be honest: not every social pariah is a Galileo.
Not really. Society is a broad concept, which contains many different concepts, two of which are ingroup and outgroup. It's a complex topic, but suffice to say that within the concept of ingroup, there are some common behaviors specific to certain kinds of groups, but not all of them. Much like with EQ, if you learn about the tendencies of humans in social groups, you can learn to manage your own behavior to resist falling into common traps. But those who don't learn about them are often subject to them, with unfortunate effects.
As to your assertion ("idea that they could say whatever they wanted in whatever community they wanted and not expect any reactions"), I don't think I or anyone else said that or suggested it. Sounds like a sweeping generalization designed to reinforce an opinion you want to believe. But it also seems like you drank the kool-aid a while back, so I'm pretty sure I can't influence your opinion.
You're just repeating what I said with different words. Yes, throughout history, people who bucked the rules and practices of society might have been called "out group members", but that's just a label. The substance is that they're eschewing society, and obviously should expect society to react and shun them. Confront them, maybe. Some societies used banishment.
Either way, the label is being self-applied here to indulge a victim complex in someone who doesn't like that society is reacting to their decision to go against society. Consider that total jerks and crazy folks use exactly the same logic.
Not that anyone here is a jerk, or crazy, just that the "I'm an independent thinker and everyone else is drinking the kool aid" argument is tired and oblivious and self-indulgent and applies equally when used by jerks and nuts. The dude on the street corner shouting racial slurs at the top of their lungs and swatting at invisible spiders thinks he's "just part of the out group", too.
>I'm not sure where some folks got the idea that they could say whatever they wanted in whatever community they wanted and not expect any reactions
Usually, historically, if you said something someone didn't like you could expect the person to either confront you directly, or to silently change their opinion of you and possibly change their behavior, perhaps to your detriment. Even before the Internet existed, there was something else someone might have done, which is to send you an anonymous note reading "fuck you". No argument, no way to reply, nothing but a cheap gratuitous offense that serves no purpose other than upset the person who receives it. I argue that downvoting a comment you don't like is equivalent to that.
> Even before the Internet existed, there was something else someone might have done, which is to send you an anonymous note reading "fuck you". No argument, no way to reply, nothing but a cheap gratuitous offense that serves no purpose other than upset the person who receives it. I argue that downvoting a comment you don't like is equivalent to that.
A better equivalent would be arguing from an anonymous throwaway account (*cough*), where any replies go into the void, rather than to an established community member.
A comment either adds value in people's opinion, or it detracts, or neither. The society that is HN has a voting system to communicate that info, because that is what we prefer.
If people find the non-meta substance of a post interesting enough to reply, they absolutely will do so. But we have the guidelines recommending against moderation meta-discussion because you and the moderation of your comment aren't the topic, and you shouldn't try to make it the topic, because we find that to be more boring than the actual topic. Maybe you find yourself more interesting, or you think conversation is boring unless it includes moderation meta-discussion. In most cases, I don't think others agree (hence the guidelines and the voting).
You can call our society an "in-group", you can call someone who ignores what society wants an "out group member", but in the end, I don't know why it surprises people that society responds with feedback (though perhaps not in the exact feedback format one may personally prefer).
>A better equivalent would be arguing from an anonymous throwaway account (cough), where any replies go into the void, rather than to an established community member.
I don't agree with this. How can you say that your reply "goes into the void", just because the name of the account you're responding to is "throwaway892238" rather than "fluoridation"? In either case there's a person on the other end, how does the screen name they go by change anything? I could equally say that unless you tell me your real name, or unless you let me put my hand down your pants, or any other equally arbitrary standard, I don't consider you a real person. Neither the screen name, nor their real name, nor what's in their pants, is what the actual person is.
> How can you say that your reply "goes into the void", just because the name of the account you're responding to is "throwaway892238" rather than "fluoridation"?
I have no idea which of our community members any given throwaway is, I'll never know, their comments are just anonymous notes left with no way to respond to an actual member of our community with an actual history and an actual reputation (vs a throwaway they may never check again). That's why a throwaway comment is like an anonymous "fuck you" note. Or, put another way:
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.[0]
My point is further supported by the rest of my post that you responded to, which is the substance of our discussion. Continuing to argue over an analogy instead of the substance of the matter would be silly.
Yeah. This doesn't appear to be the same sort of thing (malicious code being integrated into the kernel itself). This is just a backdoor that leverages an exploit.
I have the full story on that incident. It is actually really funny.
If the guy who did it wants to come forward, that is his decision. [edit: I won't name names.]
He did provide me the full story. He told me with the understanding that the story would go public, so I will dig it up and post it.
I also interviewed the sysadmins who were running the box at the time.
1. it was not an NSA operation, it was done by a hacker.
2. it was discovered by accident, not because of clever due diligence.
Basically, there was a developer who had a flaky connection, and one time his commits didn't go through. To detect this in future, he had a script that would download the entire tree from the server and compare it against his local copy to make sure that his changes had been committed.
It was discovered because of the discrepancy between his local working copy and the upstream copy. Which was checked not for security reasons, but because sometimes the two were out of sync. That's all. Just dumb luck.
The sysadmins are still quite bitter about it. I know how it feels when your box is hacked and you really take it personally.
The code wasn't added by hacking the CVS, as far as I remember, but rather through a hacked developer with commit rights.
Geez, this crowd. The clearest evidence that it was not an NSA attack is that it was not very good. It modified a CVS mirror. At no time was the source of truth (the BitKeeper repo) in any danger. Anybody who knew how this stuff worked at the time would have known it would be caught immediately. Not exactly state-level expertise; pretty sad if it was the NSA.
> The clearest evidence that it was not an NSA attack is that it was not very good.
I suspect you are being sarcastic, but in case you aren't, you may want to reexamine your assumptions.
The colossal incompetence that is synonymous with government work doesn't magically stop at three-letter agencies. The FBI/CIA communication fuckups before 9/11 are just one famous example.
The idea that the NSA is staffed with "uber hackers" is a Hollywood fantasy. A government job working as a hacker is still a government job. Why would someone with that skillset, who can get a job at FAANG for 10x the salary, submit to the bureaucracy and monitoring BS that comes with working for an intelligence agency? I'm sure there are a select few who find this appealing, but the vast majority are just going the take the money and the free life.
Those two were my wake up calls. The US absolutely is in the hacking business, but they are not in the getting caught business. Everything we have seen so far is incredibly sophisticated and took years to discover. How can you then go out and claim that the NSA isn’t incredibly competent?
See also all of the intelligence the US has provided about the Russian invasion into Ukraine. The US is really good at spy craft.
maybe 10x the salary (probably not) but also a correlated increase in hours instead of a contractually mandated maximum of 40 hours, combined with the legal inability to do work from home, discuss work at home, and a lot of related perks.
Also "perks" like having your life put under the microscope at regular intervals, going to prison if you talk about what you do, etc.
And I strongly doubt that agencies that are known to routinely violate the law, the constitution, and human rights care about "contractually mandated" 40-hour workweeks.
> And I strongly doubt that agencies that are known to routinely violate the law, the constitution, and human rights care about "contractually mandated" 40-hour workweeks.
lol they 100% do, because they're all contractors bidding for the work. If one company bid X man-hours for a cost-plus contract and won as the lowest bidder and then put in 3x the time, either the winner would sue for being underpaid or, if they were paid, the losers would sue because of impropriety.
FAANG money is a relatively recent thing. Stock options used to be the only way you might make millions as a developer, and that was always a gamble. The NSA probably has a lot of seasoned developers who started their careers when the pay gap was much smaller.
> I'm sure there are a select few who find this appealing
That’s really all you need dude. And yet both private and public sector intelligence jobs are selective. Supply and demand might help you reconcile your other points.
You slightly underestimate the pool of extremely patriotic or nationalistic smart engineers and scientists around.
If your basic thesis was correct no video games would get made either. Most of them could go get that FAANG money for arguably better work life balance. People have more motivations than you realize. And the idea that all the smartest engineers and scientists exclusively work for FAANG is a contrivance only believed on this dumb site. (The equally idiotic corollary is that all the smartest people work in software).
I also think you are underestimating the lifetime earning potential of top intelligence workers. 9 to 5 government jobs don’t have to be forever.
Finally, the sophistication of state level attacks such as in Iran is clear. The evidence exists, and you are wrong.
And you’re missing the point: it isn’t even that this attack wasn’t sophisticated, it’s that clearly no one sat down for even a few minutes to discuss how it would be detected. An organization, even a private hacking group, would have discussed this.
Not to mention it being extremely difficult to travel internationally, and not being able to have close personal friendships with many people who live in other countries. Not being able to partake in THC consumption EVER, much less any other recreational substance besides alcohol. The list goes on.
I understand that it pays very well and there's decent work/life balance in terms of hours. But you have to essentially work in a windowless cell with no internet. And for lots of people with the curious hacker mentality, it would be a chore to "keep your nose clean" as they say.
I live in the DC area and the stereotype of the bland, khaki, polo, and white sneakers wearing boring person is true.
This thread is already full of silly archetypes and over generalizations not borne out by the reality. With that in mind: When you say drug using, “curious hacker mentality” all I can think of is Eric Raymond and the implication that this wizard of fetchmail is just too smart to work with the boring likes of von Neumann, Turing and Shannon, Tao, etc.
> Not being able to partake in THC consumption EVER
The all caps tickles me. I don’t think this is a huge sacrifice outside some limited circles. Some of the smartest people are ethical vegetarians.
I replied to their comment because it was related, but where do you get the impression it is not supportive?
I’m responding to the idiots poking holes in that claim.
Wait was the guy you know the hacker or someone who discovered the hack by accident? If the latter, how do you know anything about the hacker's identity or motive?
Sounds like OP interviewed the person who uploaded the code, whose system was previously infiltrated (it could still be the NSA). So why say "If the guy who did it wants to come forward, that is his decision. But he did provide me the full story"? It doesn't sound like OP interviewed the "guy who did it"...
I read that the other way. "If the guy who did it wants to come forward, that is his decision. But he [still talking about the guy who did it] did provide me the full story."
That is, the perpetrator gave him the full story, but he won't name names, because it's the perpetrator's choice whether or not to reveal his identity.
he was more specific, but I (a) don't remember the name off the top of my head, and (b) don't think it is beneficial to put them on blast. It isn't their fault they got hacked 20 years ago.
To be clear: you're telling us the full story of the discovery, not the full story of the exploit? You and your source don't know who the attacker was, right?
What is there to say about the hack? Like everything back then, it was probably accomplished by exploiting trust relationships. I can ask him, but it is not that interesting 20 years later.
What is there to say about the [discovery]? Like everything back then, it was probably accomplished by [a simple source code diff]...it is not that interesting 20 years later.
You get the idea. The story you know might be interesting to you because you happen to know the person involved. And it is sort of interesting? But not really as interesting as the _full_ story would be. In particular because your grammar in your original comment kind of implies you knew the actual attacker.
This all seems fairly obvious to me? Is there anything we're missing about the discovery? It's pretty mundane that one of hundreds of devs working on that source code happened to have a vanilla copy, especially in 2003 with a less reliable and slower internet.
A state actor would have done a much better job. This was detected nearly immediately and anyone that knew how the system was setup (which was public knowledge) would have known this would be caught. The state level hackers are not that dumb.
If there was a serious backdoor attempt, then this was the distractor.
And seriously back in those days especially Linux didn’t need much help with getting root exploits in the tree.
> Like everything back then it was probably accomplished by exploiting trust relationships
That's wrong on many levels. Bold and stupid "hacks" committed by teenagers using SE tend to get a lot of traction, because it is both bold and stupid. This hasn't changed. But "back then" there was much more than that...
“Deanonymising alt anonymous messages”
https://www.youtube.com/watch?v=l5JBMyxvuH8
The accompanying blog post is here:
https://ritter.vg/blog-deanonymizing_amm.html
https://grugq.github.io/blog/2013/12/01/yardbirds-effective-...