
I wonder how someone manages to figure this out...


Once they find the software, they can reverse-engineer it. Finding it the first time is the difficult part. That can happen based on a tip-off (e.g. every security team in the world will be checking their network for these processes, domain names etc. now), a lucky find (e.g. if the backdoor triggered weird crashes and someone took a look), or because it's linked to some other bad activity (e.g. someone finds some form of known malware and investigates how it got there).

In this case, it's likely that they somehow found the backdoor in SolarWinds' network and realized what was happening that way.


The vast majority of these finds come from honeypots where any unaccounted packet is already 100% an attack. Honeypots often run alongside legitimate systems and may have several different levels of firewall to sieve different vectors or assess the depth of the attack.
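The detection principle is simple enough to sketch: a honeypot is a service with zero legitimate users, so every connection that arrives is signal. A minimal toy sketch in Python (the port handling and structure are illustrative choices, not from the comment above):

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Minimal TCP honeypot sketch (illustrative, not production code).

    The service has no legitimate users, so every connection that
    arrives is, by definition, unaccounted-for traffic worth logging.
    """
    hits = []  # (ip, port) of every peer that connects
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port=0 lets the OS pick a free port
    srv.listen()
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            hits.append(addr)  # a real deployment would alert and capture here
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, hits, t
```

A real honeypot would also record payloads and sit behind the layered firewalls the comment describes, but the core idea is the same: on this socket, any packet at all is an alert.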


>Finding it the first time is the difficult part.

I frankly don't know how anyone finds any of these sorts of buried attacks anymore. When software systems were simpler, it was already difficult to know enough about a system to detect or observe abnormal behavior, and that was with a fairly deep understanding of how things should be.

These days, so many systems are some tower of SaaS APIs mixed with commercial on-prem software, mixed with internally developed software spread across multiple teams, developers with high turnover rates, a focus on functional software over specification/documentation for anyone to compare against, etc., that it seems like these issues are flukes, discovered either by dedicated teams looking only for them (security auditing teams) or by developers who happen to run across an abnormal behavior in the course of normal maintenance.


Right. If given the prompt "this system likely or definitely has a backdoor - find it", this won't be all that hard to find. Especially if you already have a PCAP that you suspect has some of the malicious traffic.

But it could go unnoticed and disregarded as completely normal, benign network traffic for years, or perhaps forever.


Thanks. This is voodoo magic to me. I understand the basics of computer programming, but this is way more fascinating than the stuff that I know and touch. I doubt this stuff is taught in schools; maybe the basics like network programming, but this is definitely highly sophisticated. Kudos to whoever designed the scheme and whoever found it!


Random aside: the Windows Error Reporting system (aka Dr Watson) was primarily a tool to help people write better code. Crash reports got sent to Microsoft, referenced against symbol files and aggregated into call stacks that crashed by frequency. Companies could sign up to get summaries of the reports and improve their software based on real world usage. At the time, this was a big deal.

Then someone realized it was also a good early warning system for new viruses, as many viruses would crash their host process in novel ways that were unlike the usual software-induced errors.

WER reports also could do other things. Sometimes bizarre, impossible crashes would happen. Microsoft would investigate some of these by showing a popup to the user inviting them to participate in analysis. If the user consented, they were put in contact with a Microsoft engineer. Turned out a lot of people were running unstable, overclocked hardware sold to them by vendors who had fraudulently misrepresented the hardware.

The telemetry that is out there is amazing, but not as amazing as the secrets it can reveal.


> Turned out a lot of people were running unstable, overclocked hardware sold to them by vendors who had fraudulently misrepresented the hardware.

The original devblog from 2005 is (https://devblogs.microsoft.com/oldnewthing/20050412-47/?p=35...). Aside: Upon pulling that up, I recognized the author as the one who wrote my favorite article about undefined behavior (https://devblogs.microsoft.com/oldnewthing/20140627-00/?p=63...).


The author is Raymond Chen, and the blog is probably the single most influential blog on Windows internals. He has decades of amazing posts that are well worth a read.


And comments! Until Microsoft moves the blog URLs again and breaks every link and deletes every comment.


And many of them are condensed into a book he wrote. Same title as the blog.


That’s really interesting to read about. I always recall an early case in my career, where a customer’s storage device crashed, leaving a unikernel core file. They suffered data loss so it got a lot of engineering attention. This model was old even circa 2001 and ran a DEC Alpha processor. After a week of full-time investigation by our best engineer, the conclusion was that the processor...took the wrong branch. That was it, it just failed like a broken machine. Which I guess is what it was!


If you're interested in this stuff, "Countdown to Zero Day" by Kim Zetter [0] is a fascinating read. It's lightly written but not light on technical details, and it provides a very detailed account.

[0]: https://www.goodreads.com/book/show/18465875-countdown-to-ze...


My CS program included a class that required us to exploit compiled code. Phrack Magazine and loads of other public resources probably have ideas like this for concealing data. Kids I knew in junior high and high school were writing password stealers for Windows that would just iterate over every HWND (or whatever Windows 9x called window handles) looking for inputs of type password, then concealing the app and the results.

It doesn't take a great deal of sophistication to come up with some of these things, just a bit of cleverness and exposure to the possibility of cleverness.


Your phrasing is very wise -- "cleverness and exposure to the possibility of cleverness." I'm going to ruminate on that one.


Thanks for the resources!


Odds are, if you're a programmer, that you'd have come up with a very similar scheme, given knowledge of the kinds of messages the software is expected to send or receive. I.e. leave the envelope plausible-looking and stash the payload in the random-seeming bits.
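For instance, here's a toy sketch of that idea: stash a short payload in a field that still parses as an ordinary GUID, so the envelope looks routine. (This is a hypothetical illustration, not the actual SUNBURST encoding, which was more elaborate.)

```python
import uuid

def hide(payload: bytes) -> str:
    """Pack up to 16 payload bytes into a string that parses as a GUID.

    Toy illustration only: a real implant would also encrypt/encode the
    bytes so the field looks statistically random rather than structured.
    """
    padded = payload.ljust(16, b"\x00")[:16]
    return str(uuid.UUID(bytes=padded))

def recover(guid_str: str) -> bytes:
    """Inverse of hide(): pull the payload bytes back out of the GUID."""
    return uuid.UUID(guid_str).bytes.rstrip(b"\x00")

# The rest of the message stays plausible; only this one "random-looking"
# field carries data (field names here are made up for the example).
envelope = {
    "userId": hide(b"HOSTNAME01"),
    "status": "ok",
}
```

To a casual observer, the smuggled field is indistinguishable from any other GUID in the traffic.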


Although I'm not a professional programmer, I do agree with what you said. But the whole scheme also includes execution on other fronts (e.g. how they planted the payload).


You should try software cracking/game hacking. It would throw you into the deep end immediately.


Remember that a group of Minecraft players managed to reverse-engineer the seed of a map from a single low-res screenshot using, among other things, the shape of clouds.

Human ingenuity is really impressive sometimes.


Exactly. Human ingenuity at scale can figure out wild things. I'm reminded of back when I played MMOs. No matter how hard the company tried to 'balance' the characters, it would only take a couple of days before players figured out optimal solutions that the company often didn't take into account.


This only works for casual untrained observers. A proper infosec analyst looking at this would first wonder why there is HTTP traffic related to .NET assemblies at all (this sounds weird). Then comparing two requests or so would likely show a suspicious pattern of all the hex changing randomly while the rest of the payload is cookie cutter the same.

Unless the outer layer is a legitimate service that's actually in use, this kind of thing only fools people who'd dismiss this traffic as "something I don't know about, but which is probably benign". Then there's the question of which IPs this traffic is going to, which would raise more alarm bells.

Cute, but I think you could do better. Hiding from someone looking at your traffic is very hard. The more important part is how well you hide from dumb automated tools that people rely on for initial detection.

Then again, a huge portion of the auditing/"infosec" market nowadays are untrained random people running automated scanners who actually have zero reverse engineering or proper security research experience, so I'm sure it'd work well against those.
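The "compare two requests" heuristic above can be sketched in a few lines: diff two captured payloads, and flag a contiguous high-entropy region that changes while everything around it stays identical. (The request bodies below are hypothetical, invented for the example.)

```python
import math
from collections import Counter

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 = looks fully random)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def varying_offsets(a: bytes, b: bytes):
    """Byte offsets where two same-length captured payloads differ."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

# Two hypothetical captured request bodies: the boilerplate is
# byte-for-byte identical, and only an embedded hex blob changes.
req1 = b'{"id":"d2f1a9c4","status":"ok"}'
req2 = b'{"id":"7be03f5d","status":"ok"}'

diff = varying_offsets(req1, req2)
blob = bytes(req1[i] for i in diff)
# A contiguous, high-entropy differing region amid otherwise static
# boilerplate is exactly the suspicious "cookie cutter" pattern
# described in the comment above.
```

In practice an analyst would run this kind of comparison over many requests, not two, but the signal is the same.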


> A proper infosec analyst looking at this would first wonder why there is HTTP traffic related to .NET assemblies at all (this sounds weird).

This isn't accurate. If you look at the Snort rules used to block it [1], it is masquerading as traffic to .solarwinds.com (i.e., the vendor) with URLs looking like: swip/upd/SolarWinds.CortexPlugin.Components.xml

Unless you knew the software isn't supposed to do that, it isn't suspicious at all.

[1] https://github.com/fireeye/sunburst_countermeasures/blob/mai...


Hitting a vendor web server on plaintext HTTP? That right there is a massive red flag. If this were legitimate traffic, that'd be enough reason to drop the vendor right then and there.

If this were HTTPS then it wouldn't need the obfuscation to pass by undetected. And then there isn't much you could do at that point to find it via traffic analysis, assuming the uncompromised app makes similar HTTPS connections, other than perhaps going deeper into traffic pattern analysis if you're lucky.

Once the threat is identified some other way, it might be possible to develop blocking rules that work at the ciphertext layer (e.g. from packet size patterns exhibited only by the backdoor requests).
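That ciphertext-layer idea can be sketched simply: once an implant's beacon traffic has been characterized, match a flow's initial TLS record sizes against the known pattern within a small tolerance. (The size values and tolerance below are hypothetical; real rules would be derived from observed implant traffic.)

```python
def matches_fingerprint(record_sizes, fingerprint, tolerance=8):
    """Return True if a flow's first TLS record sizes match a known
    backdoor's size pattern, each within +/- tolerance bytes.

    Works on ciphertext alone: record lengths are visible even when
    the payload itself cannot be decrypted.
    """
    if len(record_sizes) < len(fingerprint):
        return False
    return all(abs(s - f) <= tolerance
               for s, f in zip(record_sizes, fingerprint))

# Hypothetical beacon pattern for an implant's first three records.
BACKDOOR_HELLO = [517, 1420, 302]
```

Size-based matching is noisy and easy to evade with padding, which is why it's typically a supplement to, not a replacement for, the domain/IP indicators mentioned earlier in the thread.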


I wouldn't read too much into the missing "S" in the write-up.

It's unclear from that one rule but the majority of the Snort rules for this exploit stop HTTPS. Note the "port 443" in the rules.


And those rules are matching TLS ciphertext (record headers and SNI). The path rule is matching plaintext, so it's obviously not inspecting TLS.


I wish that were the case, but unfortunately some credit card processors still send some credit card processing payloads over http in some circumstances.


Is the actual payload encrypted, but just sent over HTTP? In that case, it’s bad, but not as bad as unencrypted over HTTP.


> This only works for casual untrained observers. A proper infosec analyst looking at this would first wonder why there is HTTP traffic related to .NET assemblies at all (this sounds weird). Then comparing two requests or so would likely show a suspicious pattern of all the hex changing randomly while the rest of the payload is cookie cutter the same.

A proper analyst working in a well-funded clean environment with carefully defined legitimate traffic patterns. That is not a majority of organizations and it would be very easy to miss something like this in the other 99% where they’re understaffed, dealing with the noise of routine malware, and what appears to be a poorly written vendor application doesn’t stand out as much when you have hundreds of them.



