TRIM has been available in FreeBSD for mounted ZFS filesystems for quite a while, since ~Sep 2012 in 10-CURRENT[1]. I believe this is just a user-space command for manually TRIM'ing in cases where there's no explicit filesystem support.
Out of curiosity, does this mean that they’re using (or have used) FreeBSD in their infrastructure or for some other purpose, or just a high level of Unix competence in that team? Any interesting details that you’re able to share?
For nginx, versions 1.16.1+ and 1.17.3+ from upstream fix at least three of the vulnerabilities (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516). However, if you use a version provided by a distribution’s repositories (e.g., the nginx packaged with Ubuntu 18.04 LTS), you’ll need to watch for those security advisories and fixes separately, since backported packages may carry different version numbers.
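For upstream builds, where the version number is meaningful, the check is mechanical; a sketch (the helper name is made up) of how you'd decide whether a given upstream version carries the fixes — and, as noted, this logic is useless for distro packages that backport patches without bumping the version:

```python
# Hypothetical helper: decide whether an *upstream* nginx version
# carries the HTTP/2 DoS fixes (first shipped in 1.16.1 and 1.17.3).
# Distro packages backport fixes without changing the upstream
# version number, so this tells you nothing about them.
def has_http2_dos_fixes(version: str) -> bool:
    major, minor, patch = (int(p) for p in version.split("."))
    if (major, minor) == (1, 16):
        return patch >= 1            # stable branch: fixed in 1.16.1
    if (major, minor) == (1, 17):
        return patch >= 3            # mainline branch: fixed in 1.17.3
    return (major, minor) > (1, 17)  # later branches include the fixes

print(has_http2_dos_fixes("1.16.0"))  # False: vulnerable upstream build
print(has_http2_dos_fixes("1.17.3"))  # True
```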
Of course the people around these parts tend to have very particular needs and use cases, but for anything resembling the "common case" the performance impact of not using sendfile should be negligible.
(I'll just point out that using sendfile means that traffic is unencrypted... which is probably fine on an internal network, but I've started adopting the stance that even internal network traffic should be encrypted unless there's a very good reason not to. An absolute requirement for performance might be a good reason.)
I was going to mention kernel TLS hopefully enabling sendfile for mostly-HTTPS workloads, as that’s the direction everything is heading anyway, and without it we don’t get zero-copy for those connections.
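For reference, here's roughly what that looks like on the nginx side once kTLS is available — this sketch assumes an nginx new enough to support kTLS, built against OpenSSL 3, on a kernel with TLS offload (e.g. FreeBSD 13+ or a recent Linux); the hostname and certificate paths are placeholders:

```nginx
# Sketch only: with kernel TLS, sendfile() can serve encrypted
# responses zero-copy instead of falling back to user-space copies.
http {
    server {
        listen 443 ssl;
        server_name example.com;                   # placeholder
        ssl_certificate     /etc/ssl/example.crt;  # placeholder paths
        ssl_certificate_key /etc/ssl/example.key;
        ssl_conf_command Options KTLS;  # hand record encryption to the kernel
        sendfile on;                    # zero-copy file transmission
    }
}
```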
Now I’m more curious about the actual threshold where not having sendfile begins causing noticeable performance problems… at what point, short of being Netflix, does it start to hurt?
If your cache can face-tank an HTTP DDoS, you don't need fragile fingerprinting techniques to distinguish bad traffic from good, which reduces the user impact (fewer accidentally blocked users). The lower the cost of filling that 100 Gbit NIC with your TLS cache traffic, the more boxes you can afford. Internet exchanges are surprisingly cheap to connect to.
Of course sharing resources between a couple of services would be good, as NICs and switch ports are still a ways from free.
That comment was written in 2007, when the original suggestion would have been somewhat more acceptable. In 2019, please do not use plaintext FTP for anything at all if you can help it, especially for a setup involving personal documents or other data you care about keeping private. Every syncing solution worth anything today, open or proprietary – including Dropbox, Google Drive, iCloud Drive, Microsoft OneDrive, Syncthing, et al. – uses TLS or another strong form of transport encryption, and that should be the absolute minimum bar for anything self-hosted, too.
There are many better ways of building a Dropbox-like system on Linux these days than that advice suggests, including the aforementioned Syncthing[1], but the most direct update to that comment alone would be "getting an SSH account, mounting it locally with sshfs[2], and then using Git on the mounted filesystem".
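Spelled out, that updated advice looks roughly like this — the hostname and paths are made-up examples, not meant to be run verbatim:

```shell
# Illustrative only -- substitute your own host and paths.
mkdir -p ~/remote
sshfs user@example.com:/home/user ~/remote   # mount the SSH account locally
cd ~/remote/documents
git init                                     # version the synced files
git add . && git commit -m "initial import"
# unmount when finished (umount ~/remote on BSD/macOS):
fusermount -u ~/remote
```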
Most Linux images I’ve seen deployed on cloud providers don’t even have swap by default… it’s up to the user to add swap if necessary. I haven’t seen any installers default to encrypted swap, however.
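For completeness, the usual retrofit on Linux is a random-key dm-crypt mapping over the swap device; a config sketch assuming a Debian-style crypttab and a hypothetical /dev/vda2 swap partition (adjust device names for your system):

```
# /etc/crypttab -- swap is re-keyed from /dev/urandom on every boot,
# so hibernation won't work, but swapped pages are unreadable afterward
cryptswap  /dev/vda2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

# /etc/fstab -- point swap at the resulting device-mapper node
/dev/mapper/cryptswap  none  swap  sw  0  0
```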
Keep in mind that this article is specifically discussing defaults, though, not necessarily the overall potential for security hardening. There are certainly some security-related features FreeBSD is missing when compared to other BSDs (OpenBSD) or Linux distributions, but some of what is called out can absolutely be accomplished by system administrators after installation, or as part of image deployment… but it would be better if the defaults evolved to be more secure without extra configuration.
As general-purpose operating systems go, there was another interesting article from earlier this year comparing popular Linux distros, which found that Ubuntu (18.04) had the best overall posture with regard to out-of-the-box use of hardening and mitigation mechanisms compared with then-current versions of CentOS/RHEL, Debian, and openSUSE. Some of this was due to the newer Linux kernel version in use, but also thanks to hardening of binaries, etc.
> Our experiments indicate that Ubuntu 18.04 shows the largest adoption of OS and application-level mitigations, followed by Debian 9.
I think it’s probably because, for server usage, RHEL/CentOS is used significantly more than Fedora (with its shorter supported lifecycle), and Fedora is essentially the upstream for shakeout testing prior to inclusion in RHEL/CentOS, so hardening and security technologies – e.g. SELinux, -fstack-protector, etc. – are very close. RHEL/CentOS 7 was based largely on Fedora 19, and the newly-released RHEL 8 is based largely on Fedora 28.
No they don’t. Certain types of clearances for DoD or the Intelligence Community may require a polygraph along with the background investigation process, but the vast majority of cleared US government or contractor personnel do not have to undergo polygraph testing – they only go through routine reinvestigations at the 5- or 10-year marks following favorable adjudication of their initial clearance.
Ancestor post said "lie detector". A polygraph test would be a separate thing entirely. I'm not entirely certain why you thought to link them.
The "lie detector" in the regular process is the federal investigator checking whether the answers on your questionnaire match up with public records and in-person interviews.
The polygraph is security theater intended to intimidate the subject and possibly reveal previously undisclosed issues by provoking a stress response. That's why they keep using it.
That’s true, although the requirements for different clearance levels are drastically different, and most people don’t refer to the standard background (re-)investigation process as a “lie detector,” even if the investigators are in fact attempting to determine your honesty in addition to evaluating other signals about your behavior and potential ability to be influenced or manipulated.
Most of the time, comments about “lie detectors” are a reference to polygraph tests, which only apply to an extremely small percentage of the overall cleared workforce; I just wanted to point out that it’s not quite as bleak as implied by the parent.
I was tongue-in-cheek trying to break the implied association between polygraph and "lie detector".
The lie detector in a polygraph test is always the human running it, and they're about as fallible and unreliable as anybody else, with respect to determining honesty. They could just chuck the machines in the trash and call it a "veracity interrogation", but selling the machines and training the people to use them is a better money sink, and gives more ass-cover when someone invariably deceives the investigators. "Trained to beat the machine" sounds better on paper than "really good liar".
Security theater needs its props.
As far as I know, only those working in secure compartmentalized facilities and with high-value assets ever get polygraphs.
If you hold a key and wait by a coded terminal in a nuclear missile silo, you get one. If you reduce and analyze anti-ballistic missile test telemetry, you don't. If you write systems code for submarines, you might get one. If you write route-planning software for in-flight refueling tankers, you don't. My guess is that it ultimately depends on how much Country X would probably pay you to borrow or copy your access. If it's above $Y, they do a little more to scare you into being a good little guardian of the nation, and hope you're not another Snowden.
They just have way too much need for cleared personnel to spend enough to actually make certain, for everybody. Doing it correctly always costs more, in time and money. Why do it right when you can make it look like you did it right, and get paid the same?
Pretty sure it's still UFS+J. They've mentioned in the past that the benefits of ZFS don't really apply to the CDN boxes: if a drive dies, it's simply removed from the local cache, and files are re-downloaded/redistributed as necessary across the remaining cache drives the next time the appliance refreshes from its upstream origin. Additionally, from what I remember, ZFS prefetching was actually less performant in some cases -- or at least provided no benefit -- because it couldn't really anticipate the I/O patterns of video-segment requests from users.
The reason Netflix would potentially wait for a proximate AWS datacenter is that all of their apps, backend services, and UIs are served from EC2 instances; all of the actual content delivery is handled by their FreeBSD-based Open Connect appliances. In other words, no, Netflix doesn't put their content on third-party CDNs like CloudFront, Fastly, Limelight Networks, etc., but they absolutely do serve it all from their own, custom-built CDN/hardware.
[1]: https://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/all-ab...