FreeBSD 4.x was considered by many to be the most stable FreeBSD ever.
Mainly because there were 11 point releases of fixes and enhancements (4.11), as opposed to today, where a FreeBSD major version only gets 3-4 point releases.
It was not only the most stable, but for many applications, especially in networking, it was also the fastest single-threaded operating system, easily beating Linux and Windows XP or 2000.
Unfortunately for FreeBSD, the launch of the Intel Pentium 4 with hyper-threading in 2003, then of AMD dual-core CPUs in 2005, quickly made FreeBSD 4 completely obsolete.
The smaller FreeBSD team needed many years to achieve a decent implementation for multi-threaded CPUs, and during that time it remained far behind Linux and other operating systems.
Besides perfect stability (it was normal to not reboot FreeBSD 4 for years) and great networking performance, it also had a much more reliable file system than the competition.
Even though Windows XP used NTFS and Linux had at least 3 journaling file systems at that time, and journaling was supposed to make a file system crash-resistant, I saw many cases back then (around 1999-2003) of file system corruption after power outages, on computers without UPSes that used NTFS or journaling Linux file systems (on Linux with the non-journaling EXT2, any power outage was very likely to require a complete reinstallation).
During the same power outages, the computers with FreeBSD that used the UFS file system with "soft updates" never experienced any file system corruption, even though UFS with "soft updates" was not a journaling file system, but only one where the disk write operations were carefully ordered so as to prevent unrecoverable file system corruption in the case of a crash.
> needed many years to achieve a decent implementation for multi-threaded CPUs, and during that time it remained far behind Linux and other operating systems
Many OSes at the time had hitches with SMP, and BSD was one of them. FreeBSD had SMP support in 4.x, but almost everything in the kernel was single-threaded, and that single-threaded kernel was a major bottleneck.
FreeBSD wasn't alone in this. Linux suffered from a similar problem at the time, also because of the driver architecture. (The infamous "big kernel lock" wasn't fully eliminated until 2011: https://kernelnewbies.org/BigKernelLock)
This is an area where NT, VMS, or Solaris were much better. And yes, in hindsight, the SMP issue does partly explain why both Linux and BSD weren't as attractive for large systems as they otherwise looked.
I somehow have a hard time believing that you can feel the performance of the TCP stack while browsing, of all things, in a VM. Are you sure it can be attributed to that?
Yeah, slow DNS can look like a slow connection, but there are some subtle differences: images/elements will pop in differently than if it's just a slow connection.
Had a wicked issue where a firewall was inspecting DNS requests and causing them to take almost 10 seconds to complete. Was hell on the SIP phones, and browsing the Internet was... Interesting.
No, not really. I have many old Linux VMs too, with different degrees of rot, and FreeBSD has noticeably smaller TCP connection establishment latency. YouTube opens way faster than on both older and newer Linux VMs.
The issue is that you're performing a test that involves a whole bunch of interconnected layers and then singling out one part of that interconnected system as where the benefit lies.
Why not the TLS implementation? The video drivers and their kernel interfaces? The OS' process scheduling? A dozen other things that might be responsible for the perceived performance difference to one degree or another?
It feels like gamers who blame 90% of multiplayer issues on "the netcode".
It'd be different with data looking specifically at TCP connection establishment timing without a bunch of other stuff involved.
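For what it's worth, isolating the handshake isn't hard. Below is a minimal sketch (Python, with a made-up host list; not something anyone in this thread actually ran) that resolves the name once up front so DNS stays out of the measurement, then times only connect():

    # Minimal sketch: time only TCP connection establishment, with DNS
    # resolved separately so a slow resolver can't be mistaken for a slow stack.
    # The host list and sample count are arbitrary placeholders.
    import socket
    import time

    HOSTS = [("www.youtube.com", 443), ("example.com", 443)]
    SAMPLES = 20

    for host, port in HOSTS:
        # Resolve once, outside the timed section (forces IPv4 for simplicity).
        addr = socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)[0][4]
        times = []
        for _ in range(SAMPLES):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            t0 = time.perf_counter()
            s.connect(addr)            # the three-way handshake happens here
            times.append(time.perf_counter() - t0)
            s.close()
        times.sort()
        print(f"{host}: median connect {times[len(times)//2]*1000:.1f} ms")

Run the same script inside the FreeBSD VM and the Linux VMs and you'd at least have numbers for this one layer in isolation.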
I've 100% had issues with UFS; it's a simple filesystem and susceptible to corruption. ZFS, meanwhile, is a different beast altogether, designed to ensure data integrity at all costs.
UFS without soft updates is easily susceptible to corruption.
The "soft updates" option must be chosen when the disk is formatted for good reliability.
As I have said, 20 years ago "soft updates" in UFS worked better than journaling in the other contemporaneous file systems.
Nowadays it is likely that this is no longer true. I am still using FreeBSD on servers, but unlike 20 years ago I can afford UPSes, so I no longer see frequent crashes due to power outages. I did have one incident some time ago, when a battery had not yet been replaced after the UPS warned that it was necessary: when a power outage happened, the UPS lasted less than a minute and power was cut before the system could shut down. Even in this case there was no file system corruption on UFS.
Even with this good recent experience, today I would no longer trust UFS the way I did 20 years ago, because it is said that the current FreeBSD maintainers no longer understand the convoluted code that implements "soft updates" in UFS, and in any case most of their file system maintenance and development work is now directed at ZFS.
Although, according to the link, not because there's anything wrong with it, but because other changes they want to make to the filesystem code would risk breaking soft updates.
> Softdep is a significant impediment to progressing in the vfs layer so we plan to get it out of the way. It is too clever for us to continue maintaining as it is.
UFS on FreeBSD is still maintained by Kirk McKusick, the inventor of soft updates. It's the other BSDs that have trouble, as they can't rely on Kirk's expertise.
ZFS had its own data corruption bug some months ago.
TBH, this was a big minus. You really don't expect such things from a file system that has been in production for years and whose aim is data integrity.
Now, in its defence, it is only used by FreeBSD and some niche Linux distributions, so it does not get much testing.
20 years ago is 2004, not the 90s. I don't know, but it's certainly possible that one of them was in the lead in 1995, a different one in 2005, and a different one in 2015. This is especially plausible because Linux was only created in the early 90s, so those first few years seem likely to have been disproportionately rough.
NTFS might have been rock solid in Windows NT and perhaps also in Windows 2000.
When Windows XP was launched, NTFS was certainly much less reliable. Even without being affected by any crashes or other anomalies, the free space on the NTFS partitions of early Windows XP computers would shrink steadily, without any apparent cause, requiring a reformatting/reinstallation after some time.
Early Windows XP was very buggy. While a computer with Windows XP did not require one or more reboots per day like one with Windows 98, failing to reboot it for more than a few days guaranteed a crash.
Only after several massive service packs were installed in the following years did Windows XP become reasonably stable.
I live in a developing country. We have constant power blackouts, like every day. Anything without journaling cannot survive that treatment. Even the crappiest XP was better than UFS, which absolutely sucked in that respect.
For me, in the 90s ext2 was much, much more reliable than NTFS. After a power failure, ext2 would just run fsck and fix the file system, while NTFS would sometimes give up.
I don't know if it was a file system related thing but you could bet on the Windows registry being borked beyond repair after just a handful of unexpected power cycles.
No. We had an NT4 server, a handful of FreeBSDs and one Linux box. After several blackouts FreeBSD would lose a file or two. The NT4 workstation had no problems either, neither with the registry nor with the FS.
Windows 2000 was stable as a rock, NTFS included. Never had an issue with it.
Windows XP was more stable than 98SE, but it did crash more, because it was a general purpose OS used by people playing games. Still can't recall any major NTFS issues with it.
Windows 2000 with XP drivers (edit the INI file to allow it to install) was the way to go ;)
> Unfortunately for FreeBSD, the launch of the Intel Pentium 4 with hyper-threading in 2003, then of AMD dual-core CPUs in 2005, quickly made FreeBSD 4 completely obsolete.
FreeBSD 4 wasn't too bad on dual-CPU systems, as I recall. Fine-grained locking gets more important as CPU counts grow, of course, but at 2, the single Giant lock isn't unreasonable.
Now I get it. I mixed it up with "Windows-style single-threaded", where, even if you had a multi-processor machine, Windows would run on only one processor.
> Mainly because there were 11 point releases of fixes and enhancements (4.11), as opposed to today, where a FreeBSD major version only gets 3-4 point releases.
Mostly because 5.x took so long to be released.
IIRC, shortly after this experience the FreeBSD folks went from a feature-based release cycle to a time-based release cycle: everyone wanted feature X (and Y, and Z) in the Next Release, and things got pushed and pushed.
So by having a steady cadence, a feature could be integrated regularly into HEAD, and folks didn't have to wait too long before a STABLE release was cut with all the latest and greatest stuff (that couldn't otherwise be backported because of compatibility guarantees).
Same experience that led Java to change from a feature-based release cycle to a time-based release cycle later. Make regular releases, everything that is ready gets included, everything else can go onto the next train. Far better.
Wasn't this the last version before Matthew Dillon forked the project? I remember seeing some public arguments about SMP and a few details of the kernel that resulted in the fork and the new 5.x version.
In the very beginning I think they (DragonFly) used pgbench-style benchmarks as 'the SMP benchmark', as a lot of people were using Postgres and it was really easy to compare QPS and transactions per second; of course it also tests UFS2 vs HAMMER (if I remember correctly).
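That kind of comparison is still trivial to reproduce today. A rough sketch of the idea (psycopg2 and the DSN are my assumptions for illustration; the real tool back then was pgbench itself):

    # Rough sketch of a pgbench-style throughput number: run N tiny
    # transactions against a local PostgreSQL and report transactions/second.
    # The DSN and the trivial workload are placeholders, not a real benchmark.
    import time
    import psycopg2

    N = 5000
    conn = psycopg2.connect("dbname=bench")   # placeholder DSN
    cur = conn.cursor()

    t0 = time.perf_counter()
    for _ in range(N):
        cur.execute("SELECT 1")               # trivial stand-in for real work
        cur.fetchone()
        conn.commit()
    elapsed = time.perf_counter() - t0

    print(f"{N / elapsed:.0f} tps over {elapsed:.1f} s")
    conn.close()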
It was a long time ago, but FreeBSD 5 felt more like a new OS than just a 4.11->5.0 bump, particularly with the removal of the Giant lock and all the witness(4) work. It took a while to figure out how to fine-tune it, as a lot of subsystems were Giant-free but not all, and of course moving from one lock to many small locks means a lot of spinning, so certain workload patterns were slower than before. It took until 7.0 to get amazing, and then in 8 or so I think it was super solid.
> But the reality is, nearly no one even does comprehensive OS benchmarking anymore - so there isn't really a good alternative source to use.
Imho benchmarks only get you so far.
If you are at all serious about performance you should do your own testing and benchmarking. Benchmarks from other people should only help you select candidate platforms on which to run your own benchmarks.
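Even a tiny, purpose-built probe tells you more about your workload on your hardware than a published table. For example, a made-up sketch measuring small synchronous write latency, which is exactly the kind of thing that differs between UFS with soft updates, journaling file systems, and ZFS (file path and sizes are arbitrary):

    # Made-up micro-benchmark: latency of small fsync'd writes.
    # Run it on each candidate system you actually care about.
    import os
    import time

    PATH = "fsync_probe.dat"
    ITERATIONS = 200
    BLOCK = b"x" * 4096

    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644)
    latencies = []
    for _ in range(ITERATIONS):
        t0 = time.perf_counter()
        os.write(fd, BLOCK)
        os.fsync(fd)                      # force the write to stable storage
        latencies.append(time.perf_counter() - t0)
    os.close(fd)
    os.unlink(PATH)

    latencies.sort()
    print(f"median {latencies[len(latencies)//2]*1e3:.2f} ms, "
          f"p99 {latencies[int(len(latencies)*0.99)]*1e3:.2f} ms")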
Yeah a lot has changed since then. We had to keep track of stable branch to get close to Linux desktop features most of the time, sometimes even 'current' branch. Today you can use release for 99.9% of use cases, and upgrading releases minor or major is pretty straightforward and way faster than it used to be.
Likely also by comparison to FreeBSD 5 directly after it, which introduced concurrent kernel entry (instead of a single giant lock) and was considered somewhat buggy for a while. Or so I am told.
5.x was extremely buggy and unstable. It only got usable around the 5.4 release.
5.x wasn't just the removal of the Giant lock, but also:
- GEOM
- Kernel Scheduled Entities
There wasn't any reason to switch from 4.x to 5.x until 5.x got stable. Even with the Giant lock, given the hardware at the time (Pentium 4 with hyper-threading), it wasn't that bad.
That's also the period when Yahoo and other companies had a tremendous investment in FreeBSD.
FreeBSD is great on its own, and that amount of attention also meant every single bit of performance would be extracted no matter what it took, and any scaling or stability issues would be hammered out pretty quickly.
> I think the justifications were better support for running on Linux (storage drivers, Java, MySQL, oracle), better support for virtualization (although bsd jails are better than virtualization in my opinion, and a better fit for Y!), and it would be easier to support one os instead of two and acquisitions (including inktomi) really wanted to run on Linux.
Interesting; from that link I see some replies like this:
> Also, the FreeBSD license was more relaxed and commercial products (like NetAPP) could include and extend FreeBSD without disclosing their modifications.
and then (same comment):
> Our frustration with lack of support for FreeBSD moved us to choose Linux and Windows
I think these two things are strongly correlated: the BSD license allows companies to avoid contributing back improvements, and this prevents the main FreeBSD codebase from getting better.
I think that in the long run this is detrimental to projects.
> Early tests seem to indicate that a crash is indeed present in PS4 up to 11.00 included, and PS5 8.20 included. (Which would put the patch for this issue at firmwares PS5 8.40 and PS4 11.02)
I read the article and found it interesting to be honest (and enjoyed reading people's comments here on their experience with FreeBSD), although it is a rumour and I agree there are good reasons for disallowing/filtering those.
> Mainly because there were 11 point releases of fixes and enhancements (4.11), as opposed to today, where a FreeBSD major version only gets 3-4 point releases.
https://www.freebsd.org/releases/