I built a 16TB raidz home office server this weekend using Ubuntu and ZFS on Linux[0]. It worked great out of the box. I was even able to import a pool created on another server without any problem. Of course, your mileage may vary.
I was using FreeNAS previously (mainly for the ZFS support, to keep my data safe without spending a bunch on raid controllers) and kept getting bogged down by feeling the need to grok jails. I think jails are terrific in theory, but a pain to work with if you're not intimately familiar with them. Maybe it's just the way it works on FreeNAS, but newly created jails (by default on FreeNAS) were each getting a new virtual IP address, which really threw me for a loop. Add to that the frustration of trying to get all the permissions correct just to make a few different services work together, and it started to get really painful.
The drop-dead simplicity of setting up exactly what I had previously on a fresh Ubuntu box with native ZFS port really warmed my cockles.
Think of jails as VMs without the overhead of having the same OS multiple times in memory. One consequence is that the guests can't simply share the host's IP address.
That said, many people work around that by simply binding the jail to an unused loopback address (127.0.0.0/8) and then using a firewall such as pf to redirect specific ports to the given jail, like here http://blog.burghardt.pl/2009/01/multiple-freebsd-jails-shar...
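For reference, the loopback-alias approach described in that link boils down to a pf.conf fragment along these lines (the interface name, loopback alias, and port here are placeholders, not taken from the linked post):

```
# /etc/pf.conf -- sketch only; em0, 127.0.1.2 and port 80 are hypothetical
ext_if = "em0"

# Redirect inbound web traffic arriving on the host's external interface
# to a jail bound to an unused loopback alias
rdr pass on $ext_if inet proto tcp from any to ($ext_if) port 80 -> 127.0.1.2 port 80
```

Each jail gets its own loopback alias, so the host's single external IP can front any number of jails.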
I have been doing lots of research on this recently and here is the main thing that makes ZFS win every time:
When you have a RAID of any kind you need to periodically scrub it, meaning compare data on each drive byte by byte to all other drives (let's assume we are talking just about mirroring). So if you have two drives in an mdadm array and the scrubbing process finds that a block differs from drive A to drive B, and neither drive reports an error, then the scrubber simply takes the block from the highest numbered drive and makes that the correct data, copying it to the other drive. What's worse is that even if you use 3 or more drives, Linux software RAID does the same thing, despite having more info available. On the other hand, ZFS does the scrubbing by checksums, so it knows which drive has the correct copy of the block.
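The difference can be sketched with ordinary files standing in for mirror copies: without a checksum recorded at write time, two diverging copies are just a tie with no way to break it, while a stored checksum identifies the good copy (file names here are made up for illustration):

```shell
# Simulate a 2-way mirror: the same block written to two "drives"
printf 'important data' > copyA
printf 'importent data' > copyB    # drive B silently corrupted a byte

# ZFS-style: a checksum stored at write time identifies the good copy
good_sum=$(printf 'important data' | sha256sum | cut -d' ' -f1)

for f in copyA copyB; do
  if [ "$(sha256sum "$f" | cut -d' ' -f1)" = "$good_sum" ]; then
    echo "$f is the correct copy"   # only copyA passes
  fi
done
```

A checksum-less scrubber sees only that copyA and copyB disagree; it has to pick one by an arbitrary rule, which is exactly the mdadm behavior described above.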
How often does this happen? According to what I have been reading, without ECC RAM and without ZFS, your machines get roughly one corrupt bit per day. In other words, that could be a few corrupt files per week.
My conclusion is that as I am building my NAS, I want ECC RAM and ZFS for things I cannot easily replicate.
Just to make it clear: raid-5/6 mdadm arrays do the right thing when repairing/checking/scrubbing data. They write the correct data if one of the drives has a corrupted block.
> How often does this happen? According to what I have been reading, without ECC RAM and without ZFS, your machines get roughly one corrupt bit per day. In other words, that could be a few corrupt files per week.
This is complete nonsense without more data to back it up.
> Just to make it clear. raid-5/6 mdadm arrays does the right thing when repairing/checking/scrubbing data.
This is inherent to RAID-5/6; it doesn't really have anything to do with mdadm, other than that mdadm implements RAID-5/6. And now you probably have a write hole.
Just to make it clear: on raid 5/6 parity isn't checked on reads, so to get your "right thing when repairing/checking/scrubbing data" you'd have to do a full parity rebuild. This isn't anything like what ZFS does.
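For what it's worth, the md scrub cycle being argued about is driven through sysfs; a typical run (assuming an array at md0, and run as root) looks like this:

```shell
# Ask md to read and compare all copies/parity on md0 (reports only, no rewriting)
echo check > /sys/block/md0/md/sync_action

# After it finishes, see how many sectors disagreed
cat /sys/block/md0/md/mismatch_cnt

# 'repair' rewrites mismatches -- but md stores no checksums, so it can only
# make the copies consistent again, not decide which copy was originally correct
echo repair > /sys/block/md0/md/sync_action
```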
It's really the integrity checking. As you say, mdadm is much better suited if you need to change the geometry of your array, add disks and so on. It's much handier for the smaller business, which can't afford a second set of disks to build a second array on when they want to reshape.
We ran btrfs on top of mdadm, getting both integrity checking and flexibility (although in that setup btrfs can only tell you that something is wrong; it can't repair it, since the redundancy lives below it in mdadm).
There is a great advantage to combining the filesystem with the disk mapper. You don't have to use different commands to add and grow disks and the partitions upon those disks. Your filesystem knows about what it's living on and stores data accordingly. ZFS has more advanced file system properties, like sending snapshots, even of block devices. BTRFS is still working on feature parity with this. ZFS is much more stable than other FS with similar features.
The big disadvantage is the memory and CPU requirement. If your server has plenty of memory and CPU, I'd use ZFS. If you're running on an ARM NAS with 128MB RAM, I'd use something less fancy.
I think the primary advantage is that ZFS collapses all the standard filesystem abstractions. With mdadm or hardware raid you have a raid controller (which could be mdadm), volume manager (i.e. lvm), and filesystem (ext4, xfs, etc). ZFS combines all of that into one. It's really a different philosophy, but means that things like creating a new filesystem is almost instant (and CoW, snapshots, replication are all easy - although perhaps that's possible with the traditional abstractions as well).
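To illustrate the collapsed abstractions (assuming a pool named tank; the dataset and host names are made up), a new filesystem is one instant command rather than a partition-plus-mkfs dance:

```shell
# A new filesystem is just a named entry in the pool -- no partitioning, no mkfs,
# no fixed size; it shares the pool's free space until you set a quota
zfs create tank/projects
zfs set compression=lz4 tank/projects

# Snapshots are copy-on-write and effectively free to take,
# and send/recv replicates them to another pool or machine
zfs snapshot tank/projects@monday
zfs send tank/projects@monday | ssh backuphost zfs recv backup/projects
```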
I do the same and use an HP ProLiant microserver (it was dirt cheap). ZFSOnLinux just worked out of the box and has kept working ever since.
Word to the wise: Read about the block size of your disks, mine are newer and needed a block size different than the default but I didn't know about it, and now they are slower than they could be. I don't remember details, but you will definitely find it in a cursory search.
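The detail being half-remembered here is almost certainly ashift: many newer drives have 4K physical sectors but advertise 512-byte logical sectors, and ZFS fixes its sector alignment (ashift) at vdev creation and cannot change it afterwards. A sketch, with pool and device names hypothetical:

```shell
# Force 4K alignment (ashift=12, i.e. 2^12-byte sectors) at pool creation;
# letting a 4K drive default to ashift=9 (512B) causes read-modify-write slowdowns
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Check what an existing pool was created with
zdb -C tank | grep ashift
```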
Same as StavrosK -- HP Micro Server. The 16TB box is an N40L. I bought it awhile ago, but haven't needed extra storage until recently.
I also have an N36L running FreeNAS/ZFS on 4x2TB disks which I've used for about 1.5 years. It finally filled up (SO has data intensive profession) so I was forced to pony up. It's still being used every day and has no signs of failing.
They're both great home servers. I sometimes wish they had more CPU, but for the money, HP is basically giving them away.
As a side-side note, the HP Microservers (N54L[]) are currently under cashback promotion afaik in UK ('til end of sep) and Italy ('til end of oct).
In Italy it's typically 199€, with a 40€ cashback (so 159€); in the UK I think it's 190£ with 90£ cashback, so around 100£ final. I don't know if there's something similar in other countries.
[] there are 3 HP microserver versions afaik, the old N40L, the N54L and the new just released Gen8 that is way more expensive (500$ vs 200$)
As another point of reference, I have a ProLiant Microserver running FreeBSD with 4x3TB drives in RAID-Z. I installed a custom BIOS that removes the limitations on the optical SATA port and run a small single drive for the OS off that.
It's a dream to use and gives me 8TB of usable storage. I can easily hit saturation of the NIC and it's nice and small and quiet, fitting in the bottom corner of my bookshelf.
[0] http://zfsonlinux.org/