Well, if it's any comfort, it's usually not this bad. I usually only come across this on Hacker News, which is why I have to limit my time here for my own mental health. Some people here are so dogmatic about capitalism that they become indistinguishable from sociopaths.
It's why I call this[1] "The Hacker News Trolley Problem".
Filippo should get to work! The design part of age is the hard part; the actual programming is, I think, maybe one of the easier problems in cryptography (encrypting a single file with modern primitives).
I'm giving Filippo a little bit of shit here, but one concern I have about talking "age" up is that I'm at the same time talking up the problem of encrypting a file more than it deserves, so that people get the impression we'd have to wait, like, 5 years to finally see something do what operating systems should have been doing themselves all this time.
I built a very similar home NAS with the newer RockPro64 and a pair of 4TB HDDs in RAID 1, all on top of Debian[0]. I found OpenMediaVault to be overkill, and it kept me from really understanding what was going on. Plus there are a million guides to setting up SMB, rsync, Borg, etc.
The RockPro64 hardware is great. Very performant, especially when using the PCIe to SATA card instead of USB like the OP did.
I asked this on another thread, but I don't understand why you would see a real-world difference between PCIe and USB 3.
When using this setup as a NAS, isn't your bottleneck the gigabit network between machines? You'll saturate the 1 Gbit/s link (or even 2 Gbit/s full duplex) long before you reach 5 Gbit/s (USB 3) or 6 Gbit/s (SATA III behind the PCIe card).
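A quick back-of-envelope conversion makes the point (these are raw line-rate ceilings, ignoring protocol overhead, so real throughput is lower on all three):

```shell
# Link/bus ceilings converted from Gbit/s to MB/s (decimal, no overhead)
echo "GbE:   $(( 1 * 1000 / 8 )) MB/s"   # 1 Gbit/s Ethernet  -> 125 MB/s
echo "USB3:  $(( 5 * 1000 / 8 )) MB/s"   # USB 3.0, 5 Gbit/s  -> 625 MB/s
echo "SATA:  $(( 6 * 1000 / 8 )) MB/s"   # SATA III, 6 Gbit/s -> 750 MB/s
```

Even a single modern HDD (~150–200 MB/s sequential) can outrun the 125 MB/s network ceiling, so either bus leaves plenty of headroom for NAS use.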
RAID seems very popular for 2-disk setups, but I don't like the cost/benefit. With fewer than about 5 disks you aren't getting much speedup, and with fewer than 4 disks you can't do a proper fail-out and rebuild of a larger multi-disk volume.
I use 2 x 4TB disks in my home server, but I keep one online and have a script that brings up the other one periodically and rsyncs everything. This gives me a local backup, something RAID lacks, so I'm protected from fat-fingering or accidentally deleting stuff. I also have very minimal downtime, because I can mount that drive in place of the primary drive in just a few seconds.
I run xfs on the primary drive, and btrfs on the mirror, so I can take snapshots after I rsync and maintain differentials easily.
My point is, you should consider getting rid of RAID and just using a bare drive or an LVM volume.
Agreed, RAID makes little sense for quiet home storage that is used only occasionally; the additional disk is better used for regular backups. If you need to minimize downtime and have the storage accessible 24/7, then RAID makes sense.
FWIW, I run a 10 x 4TB raidz2 NAS with 4 GB of RAM, using ZFS on Linux.
ZFS on Linux isn't as memory-efficient as it is on FreeBSD, whose file system cache / memory manager architecture is much closer to Solaris's, but it works fine.
ZFS memory consumption really rockets when you want to use dedupe, which basically wants to store a hash of every block in memory. The sweet spot for dedupe is things like multiple VM images, where there's a lot of duplication inside large files that are otherwise different. But there are often ways to structure things to gain back the same space, e.g. with stackable file systems.
Without dedupe, and for a NAS scenario where there isn't going to be a massive working set (typically a source or sink for backups, streaming video, etc.), 4 GB has been more than sufficient for me for years.
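The scaling is easy to sanity-check with the commonly quoted rule of thumb (an assumption here, not an exact figure) of roughly 320 bytes of core per unique block in the dedup table:

```shell
# Back-of-envelope dedup table (DDT) RAM for a 10 x 4 TB pool, worst
# case all-unique data, at the default 128 KiB ZFS recordsize.
data_bytes=40000000000000               # 40 TB of data
blocks=$(( data_bytes / 131072 ))       # 128 KiB records
ddt_bytes=$(( blocks * 320 ))           # ~320 B of core per DDT entry
echo "$(( ddt_bytes / 1073741824 )) GiB for the dedup table alone"
```

That works out to roughly 90 GiB of RAM just for the dedup table in the worst case, which is why 4 GB is fine without dedupe but hopeless with it.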
Have you considered Btrfs? I'm using it on my primary OS drive and my backup data drive. I keep my primary data on xfs to give me some cross-platform resiliency. Btrfs lets me take snapshots and uses space very efficiently. I haven't tried the deduplication features.
Skipfish and wapiti are quite a bit different from this. They are simply black-box scanners that attempt to crawl a web application and look for common issues, mostly by trial and error. This tool examines the source code of an application, finds all the sinks, builds a graph, and works backwards through the graph to find sources that can pass input to the vulnerable sinks.