As someone working on an NTP implementation (specifically ntpd-rs) I have to add some context to this: I do believe that donating to the Network Time Foundation is fine, but it is not required to keep the Network Time Protocol up in any way.
Firstly, the most important reason the ntp.org domain name is so well known is the NTP pool, which is an entirely separate project (the Network Time Foundation calls it an associated project). The pool was allowed to use the `pool.ntp.org` domain name, but as far as I understand does not directly receive significant funding from the Network Time Foundation (I do not know the details of the domain name arrangement). The pool project was developed independently of the Network Time Foundation and is run by a different group of volunteers: it is mostly developed and maintained by Ask Bjørn Hansen, and its servers are hosted entirely by (sometimes professional) volunteer operators. This is what many NTP implementations, and specifically many Linux distributions, use as their standard source of time. But it does not appear to depend much on the Network Time Foundation for its continued existence.
Secondly, despite all the claims made on the Network Time Foundation site, the IETF took over development and maintenance of the NTP protocol roughly two decades ago under its NTP working group. This was done with the Network Time Foundation fully agreeing it was the way forward. But for some reason they still consider themselves exempt from any process the IETF uses and see themselves as the true developers of the protocol. They constantly frustrate the IETF's processes, claiming that they should receive special treatment as the 'reference implementation'. Meanwhile, the IETF NTP WG has no concept of a reference implementation at all, instead considering all NTP implementations equal.
Aside from this frustrating stance, the Network Time Foundation has also done little to advance the standard, instead relying on the status quo from the late 90s and early 2000s. Meanwhile the IETF NTP WG worked on standardizing a way to secure NTP traffic (regular NTP traffic is relatively easy to man-in-the-middle, and older implementations were even so predictable that faking responses didn't require reading the requests at all). That much more secure standard, NTS, was fully standardized in September of 2020, but the Network Time Foundation still has not implemented it. All of this has resulted in almost every Linux distribution that I know of replacing their ntpd implementation with NTPsec (with ntpd often no longer even available as an installable alternative).
Meanwhile people also started working on NTPv5, in order to remove some of the unsafe and badly defined parts of the standard and in general bring the spec back up to date. As part of this process, it was decided some time ago that, in contrast to previous NTP standards, the algorithms specifying what a client should do to synchronize its clock would be removed from the standard (the algorithms specified in the previous standards were not being used by any implementation, not even the Network Time Foundation's own ntpd). NTPv5 instead focuses on the wire format of NTP packets and the simple interactions between parties. Yet despite a consensus call on this, and despite no current implementation following the exact algorithms specified in NTPv4, the Network Time Foundation continues to frustrate the process by claiming that these algorithms are an essential part of the standard.
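To give a sense of what "wire format" means here: in NTPv4 (RFC 5905) the first header byte packs the leap indicator, version, and mode fields. A minimal decoding sketch (illustrative only, not code from any real implementation):

```python
def decode_ntp_flags(first_byte: int) -> dict:
    """Decode the first byte of an NTPv4 packet header (RFC 5905):
    bits 7-6: leap indicator (LI), bits 5-3: version (VN), bits 2-0: mode."""
    return {
        "leap": (first_byte >> 6) & 0b11,
        "version": (first_byte >> 3) & 0b111,
        "mode": first_byte & 0b111,
    }

# 0x23 = 0b00100011: leap 0, version 4, mode 3 (a client request)
flags = decode_ntp_flags(0x23)
```

Specifying this kind of packet layout precisely, rather than prescribing the clock-discipline algorithms behind it, is where the NTPv5 effort is focusing.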
All of this frustration was also a large part of why the PTP protocol was eventually developed at the IEEE. That is to say: even though the operating mode of PTP is often quite different from that of NTP these days, the information that needs to be transferred is essentially the same, and the packets could trivially have been defined to be the same had NTP built in a bit of additional flexibility a little earlier. This would have helped NTP in the end as well (hardware timestamping, for example, is currently only implemented for PTP, even though it could have been just as useful for NTP), and PTP is now even aiming to introduce a simpler client-server model via CSPTP that looks a whole lot like what NTP has been trying to achieve all this time with its most used operating mode.
It is my belief that the Network Time Foundation continues to paint itself into a corner of ever greater irrelevance, even though that did not need to happen. The historical significance of David Mills' ntpd implementation is definitely there, and we should applaud the initial efforts and the focus on keeping the protocol open and widely available. And I do believe that the current people at the Network Time Foundation could still provide more than enough valuable input to the standardization process, but they can no longer claim to be the sole developers of the NTP protocol. Times have changed, and there are now multiple implementations with an equally valid claim. Especially with GNSS (specifically GPS) coming under attack more and more these days, we need alternative ways of synchronizing computer clocks to a standard time in a secure way. NTP and NTS are perfectly positioned to take on that task, and we need to make sure that we keep the standard up to date for our evolving world.
Edit: if you want something else to donate to, I would consider donating to the IETF, NTPsec, or maybe donating some time to the NTP pool. I would also link to donations for Chrony (one of the other major NTP server implementations) but they do not appear to offer anything. Linking to my own project's donation page does not seem fair considering the contents of this post.
The NTP pool is actually independently run and funded and has nothing to do with the ntpd implementation or the Network Time Foundation, other than the foundation allowing the pool to use that DNS name.
The features we specifically don't support are those related to direct LDAP support within sudo, such as loading a sudoers file directly from LDAP. Sudo-rs will use any user retrieved via NSS, for example when SSSD is configured to load LDAP users. And on the authentication side you can use whatever PAM supports, so anything like Kerberos etc., which again can be coupled with the same LDAP database.
I would argue compile time changes don't matter much, as the amount of data going through zlib all across the world is so large that any performance gain should more than compensate for any additional compilation time (and zlib-rs compiles in a couple of seconds anyway on my laptop).
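A back-of-envelope sketch of that trade-off (all numbers here are illustrative assumptions, not measurements of zlib-rs):

```python
# Hypothetical numbers: a one-time compile cost vs. an aggregate runtime win.
compile_seconds = 10.0             # assumed extra one-time cost per build
speedup = 0.05                     # assume a 5% faster (de)compression path
seconds_compressing_daily = 1e6    # assumed aggregate CPU-seconds/day worldwide

# Even a modest speedup dwarfs the compile cost when the workload is global.
saved_per_day = seconds_compressing_daily * speedup
```

With these made-up inputs, 50,000 CPU-seconds would be saved per day against a 10-second compile; the exact figures don't matter, only that the runtime side of the ledger scales with global usage while the compile side doesn't.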
As for dependencies: zlib, zlib-ng and zlib-rs all obviously need some access to OS APIs for filesystem access if compiled with that functionality. At least for zlib-rs: if you provide an allocator and don't need any of the file IO, you can compile it without any dependencies (not even the standard library or libc; just a couple of core types are needed). zlib-rs does have some testing dependencies, but I think that is fair. All in all, they use almost exactly the same external dependencies (i.e. nothing aside from libc-like functionality).
zlib-rs is a bit bigger by default (around 400KB), due to some of the Rust machinery. But if you change some of that (e.g. panic=abort), use a nightly compiler (unfortunately still needed for the right flags) and add those flags, both libraries are virtually the same size, with zlib at about 119KB and zlib-rs at about 118KB.
I also agree that there is too much churn in the Rust ecosystem and that we should try to slow things down in the coming years. ntpd-rs also does this: our MSRV is 1.70 right now (released over a year ago) and we test our code on CI against this version (as well as the current stable release). And we go a little further: using the `direct-minimal-versions` flag (nightly-only right now, unfortunately) we downgrade our dependencies to the minimal versions we've specified in our `Cargo.toml` and test against those, as well as against the latest dependencies specified in `Cargo.lock`, which we update regularly. This allows us to at least partially verify that we still work with old versions of our dependencies, allowing our upstream packagers to more easily match their packages against ours. Of course we should all update to newer versions whenever possible, but sometimes that is hard to do (especially for package maintainers in distributions such as Fedora and Debian, who have to juggle so many packages at the same time) and we shouldn't create unnecessary work when it's not needed. Hopefully this is our way of helping the ecosystem slow down a little and focus more on security and functionality, and less on redoing the same thing all over again every year because of some shiny new feature.
I'm afraid this is a pretty common sentiment. NTS has been out for several years already and is supported by several implementations (including our ntpd-rs, as well as chrony and ntpsec). Yet its usage is low, and meanwhile fully unsecured, easily spoofable NTP remains the default, in effect allowing anyone to manipulate your clock almost trivially (see our blog post about this: https://tweedegolf.nl/en/blog/121/hacking-time). Hopefully we can get NTS to the masses more quickly in the coming years and slowly start to decrease our dependency on unsigned NTP traffic, just as we did with unencrypted HTTP traffic.
Our project also includes a PTP implementation, statime (https://github.com/pendulum-project/statime/), which includes a Linux daemon. Our implementation should work as well as or even better than linuxptp, but it's still early days. One thing to note though is that NTP can be made just as precise (if not more precise), given the right access to hardware (unfortunately most hardware that does timestamping only does so for PTP packets). The reason for this precision is simple: NTP can use multiple sources of time, whereas PTP by design uses only a single source. This gives NTP more information about the current time and thus allows it to estimate it more precisely. The thing with relying purely on GNSS is that those signals can be (and in practice are) disrupted relatively easily. This is why time synchronization over the internet makes sense, even for large data centers. And doing secure time synchronization over the internet is only practically possible using NTP/NTS at this time. But there is no one-size-fits-all solution for time synchronization in general.
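The multi-source advantage can be illustrated with inverse-variance weighting, one simple way to fuse offset estimates from several servers (a textbook sketch, not necessarily the algorithm any particular daemon uses):

```python
def combine_offsets(measurements):
    """Fuse (offset, uncertainty) pairs from several time sources using
    inverse-variance weighting. The fused estimate has a lower uncertainty
    than any single source, which is why more sources help."""
    weights = [1.0 / (u ** 2) for _, u in measurements]
    total = sum(weights)
    offset = sum(w * o for w, (o, _) in zip(weights, measurements)) / total
    uncertainty = (1.0 / total) ** 0.5
    return offset, uncertainty

# Three hypothetical sources: clock offsets in ms with their uncertainties.
est, unc = combine_offsets([(1.2, 0.5), (0.8, 0.5), (1.0, 1.0)])
```

Here the combined uncertainty (about 0.33 ms) is smaller than that of the best individual source (0.5 ms); a single-source protocol has no way to get below its one source's noise floor.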
I do think that memory safety is important for any network service. The probability of something going horribly wrong when a network packet is parsed the wrong way is just too high. And NTP typically has more access to the host OS than other daemons, since it needs to adjust the system clock.
Of course, there are many other services that could be made memory safe, and maybe there is some sort of right or smart order in which we should make our core network infrastructure memory safe. But everyone has their own priorities here, and I feel like this could end up being an endless debate of whataboutism. There is no right place to start, other than to just start.
Aside from memory safety though, I feel like our implementation has a strong focus on security in general. We try to make choices that make our implementation more robust than what was out there previously. Aside from that, I think the NTP space has had an undersupply of implementations, with only a few major open source ones (like ntpd, ntpsec and chrony). Meanwhile, NTP is one of those pieces of technology at the core of many of the things we do on the modern internet. Knowing the current time is something you simply need in order to trust much of what we take for granted (without knowledge of the current time, your TLS connection could never be trusted). I think NTP definitely deserves, and could use, a lot more attention.
I agree that amplification and reflection definitely are worries, which is why we are working towards NTS becoming a default on the internet. NTS would prevent responses by a server from a spoofed packet and at the same time would make sure that NTP clients can finally start trusting their time instead of hoping that there are no malicious actors anywhere near them. You can read about it on our blog as well: https://tweedegolf.nl/en/blog/122/a-safe-internet-requires-s...
One thing to note about amplification: it has always been something NTP developers are especially sensitive to. I would say though that protocols like QUIC and DNS have far greater amplification risks. Meanwhile, our server implementation ensures that responses are never bigger than the requests that initiated them, meaning that no amplification is possible at all. Even if we had allowed bigger responses, I cannot imagine NTP responses being much bigger than two or three times their related request. Meanwhile I've seen numbers for DNS all the way up to 180 times the request payload.
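That server-side invariant can be sketched in a few lines (a hypothetical helper, not the actual ntpd-rs code):

```python
def bounded_response(request: bytes, response: bytes):
    """Enforce the no-amplification invariant: never send a response larger
    than the request that triggered it, so the amplification factor from a
    spoofed source address can never exceed 1. Drop rather than amplify."""
    if len(response) > len(request):
        return None
    return response

# A same-sized reply passes; an oversized one is dropped.
ok = bounded_response(b"\x23" + b"\x00" * 47, b"\x24" + b"\x00" * 47)
dropped = bounded_response(b"\x23" + b"\x00" * 47, b"\x00" * 200)
```

The interesting design question is what to do when a legitimate response would be larger (e.g. many extension fields): under this invariant the server must omit or truncate optional data rather than exceed the request size.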
As for your worries: I think being a little cautious keeps you alert and can prevent mistakes, but I also feel that we've gone out of our way not to do anything crazy, and hopefully we will be a net positive in the end. I hope you do give us a try and let us know if you find anything suspicious. If you have any feedback we'd love to hear it!
> I cannot imagine NTP responses being much bigger than two or three times their related request.
I think you must be limiting your imagination to ntp requests related to setting the time. There are a lot of other commands in the protocol used for management and metrics. The `monlist` command was good for 200x amplification.
https://blog.cloudflare.com/understanding-and-mitigating-ntp...
Ah right! I always forget about that since we don't implement the management protocol in ntpd-rs. I think it's insane that that kind of traffic goes over the same socket as the normal time messages. It's something I don't ever see us implementing.
I don’t think our dependency tree is perfect, but I think our dependencies are reasonable overall. We use JSON for transferring metrics data from our NTP daemon to our Prometheus metrics daemon. We made this split for security reasons: why have all the attack surface of an HTTP server in your NTP daemon? That didn’t make sense to us. So we added a read-only Unix socket to our NTP daemon that, on connection, dumps a JSON blob and then closes the connection (i.e. doing as little as possible), which is then used by our client tool and by our Prometheus metrics daemon. That data transfer uses JSON, but could have used any data format. We’d be happy to accept pull requests to replace it with something else, but given budget and time constraints, I think what we came up with is pretty reasonable.
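The shape of that design can be sketched as follows (a toy model of the pattern, not the ntpd-rs code; the socket path and metric names are made up):

```python
import json
import os
import socket
import tempfile
import threading

# Hypothetical path for the observability socket.
SOCK_PATH = os.path.join(tempfile.mkdtemp(), "observe.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(1)

def handle_one(snapshot: dict) -> None:
    # Server side does as little as possible: accept one connection,
    # write a single JSON blob, close immediately.
    conn, _ = srv.accept()
    conn.sendall(json.dumps(snapshot).encode())
    conn.close()

t = threading.Thread(target=handle_one, args=({"uptime_s": 42},))
t.start()

# Client side (e.g. a metrics exporter): connect, read until EOF, parse.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
data = b""
while chunk := cli.recv(4096):
    data += chunk
cli.close()
t.join()
srv.close()

metrics = json.loads(data)
```

The point of the pattern is that the daemon never parses anything from the connection: it only writes, so a malicious client gets no input surface beyond the connect itself.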
Probably, but we would still need to parse that string on the client side as well. If you’re willing to do the work I’m sure we would accept a pull request for it! There are just so many things to do in so little time, unfortunately. I think reducing our dependencies is a good thing, but our JSON parsing/writing dependencies are so commonly used in Rust, and the way we use them hopefully prevents any major security issues, that I don’t think this should be a high priority for us right now compared to the many other things we could be doing.