Truth is, we don't know that the subdomain got leaked. The example user agent they give says that the methodology they're using is to scan the IPv4 space, which is a great example of why security through obscurity doesn't work here: the IPv4 space is tiny and trivial to scan. If your server has an IPv4 address it's not obscure; you should assume it's publicly reachable and plan accordingly.
> Subdomains can be passwords and a well crafted subdomain should not leak, if it leaks there is a reason.
The problem with this theory is that DNS was never designed to be secret or private, and even with DNS over HTTPS it is still not private from the servers answering the queries. That means getting to "well crafted" is an incredibly difficult task with hundreds of possible failure modes needing constant maintenance and attention: it isn't just complicated to get right the first time; you have to reconfigure away the failure modes on every device, or even on every use of the "password".
Here are just a few failure modes I can think of off the top of my head. Yes, these have mitigations, but it's a game of whack-a-mole and you really don't want to try it:
* Certificate transparency logs, as mentioned.
* A user of your "password" forgets that they didn't configure DNS over HTTPS on a new device and leaves a trail of logs through a dozen recursive DNS servers and ISPs.
* A user has DNS over HTTPS but doesn't point it at a server within your control. One foreign server having the password is better than dozens and their ISPs, but you don't have any control over that default DNS server nor how many different servers your clients will attempt to use.
* Browser history.
Just don't. Work with the grain, assume the subdomain is public and secure your site accordingly.
Something many people don't expect is that much of the IPv6 space is also tiny and trivial to scan, if allocations follow certain patterns.
For example, many server hosts give you a /48 or /64 subnet, and your server is at prefix::1 by default. If the host owns a /24 and delegates /48s, someone only has to scan 2^24 candidate addresses at that host to find every server sitting at prefix::1.
Assuming everyone is using a /48 and binding to prefix::1, that's only a 2^16 difference from scanning the IPv4 address space. Assuming a specific host with a single IPv6 /24 block delegating /64s, the difference is 2^8 (2^40 candidates vs. 2^32). Scanning for every /64 across the entire IPv6 space, on the other hand, is definitely not as tiny.
AWS only allows routing a /80 to EC2 instances, which makes a huge difference.
That doesn't mean we should rely on obscurity, but the entire IPv6 space is not as tiny as IPv4's.
The IPv6 address space may be trivial to scan from this perspective, but imagine trying to establish two-way contact with a user on a smartphone on a mobile network, or with a user whose Interface ID (the lower 64 bits) is regenerated randomly every few hours.
Just try leaving a message on the User Talk page of such an IPv6 editor on Wikipedia: good luck getting the editor to even notice it, or anyone finding that talk page again, and that's before the MediaWiki privacy measures are implemented.