I remember reading about full IPv4-space scans showing some fairly massive blocks that were allocated but unused. As I recall it was well over 10% of the overall space that could theoretically be reclaimed and redistributed with a hand-wave.
Anything that gets IPv6 adoption to mainstream is a Good Thing, but more likely we'll just start seeing the $1/mo/IP become $2/mo/IP, and upward... The squeeze will just continue as people make more money off of it, and we'll still need IPv4 addresses for compatibility with people running Windows XP in 2020.
>I remember reading about full IPv4-space scans showing some fairly massive blocks that were allocated but unused.
Just because an IP doesn't answer - especially to probes coming from the general internet - doesn't mean it's not in use. This should be obvious to anyone with even the tiniest amount of understanding of TCP/IP networking and IT in general. Hence the authors of such a 'scan' aren't that credible to me. Unless 'scan' means something totally different here.
If a machine doesn't answer to the Internet, it has no need of a public IPv4 address. A machine that only answers to other machines in the same organization should be assigned an address in the 10.0.0.0/8 range. If the organization is not large, even 192.168.0.0/16 would do. In fact, many people would argue that assigning public IP addresses to intranet resources is very bad for security.
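For what it's worth, the RFC 1918 private ranges are easy to check programmatically; a quick sketch with Python's standard `ipaddress` module (the sample addresses are arbitrary):

```python
import ipaddress

# RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
# Note is_private also covers a few other special-purpose blocks
# (loopback, link-local), so it's a slightly broader check.
for ip in ["10.1.2.3", "192.168.0.10", "17.0.0.1"]:
    addr = ipaddress.ip_address(ip)
    print(ip, "->", "private" if addr.is_private else "public")
```

Running this marks the first two as private and 17.0.0.1 (Apple's block) as public.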
Of course, some of these machines actually might have valid justification for squatting on a public IPv4 address. Perhaps their firewalls are configured to drop all packets except those from a handful of "trusted" IP addresses, so a random scanner on the Internet gets no response.
But I doubt that such cases account for the majority of "seemingly unused" IPv4 blocks. What's more likely is that some large organization was assigned a massive block of IPs 20-30 years ago and never found much use for them. IBM owns 9/8. Xerox owns 13/8. HP owns 15/8. Apple owns 17/8. Ford owns 19/8. Several pharmaceutical and chemical companies also own an /8 each, as do some universities. Do they really need 16.7 million public IPv4 addresses? Of course not. But I wouldn't be surprised if they started to sell bits and pieces of their blocks once the price per IP goes up enough.
> A machine that only answers to other machines in the same organization should be assigned an address in the 10.0.0.0/8 range. If the organization is not large, even 192.168.0.0/16 would do. In fact, many people would argue that assigning public IP addresses to intranet resources is very bad for security.
That's a horrible hack and those people are wrong. Separate your concerns; addresses for addressing, firewalls for firewalling. Using private addresses adds extra complications; what if someone's home network uses the same range and they want to connect to your VPN? What if you merge with another company that's using the same range? What if you want to use FTP or SIP or any other protocol that uses the internet the way it was intended to connect to a server in a different office, are your packets going to make it through or not? You'll observe that private addresses have been deliberately left out of IPv6, for good reason.
B) I agree there are inconveniences and complications with using private addresses in IPv4. But it seems to be a necessary thrift in the IPv4 world of rapidly expiring address space; using publicly routable IPv4 addresses for machines which do not communicate with the public internet is perhaps a luxury we cannot afford, even in cases where doing otherwise is inconvenient or complicated.
These /8s have historically been used internally; there's no reason they should be handed back. Handing them back would just prolong the migration to IPv6 for limited benefit.
I imagine the only reason you would use an IPv6 private address is if you didn't have allocated global ones. It's just replacing no chance of collision with some chance of collision.
You are defending the broken windows fallacy. Wikipedia can explain it better than I can, but the TL;DR is simply: you are defending useless destruction of value, and the end result is that everyone's poorer, nothing else.
No, just the opposite. There would be real costs for those companies that have /8s to move away from them. Meanwhile the gains would be minimal - at best it might allow some organizations to put off moving to IPv6 for a few months.
> The squeeze will just continue as people make more money off of it, and we'll still need IPv4 addresses for compatibility with people running Windows XP in 2020.
Why? A router could still offer private IPv4 and encapsulate IPv4 packets in IPv6. The carrier can decapsulate IPv6 packets and perform NAT.
In fact, this is how my home cable connection works (per DS-Lite [1]). Our modem/router only gets an IPv6 address, but IPv4-only devices work fine.
I'm not sure if the OP referenced Win XP purely based on the IPv4->IPv6 discussion but Win XP is also extremely relevant for IP address space consumption due to not supporting TLS extension SNI [1].
SNI removes the requirement for HTTPS websites to be hosted one domain per IP. This is done by including the hostname in the initial TLS handshake.
Many websites, or at least most (hopefully all) webapps, serve over HTTPS these days, at least for anything that requires user input like a login. So without SNI there's a need for at least one public IP per site.
With SNI these could all be served from the same IP.
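You can see the client side of this with Python's `ssl` module - the `server_hostname` argument is what ends up in the ClientHello's SNI extension (a minimal sketch; example.com is just a placeholder hostname):

```python
import socket
import ssl

ctx = ssl.create_default_context()

# server_hostname is recorded on the wrapped socket and sent in the
# ClientHello's SNI extension once the handshake runs, letting a single
# IP address serve certificates for many different hostnames.
sock = ctx.wrap_socket(socket.socket(socket.AF_INET),
                       server_hostname="example.com")
print(sock.server_hostname)  # "example.com"
sock.close()
```

No connection is made here; the handshake (and hence the actual SNI transmission) only happens once the socket connects.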
Now imagine CDN services like AWS CloudFront -- to support sites with SSL certificates you must use one IP per cert in EACH region or whatever the distribution granularity is.
Now this, my friends, is why AWS CloudFront charges $600/mo for each custom SSL cert domain, while SNI-based custom SSL costs $0/mo.
As soon as 99.9% of browsers/clients support SNI, we'll be living in a better place with more free IPs. Then we can finally distribute static content from a CDN with a custom SSL cert without the $600/mo price tag.
It's naive to think everyone would move to SNI-based hosts with IPs shared with strangers, but at least within the same datacenter, for the same company, IPs can be more easily conserved.
Yea, I should have been more specific. Chrome and Firefox DO ship their own TLS stacks with SNI, and SNI is enabled for them on Win XP. The issue is relevant for all IE versions on Win XP.
10% wouldn't last us very long - that's less than two years at current growth rates, and growth is still accelerating. At some point the cost is higher than just getting on with switching to v6.
Yeah, IPv4 is pretty much empty. Lots of companies own a /8 all to themselves, like Xerox (13.0.0.0/8), Apple (17.0.0.0/8), USPS (56.0.0.0/8) and Ford (19.0.0.0/8) to name a few. None of them allocate even the tiniest portion of them.
There are usage requirements for IP addresses now. I have to substantiate the allocation.
I think if ARIN wanted to, they could give everyone a year to "substantiate their allocation" and set the policy something like "companies must return for reallocation any overallocations."
As for the risk of not returning an overallocation - well, I'm not sure. ARIN certainly has teeth, and companies should simply be expected to correct these huge overallocations.
Just like the open source community comes together to solve serious problems, if we as a community enforced an ethical standard and some key people stood up and raised this as an issue, I'm willing to bet ARIN could replenish a stockpile of IPv4 space.
So the question I'm asking is: since ARIN is empty, clearly they aren't interested in keeping a stockpile of addresses. Why not? I guess the more generous alternative is simply that they have failed spectacularly at their stated goal.
I admit that I am not familiar with ARIN, but if it is anything like APNIC (Asia Pacific), then those big Class A allocations are protected as legacy allocations. As such, the holders are not required to give the legacy allocations up, and it's questionable whether the RIR could even revoke them.
ARIN can go over the pool of post 1997 addresses they have allocated, but I think you would find much smaller unallocated blocks.
Take back those four and you can delay the v4 armageddon by about three months. Longer if you still apply the extra-strict rules used during the armageddon runup.
When the three months are up you can go looking for four more companies, and hire some more lawyers too.
There is no requirement for (historical) IP addresses allocations to be publicly routable. Unfortunately many companies own large allocations that they are quite rightly able to use for internal allocations.
The only way they are going to give them up is if it is worth their while financially.
You're right, the "Windows XP User on IPv4" is just an allegory of the critical mass of users that will keep us from dropping our IPv4 addresses many years into the future.
Put another way, what is the incremental value to provide access to your service to those IPv4 only users/devices? Whatever that value is, in theory you would be willing to pay a portion of that for access to IPv4 address space.
Luckily supply is not really constrained, so much as it is controlled. You can always get more IPs if you need them, but the cost associated I think will continue to increase... until enough people not only just support IPv6, but actually abandon their IPv4 addresses.
When the only devices that your software or service is designed to run on all support IPv6, then there's "no point" in having an IPv4 address. You almost have to get to the point where IPv4 is "not worth the trouble". And we are very, very far from that point I think. More to the point, more people are likely to think that it's IPv6 that is not worth the trouble.
Look at the other side of the coin though. If you have a service that only talks ipv6, you will be insulated from all the unsupported, unpatched, and trojan/virus laden XP machines for years to come!
My strongest incentive for implementing IPv6 at work is that Gmail is advertising MX hosts that have both IPv4 and IPv6 addresses. So on systems which are IPv6 capable, where the OS either picks a random address family or prefers IPv6, you'll end up with lots of noise as it tries IPv6, fails, and falls back to the next address - if you don't have IPv6 connectivity set up.
Could of course just ignore that, but it's a good low pressure reminder to get around to sorting out IPv6...
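The dual-family resolution behind this is easy to observe with `getaddrinfo`. A rough sketch of the naive "prefer IPv6, fall back to IPv4" ordering (real resolvers use RFC 6724 / Happy Eyeballs logic, which is more involved):

```python
import socket

def addresses_by_preference(host, port):
    """Return resolved addresses with IPv6 first and IPv4 after -
    the naive ordering that causes the noisy fallback described above
    when no IPv6 connectivity is actually available."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    v6 = [sa[0] for fam, _, _, _, sa in infos if fam == socket.AF_INET6]
    v4 = [sa[0] for fam, _, _, _, sa in infos if fam == socket.AF_INET]
    return v6 + v4

# On a host with both loopback families configured this typically
# yields something like ['::1', '127.0.0.1'].
print(addresses_by_preference("localhost", 25))
```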
10% gets us something like 6 months from memory - more than nothing, but not enough to hold off the inevitable!
Windows XP is a pain all right, the lack of SNI along with the limited v6 support means anyone using it really is stuck on v4.
Maybe the recent EOL for XP will cause a dramatic shift in the number of people continuing to use it? At least in the more well-off countries like the US, UK etc, opening the door for SNI :)
A lot of websites could use IPv6 to connect to a proxy/security provider such as CloudFlare, which would serve the site to IPv4/IPv6 end users with all the other bells and whistles that CloudFlare offers.
The end user then doesn't even know or care that the website they're using is IPv6.
There's still some issues with HTTPS which need further roll-out to completely solve this problem (i.e. not everywhere supports SNI yet), but yes, for a lot of web sites this would work perfectly fine, and indeed is how many PaaS providers work.
Seems a little absurd that one company just got a quarter of all remaining ARIN IPv4 addresses when there is such an insane crunch looming, no matter how big and important they are.
That's just an effect of how the roadmap works. The registries defined a number of points at which the allocation rules became/become stricter. They could have defined more points, i.e. smaller and more frequent rule changes, but what would the benefit be, really?
You should know that subnets are not a thing of the past with IPv6. Unless you meant to say "I can't wait to have one big net for all of $CORP".
Even then, I wouldn't rule out partitioning your big net - there is enough stupid software and enough OSes around which happily blabber (i.e. broadcast) to the attached subnet. It becomes quite a nuisance when you've got hundreds of nodes.
Haha, I'm laughing at that statement. You do realize that NATs merely extend the address space by mapping connections from computers with non-public IPs onto a uniquely identifiable port number? So depending on the average number of concurrent connections per peer, NATs won't be able to extend the IPv4 address space by more than, say, 12 bits.
NAT does tuple mapping - src/dst addr/port and protocol. That is, two TCP mappings can use the same local external port even if they go to the same remote address, as long as they connect to different remote ports.
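A toy sketch of that tuple mapping (hypothetical RFC 1918 / TEST-NET addresses; real NAT implementations live in the kernel and track far more state, timeouts included):

```python
# NAT translation table keyed by the full connection tuple, so one
# external port can be reused as long as the remote endpoint differs.
nat_table = {}  # (proto, ext_port, remote_ip, remote_port) -> (int_ip, int_port)

def map_connection(proto, internal, remote, ext_port):
    key = (proto, ext_port, remote[0], remote[1])
    if key in nat_table:
        raise ValueError("tuple already in use")
    nat_table[key] = internal

# Two internal hosts share external port 40000 toward the same remote
# IP, because the remote *ports* differ - the tuples stay distinct.
map_connection("tcp", ("10.0.0.5", 51000), ("203.0.113.9", 80), 40000)
map_connection("tcp", ("10.0.0.6", 51001), ("203.0.113.9", 443), 40000)
print(len(nat_table))  # 2
```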
This is true, but consider that while there are 16 bits of ports on both source and destination, in reality, nearly all traffic flows to a very short list of ports. Most connections, and connections are what matters here, not total traffic, are going to have a port 80 or 443 on one side of the connection or the other. So while I think 12 bits is too low, in practical use, you aren't getting the 32 bits you think you are getting. Considering source port restrictions, I think saying single-level nat extends IPv4 with another 16 bits is not too far off. Of course, that is still a lot of IPs.
Of course, nat has a bunch of other pain in the ass problems, especially in that if I want to be able to track abuse, I've got to log every new connection (flow, whatever) that you make. When I get a complaint, I've got to match that up to my logs, which can be goddamn difficult if the complainer's clock isn't just right.
With static IPs it's way easier to track abuse, and I don't have to actively log what you are doing, just who has what IP when, and because IPs stick around a lot longer than connections, I'm way less vulnerable to clock drift.
Shortly after submitting my reply I had a similar thought, but after some googling I cannot find any source saying that this kind of "port sharing" NAT is actually used. Looking at RFC 3022, "Traditional IP Network Address Translator", it looks like unique port numbers are usually allocated per connection. Maybe there are protocol subtleties or security implications that keep implementors from going with "port sharing" NAT?
My argument still stands in the case where all peers behind a common NAT router try to access the same IPv4 server (which may well happen with centralized services like youtube/facebook/google). It also stands for UDP-based services (anything depending on a Cone NAT and using STUN, like VoIP or online gaming).
All the proposed variants required that. Some hid the requirement a little better than others.
Some proposals were superset-like, so v4 and v6 addresses could ping each other. But not all v6 addresses. As soon as the v4 space was used up, those variants had to allocate v6 addresses that could not ping v4 addresses, so you got a sneaking incompatibility. Worse: you'd never know for sure whether there were any v4-only hosts left on the network.
Clean break with the new protocol NATing the old protocol. IPv4 addresses not behind the NAT are not accessible on the new internet. Exploit the asymmetry inherent in services vs clients.
This seems to be a problem that markets will solve pretty well: as the resource gets scarcer, the price will go up, meaning people will have an incentive to use it more efficiently (selling unused address space), and/or look for alternatives (IPv6).
The major cost of IPs is routing table bloat. Every time I announce a new block, every router on the internet (that carries a full table) needs more (very expensive) memory.
The size of the routing table has been growing faster than the cost of fast router memory has been falling for some time now.
I mean, if you are only pushing a gigabit of traffic (and /maybe/ 10 gigabits, especially if the packets are large), it's not that big of a deal; you can use DRAM and CPUs with large caches, and it's fast enough. But if you own a real pipe and have to push 40 gigs, or really even 10 gigs of small packets, my pair of vyatta routers on Xeons just isn't going to cut the mustard.
It's kind of a 'tragedy of the commons' because when I buy IPs, that money goes to ARIN (or to the previous owner of those IPs) - none of that money goes to all the router owners who have to pay for more fast router memory - even though I'm costing those people money.
The problem with runout intersects with this. If I need, say, 4000 IPs, I can get one /20, and occupy only one routing slot, or I can get 16 /24s and occupy 16 routing slots. From my point of view, from the point of view of the person who owns the IPs, there really isn't much difference between one /20 and 16 /24s. But the rest of the internet has to pay 16x as much if I get 16 /24s.
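The arithmetic is easy to check with Python's `ipaddress` module (10.0.0.0/20 used here purely as an example block):

```python
import ipaddress

block = ipaddress.ip_network("10.0.0.0/20")
print(block.num_addresses)  # 4096 - enough for ~4000 hosts

# The same address space split into /24s: 16 separate announcements,
# i.e. 16 routing table slots instead of one.
slots = list(block.subnets(new_prefix=24))
print(len(slots))  # 16
```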
eh, they are aware of the problem, but their solutions are still centralized ones; either preventing de-aggregation altogether, or making the end-user justify the usage, presumably to some central authority.
I mean, it might work out okay; that's pretty much what ARIN does now. I'm just saying, it's not exactly a market-based solution; that document proposes a market in IP addresses, but it largely leaves the routing table as a commons, even if it does propose to regulate that commons in ways that are similar to the way it is being regulated now.
Like I said: markets are not perfect, but what are the alternatives? I think it'll mostly work out ok: as prices go up, it'll push people to get serious about IPv6 and other alternatives.
My point is that the most important resources (routing table slots) are still allocated via informal central planning in your proposed scheme. And that isn't unreasonable; usually if you want a functional market, you need /something/ dealing with the externalities.
In the general case, sure, I like markets, too. It's just that markets deal very poorly with externalities, and I want to make the point that the way most people want to set up a market for IPs, routing table slots are externalities.
There is currently a process for selling IP addresses:
My understanding is that you give the previous owner enough money to make them happy, then you satisfy the requirements that you would have had to satisfy to get ARIN to give you the resources if you were requesting said resources from ARIN directly.
Not really; there's a mismatch between the people who incur the cost of getting IP address space, the content providers, and the people who would incur the biggest costs switching to IPv6, the connectivity providers.
Markets are usually not perfect, but tend to work decently even under imperfect conditions.
Edit: it's kind of mind boggling that people on a site about startups are so anti-market. Lord knows they don't solve all the world's problems, and all of us can think of examples where they don't work, but they generally work pretty well, and I don't see evidence that this is not the case here.
I am curious whether this news affects how many HN readers involved in product development are building their products / websites with IPv6 support?
Maybe this will help the folks managing news.ycombinator.com click the button and dual-stack their site? (Being behind CloudFlare, it is really just a click away, I am told.)
It's our DC, not our broadband connection. We have several peers in the DC, but not all are IPv6, and the level of competence is "variable". We had some techs unable to understand IPv6 even at a basic level, and one peer screwed up our routing.
If the certificate authorities can reduce the price of a SAN cert (multiple domains on a single certificate and IP, not to be confused with SNI), I guess a vast number of IPv4 addresses could be saved; e.g. we host quite a number of client sites and currently need an IPv4 address for each site just for SSL.
IPv6 is still a terrible solution for so many reasons. Running out of addresses wasn't and still actually isn't a problem, so stop pretending like it is.
The IETF should have taken a pave-the-cowpaths approach and cleaned up IPv4, not created a huge incompatible IPv6 mess.
Lots of issues with (arguably broken) sites that don't like it when your IPv4 address changes.
A broken cable modem for months (mandatory, btw) that froze whenever the IPv6 prefix was renewed (not even necessarily changed): every 2-3 days the box locked up.
While I appreciate the 'no addresses left' argument, I was 'upgraded' to this crap. Before that I had a (dynamic) IPv4 address. My cable provider sits on plenty of IPv4 addresses, and it isn't about to jump from providing cable in Germany into the huge Indian/Chinese market and run out of them.
Why do we allocate huge networks (was it a /10 recently?) to Akamai if we're scarce?
No way to access my home machines anymore. Dyndns is dead. I could implement that for IPv6, and that would be even better in theory; alas, most people/networks/mobile carriers are on IPv4. So my AAAA record is utterly useless, even if I keep it up to date.
In the end I'm a fan of ipv6. I ran ipv6 tunnels in the past, native ipv6 is cool. Somewhat. CGNAT is bullshit for the reasons above and more and forcing it down to your customers is a tough sell (-> The ISP loses goodwill here). For new or exploding markets? Probably no other way. Here it's just useless. Dual Stack is fine, DS Lite causes trouble again and again in my setup here.
And there's never an excuse for a known issue that requires your customers to unplug the cable modem you provided, because .. well .. it's not quite ipv6 ready yet. [1]
1: That was an issue for months, solved around the end of February. The support hotline knew about it, the device manufacturer knew about it; for the official workaround, see above.
In case you haven't noticed, we are running out of IPv4 addresses. CGNAT for v4 is inevitable one way or another. That is why we have IPv6. At least with DS-lite there is only one layer of NAT, compared to full dual-stack which usually implies two layers of NAT (CPE and CG). But all that shouldn't matter that much when you have nice unobstructed end-to-end connectivity with IPv6.