I'm interested in thoughts on a scenario where you want to run small-scale multi-site operations with site-to-site connectivity.
Here are a few constraints:
- You're not going to pay for a provider-independent assignment; you'll just have ISP-assigned global space.
- Your two or more sites will have different ISPs.
- You're doing VPN between sites instead of provider-managed connectivity. The sites might be running normal enterprise services like Active Directory, or other internal corporate staples.
- You might need a backup Internet connection; load balancing would not be required.
Given that the globals could change at a site, would you consider using ULA? Or would you just stick with global addresses and update DNS in the event of a change? I know there's an address-selection preference problem with IPv4 being chosen over ULA, so the ULA approach wouldn't be very easy unless you went straight v6.
If ULA, would you pattern the convention after each site's global prefix, or create one organization-wide ULA prefix and assign something like a /48 per site?
What precautions do you take on gateways to ensure globals aren't used outside of the tunnel? ULA prevents this, but I assume proper configuration does too.
How would you do this?
I keep asking about ULA because I've heard and read enough articles where the author says not to do it, but those seem geared toward large enterprises or hosting outfits that would definitely get dedicated blocks, peering, etc. I'm interested in the little guy.
To be clear, the issue of IPv4 being preferred over ULA only arises with calls to things like getaddrinfo(), which can return multiple results for a single hostname query.
The solution to this is easy: either don't put your hosts' IPv4 addresses in your DNS, or give them a different name. For example, use hostname.domainname.xyz for your IPv6 addresses and hostname.ipv4.domainname.xyz for your IPv4 addresses. Lookups against hostname.domainname.xyz will return ULAs in this case, as there is no A record for that name.
People (many of whom should know better) act like machine A can somehow magically determine that machine B has an IPv4 address and just start using it instead of the ULA, but the precedence rules only apply to a DNS query that returns both an IPv4 address and a ULA. Design your DNS so that this isn't the case, and you won't have to worry about it.
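If you want to check what the resolver actually hands back to applications, here's a quick sketch using Python's getaddrinfo() wrapper; the hostnames are hypothetical placeholders for the naming scheme above:

```python
import socket

# Hypothetical names: the first should have only a AAAA record (the ULA),
# the second only an A record, per the naming scheme described above.
NAMES = ["host1.domainname.xyz", "host1.ipv4.domainname.xyz"]

for name in NAMES:
    try:
        results = socket.getaddrinfo(name, None, type=socket.SOCK_STREAM)
    except socket.gaierror as err:
        print(f"{name}: lookup failed ({err})")
        continue
    for family, *_rest, sockaddr in results:
        record = "AAAA" if family == socket.AF_INET6 else "A"
        print(f"{name}: {record} -> {sockaddr[0]}")
```

Since the IPv6-only name never returns an A record alongside the AAAA, the precedence rules never get a chance to prefer IPv4.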
That's a good point; I hadn't thought about that.
This reminds me of an article about ULA. The TL;DR I got from it was: yeah, use ULA if you are a multi-site organization but can't afford PI space.
Quote:
In the meantime, we’ll have to use kludges like NAT66 and ULAs in mid-market IPv6 implementations, not because we love them, but because they’re the best tools we have at our disposal.
ULA is problematic on dual-stacked networks, just as you mentioned, although there are drafts trying to fix it. For now, you may have to consider running a NAT64 gateway in your network and going IPv6-only.
Do you have thoughts or experience with ULA planning for sites?
In v4 land, I took a /18 of private space and assigned it to each of four sites as /20s.
Looking for any reason I couldn’t do the same.
It should be largely similar in v6 land. Generate yourself a random ULA /40 prefix; the randomness is there to prevent collisions should your organisation's network ever merge with another's.
Assign each of your sites a /48 taken out of this /40 prefix. Try to future-proof your addressing plan, and remember that each /48 contains 65,536 /64s; you can afford to "waste" them.
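For generating the prefix, here's a minimal sketch of the RFC 4193 section 3.2.2 procedure. Note the RFC's algorithm produces a 40-bit global ID, i.e. a random /48, so adjust if you prefer the /40 approach; I'm also substituting random bytes for the EUI-64 it calls for:

```python
import hashlib
import secrets
import time

def random_ula_prefix() -> str:
    """Generate a random ULA /48 following RFC 4193 section 3.2.2:
    hash an NTP-format timestamp plus an EUI-64, keep the low 40 bits."""
    ntp_time = int((time.time() + 2208988800) * 2**32).to_bytes(8, "big")
    eui64 = secrets.token_bytes(8)  # random stand-in for a real interface EUI-64
    global_id = hashlib.sha1(ntp_time + eui64).digest()[-5:]  # low 40 bits
    prefix = b"\xfd" + global_id  # fd00::/8 (L bit set) + 40-bit global ID
    return ":".join(prefix[i:i + 2].hex() for i in range(0, 6, 2)) + "::/48"

print(random_ula_prefix())  # e.g. fd4f:9c3a:12de::/48
```

You can run this once per site, or generate a single prefix and carve smaller subnets out of it, depending on which convention you settle on.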
But also note that the “best practice” is to use ULA for intra/inter-site communications only. Since IPv6 hosts can be assigned multiple addresses, it is possible to assign them a GUA for communications with the wider internet, and a ULA for internal communications.
In reality, though… some machines may use their GUA as the source address even though the destination is a ULA. Firewalling gets hairy. :(
Quote:
Firewalling gets hairy. :(
This is what I was worried about. The protocol advocates say this isn't supposed to happen, but early reports from organizations said they were seeing internal traffic on the Internet that was intended to be tunneled.
I guess the sites would need some pretty wide deny statements that block the types of traffic you don’t expect to leave.
I wonder if orgs block their own internal GUA prefixes at the firewall to make sure their traffic gets tunneled. Or maybe they use GUA internally but block everything at the firewall except the tunnel endpoints, so that if traffic escapes an ACL meant to steer it into the tunnel, it gets blocked by default?
The main disadvantage of ULAs is that in dual-stack networks Windows prefers IPv4 over them. In principle Linux should too, but glibc follows an older RFC and as a result in practice picks ULAs over IPv4. If your GUA space is subject to change, I would definitely recommend ULAs; dynamic DNS is more headache than it's worth. As others have mentioned, I would keep IPv4 out of your internal DNS so that ULAs are preferred. If you want to dual-stack your internal DNS, there are ways to configure clients to prefer ULAs over v4.

Personally, I run both ULAs and GUAs internally, even with my own direct allocation, but that's because of dn42. What I do on my gateways to prevent leaks is a routing policy that returns an ICMP host unreachable if the source is fd00::/8 and the destination is 2000::/3; that way the gateway blocks any address mismatch. I also have a policy for the opposite GUA-to-ULA scenario.

One other note: technically ULAs are supposed to be random /48s, and while others have mentioned generating a /40, that's not technically in spec. Ideally you would generate one /48 per site, or use a single /48 and then do a /56 per site. Obviously do what you want and what makes the most sense for you, but I'm going to put that info out there.
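For concreteness, here's a sketch of that routing policy on a Linux gateway using iproute2 (run as root; this is one way to express it, not the only one):

```python
import subprocess

def run(*cmd: str) -> None:
    """Run an iproute2 command, raising if it fails."""
    subprocess.run(cmd, check=True)

# A dedicated routing table whose only route rejects everything, so any
# packet steered into it is answered with an ICMPv6 unreachable.
run("ip", "-6", "route", "add", "unreachable", "default", "table", "100")

# ULA source to global destination: block the mismatch at the gateway.
run("ip", "-6", "rule", "add", "from", "fd00::/8", "to", "2000::/3",
    "table", "100", "priority", "1000")

# The opposite GUA-to-ULA mismatch.
run("ip", "-6", "rule", "add", "from", "2000::/3", "to", "fd00::/8",
    "table", "100", "priority", "1001")
```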
This makes a lot of sense. Thanks.
I’ll take a stab at this, although I profess no professional experience in network management and my interest in IPv6 is primarily in advocacy and in homelabs. With that said…
This seems like a situation where ULAs make some sense, but only for inter-site traffic. So I'd probably first provision global addressing from each site's ISP, then overlay ULA addressing on those sites as well, carving out /48 prefixes using the procedure from an RFC I can't remember, so that the prefixes are randomized. This reduces the chance that a future site will conflict, as well as conflicts arising from company mergers and acquisitions.
Keeping ULA traffic off the WAN is relatively easy with a reject rule on egress, and the ISP should be dropping that traffic anyway. Keeping global traffic off the tunnels can be done the same way, with reject rules whenever a non-ULA source or destination address is used.
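As a concrete sketch of those reject rules, assuming a Linux gateway running nftables, with "wan0" as the WAN interface and "wg0" as the site-to-site tunnel (both names are placeholders):

```python
import subprocess

# Reject ULA traffic on the WAN and non-ULA traffic on the tunnel.
# Interface names below are placeholders for your own.
RULESET = """\
table inet leakguard {
    chain forward {
        type filter hook forward priority 0; policy accept;

        # Keep ULA traffic off the WAN; the ISP should drop it anyway.
        oifname "wan0" ip6 saddr fc00::/7 reject
        oifname "wan0" ip6 daddr fc00::/7 reject

        # Only ULA-to-ULA traffic belongs on the inter-site tunnel.
        oifname "wg0" ip6 saddr != fc00::/7 reject
        oifname "wg0" ip6 daddr != fc00::/7 reject
    }
}
"""

# Load the ruleset atomically from stdin.
subprocess.run(["nft", "-f", "-"], input=RULESET, text=True, check=True)
```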
DNS for internal site resources can use ULA just fine, unless you absolutely need reverse DNS. For resources that need external exposure, I'm not sure this plan accommodates that.
TL;DR: use both ULA and global addresses, with routing rules to reject leaks.
Quote:
DNS for internal site resources can use ULA just fine, unless you absolutely need reverse DNS. For resources that need external exposure, I'm not sure this plan accommodates that.
Split-horizon DNS is a thing that exists. This part is a solved problem.