$Id: Issues.html,v 1.142 2016/09/29 15:06:32 gdmr Exp $
This project is to conduct an initial investigation into the issues and likely effort involved in bringing in IPv6 support. This page lists (some of!) the issues as they were understood at the time of writing. It will be annotated and added to through the life of the project. It is expected that a number of follow-on projects will result to cover specific work items which are identified. The project's index page has some useful additional links, and the final report is here.
Note that it is not being suggested that IPv4 should be phased out any time soon. On the contrary, dual-stack running is the most likely outcome for several years yet.
Note also that as things progress items in this document may be snipped, changed or struck out in the light of experience or decisions taken.
Thanks to Sam for some very useful comments on previous drafts. Usual disclaimers apply!
Given that we're managing to get along with IPv4, why do we want to add IPv6 support? There are a few reasons:
(See also the separate document on IPv6 unicast addresses.)
This is the most obvious change from IPv4: addresses are now 128 bits, written as eight colon-separated groups of four hex digits. Leading zeros within each group are suppressed, and the longest run of all-zero groups can be elided to "::".
Loopback is ::1/128.
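These textual rules can be checked with Python's ipaddress module (a minimal sketch; the address below is just an illustrative one from the University prefix):

```python
import ipaddress

# Compression suppresses leading zeros in each group and elides the
# longest run of all-zero groups as "::".
addr = ipaddress.IPv6Address("2001:0630:03c1:0000:0000:0000:0000:0001")
print(addr.compressed)   # 2001:630:3c1::1
print(addr.exploded)     # 2001:0630:03c1:0000:0000:0000:0000:0001

# Loopback is a single address, not a subnet.
print(ipaddress.IPv6Address("::1").is_loopback)   # True
```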
The University's IPv6 block is 2001:630:3c1::/48. Sam's current thinking is to split this off by (BCD-encoded) VLAN: 2001:630:3c1:160::/64, for example. Non-VLAN IPv4 subnets can be assigned IPv6 network numbers as required.
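The BCD encoding can be sketched as follows: the VLAN's decimal digits are read directly as the hex digits of the /64's subnet-id group. (The vlan_subnet helper is purely illustrative, not an existing tool.)

```python
import ipaddress

UNI_PREFIX = ipaddress.IPv6Network("2001:630:3c1::/48")

def vlan_subnet(vlan_id: int) -> ipaddress.IPv6Network:
    """Map a VLAN ID to a /64 by writing its decimal digits as the hex
    digits of the subnet-id group (BCD), e.g. VLAN 160 -> 0x160."""
    subnet_id = int(str(vlan_id), 16)   # decimal digits reinterpreted as hex
    base = int(UNI_PREFIX.network_address)
    return ipaddress.IPv6Network((base | (subnet_id << 64), 64))

print(vlan_subnet(160))   # 2001:630:3c1:160::/64
```

The attraction of BCD over a plain binary encoding is that the VLAN number is readable straight off the printed address.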
Link-local addresses: these come from fe80::/10, and are usually based on the MAC address. You can always use these to talk to immediate neighbours. They're also used in neighbor (sic) discovery ("ND", like ARP) and DHCPv6, inter alia. Multicast-DNS services (e.g. bonjour, avahi) might also advertise these, though for managed machines this may not matter.
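The conventional modified-EUI-64 derivation of a link-local address from a MAC can be sketched like this (note that modern stacks may instead use privacy or stable-privacy interface IDs):

```python
import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    """Derive the modified-EUI-64 link-local address from a MAC:
    flip the universal/local bit, insert ff:fe in the middle,
    and prepend fe80::/64."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                          # invert the U/L bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iid = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) | iid)

print(eui64_link_local("00:16:3e:12:34:56"))   # fe80::216:3eff:fe12:3456
```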
There are various other classes and scopes of IPv6 addresses. It's proposed to ignore these for now as much as possible, and to take another look at how we might extend our use of IPv6 once the whole thing is up and running and bedded in. See RFC 4291 for the details!
There's no reason not to support it in such a way that potentially every machine could have (and use) a global IPv6 address. Or indeed many IPv6 addresses each. And that's the model we should be aiming for.
However, there's no reason why we would actually have to give an IPv6 global address to everything. Network switches, for example, might very well be able to get along with only link-local IPv6 addresses (alongside their existing IPv4 addresses). Printers might not even need IPv6 addresses at all.
(See also the separate document on IPv6 unicast addresses.)
How?
However you get your address, it'll be validated by DAD ("Duplicate Address Detection") before you're allowed to use it.
We need this so that we can audit traffic and tie addresses to the corresponding switch ports.
For IPv4 we use arpwatch.
There's ndwatch, perhaps better known as NDPMon, on SourceForge, which might be worth investigating. It appears to have been packaged for Ubuntu, though not for RedHat. We have packaged up addrwatch to provide some similar functionality.
Alternatively we might harvest the core switches' tables, though this gives a bit less in the way of immediacy of notification as well as potentially missing short-lived address mappings.
Some of these are this way because we don't need them to be globally routed. Some of them are this way because we deliberately don't want them to be routed for some reason. We'll have to find a way to deal with the latter group. Filtering at the edge is likely to be the answer.
IPv6 incorporates IPsec: specifically AH and ESP are part of the base spec. This is mostly a host issue (but see below re OSPFv3), where we should eventually look at IKE.
(Note: we're considering static CO-assigned addresses here. For dynamic addresses handed out by DHCP we might want to make different choices.)
This is a fundamental decision that we will have to take fairly early on, as the more machines with (global) IPv6 addresses we have, the harder it will be to renumber later.
The very first thing we have to decide is whether we want to try to link the IPv4 and IPv6 addresses in some way, or whether we would be happy to have them be completely independent.
The development meeting held on 2nd September 2015 decided there was no obvious need to link addresses, and some good reasons not to. With hindsight and some experience, it really wouldn't have been a good idea to have tried to link them.
Since the IPv4 and IPv6 stacks run totally independently (apart from address selection when connecting) the only reason we might want to maintain a link would be for our own convenience, both when assigning addresses through dns/inf and when interpreting them ourselves.
If they are to be linked then we would need a scheme whereby the IPv4 address could be translated to a corresponding IPv6 address, and back again, in an obvious way.
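For illustration only, one such reversible scheme would be to embed the whole 32-bit IPv4 address in the low bits of the subnet's /64 (this is a hypothetical sketch, not the scheme that was adopted; the subnet and address are examples):

```python
import ipaddress

SUBNET = ipaddress.IPv6Network("2001:630:3c1:160::/64")   # example /64

def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
    """Embed the 32-bit IPv4 address in the low bits of the /64."""
    return ipaddress.IPv6Address(
        int(SUBNET.network_address) | int(ipaddress.IPv4Address(v4)))

def v6_to_v4(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

a = v4_to_v6("129.215.160.42")
print(a)             # 2001:630:3c1:160::81d7:a02a
print(v6_to_v4(a))   # 129.215.160.42
```

The obvious cost, visible above, is that the embedded IPv4 address is unreadable in hex, which is one reason linking is less attractive than it first appears.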
This was the approach chosen at the Development Meeting held on 2nd September 2015. With experience, it turns out to have been eminently sensible!
On the other hand, if we decide we don't need to link the IPv4 and IPv6 addresses then we can just number arbitrarily within each /64 subnet. This approach has its attractions too.
And given that IPv6 address space is plentiful, there's scope for creative use!
We'll need to have some way to assign fixed IPv6 addresses to hosts. Several ways come to mind:
(In any case, makeDNS will have to know how to implement our preferred IPv4-to-IPv6 mapping scheme. See above.)
We would also need a way to add IPv6-only entries, ideally in a way better than simply adding "#verbatim" AAAA RRs.
makeDNSv6 doesn't replace makeDNS for IPv4, but rather works in tandem with it.
Once we have the addresses in our zones, our current versions of bind don't have any problem with serving them out.
Should we bother to continue to generate reverse mappings for dynamic DHCP-leased addresses? That's a question that we can defer to the DHCP sub-project.
lcfg-dns is in serious need of an IPv6 makeover!
At the moment we have lcfg-dns set up /etc/gai.conf to suit our IPv4-only setup. There might be changes needed here in the light of experience with a live parallel IPv6 stack. It turns out that this just works for IPv6 using the default rules, as augmented by us for IPv4, and no change is necessary. In any case RFC 6724 has added some additional entries to the default policy table which we should track in lcfg-dns.
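For reference, a gai.conf carrying the RFC 6724 table might look something like the sketch below (values taken from the RFC's default policy table; note that glibc discards its entire built-in table as soon as any label or precedence line is given, so the whole table has to be listed, plus any of our own IPv4 additions):

```
# /etc/gai.conf -- RFC 6724 default policy table (sketch)
label  ::1/128        0
label  ::/0           1
label  2002::/16      2
label  ::/96          3
label  ::ffff:0:0/96  4
label  2001::/32      5
label  fec0::/10     11
label  3ffe::/16     12
label  fc00::/7      13
precedence  ::1/128        50
precedence  ::/0           40
precedence  ::ffff:0:0/96  35
precedence  2002::/16      30
precedence  2001::/32       5
precedence  fc00::/7        3
precedence  ::/96           1
precedence  fec0::/10       1
precedence  3ffe::/16       1
```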
The following of our switches support IPv6:
The following of our switches DO NOT support IPv6:
These are also used as Forum "phones" switches, though there it's probably not actually needed. We should cut back on the VLANs they carry, though.
The other one is in closet 2A, and is set up as a standby router for the Forum, against the possibility that all of the power in B.02 goes out. We might decide we don't actually need to have it, but if we do want to keep the functionality we'll have to replace it with something new. Something might drop out of a mooted core replacement at some point.
We wouldn't expect to assign IPv6 addresses to the edge switches, unless it turned out that some feature didn't work without one. However, there are some IPv4 features that we would want to have duplicated for IPv6:
addrwatch.
We don't actually do this for IPv4 at the moment, but it would be really good if we could restrict RA to trusted systems. It looks as though ra-guard does this.
If we set any IPv6 address on a switch then it becomes a target for "management" connections. We therefore have to restrict these, as we already do for IPv4.
Advertise the default route using RA from the core, and block it from everything else using ra-guard. The switches don't appear to be able to set RA preferences (fixed in later firmware versions), so the first RA received sets the default route. This is actually better for the self-managed subnets than what they have with IPv4, as there would be router redundancy for them, as opposed to just one default route handed to them by DHCP. However, it does mean we wouldn't want to use RA on most of the Linux edge routers.
Question: what do we want to do about RA on E42 and E160?
We could perhaps just punt this to the EdLAN routers??
Alternatively, BIRD (at least) can do RA so that's what we've done.
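A minimal BIRD radv stanza for this might look like the following sketch (the interface name, timer and prefix are assumed example values, not our live configuration):

```
# bird6.conf -- minimal RA sketch
protocol radv {
        interface "eth0" {
                max ra interval 10;
                prefix 2001:630:3c1:160::/64 {
                };
        };
}
```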
OSPFv3:
Note that the security model for OSPFv3 for IPv6 is rather different from OSPFv2 for IPv4. This will need some thought and development.
OSPFv3 or BGP to speak to EdLAN?? To be decided. OSPFv3 would probably be simpler, while BGP would offer more control.
We're using OSPFv3. It's up and running!
This might well become more important, with the absence of RFC1918-type addresses. There is already some basic IPv6 support in lcfg-iptables, though, and in principle it would "just" be a case of modifying the existing scripts and fragments to cover the IPv6 cases.
Done: turned out to be straightforward.
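The flavour of the IPv6 cases is roughly as below (an illustrative ip6tables-save style fragment, not our actual lcfg-iptables output; the main IPv6-specific point is that ICMPv6 must not be blocked wholesale, since neighbour discovery and path-MTU discovery depend on it):

```
# illustrative ip6tables rules fragment
*filter
:INPUT DROP [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```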
We would definitely need this before we could put IPv6 on the self-managed and managed-Windows subnets. DICE machines mostly use DHCP only at installation time, and could be installed over IPv4 as at present. Deferred to a separate project. SLAAC seems to work, mostly.
OpenVPN's support for IPv6 is coming, but isn't really mature yet. It might be that it would offer enough for our target use pattern (road-warriors connecting back home), but some more detailed investigation will be required at some point.
We may also need a way to assign addresses on non-VLAN subnets. This case arises where we have split an IPv4 /24 into /25 or even /26. As these are routed directly by the endpoints there's no need to assign a VLAN tag for any of them. However, for IPv6 we would need a chunk of global address space to use in place of a "tag" block.
(See also the separate document on IPv6 unicast addresses.)
Managed DICE machines will need to know how to set their static IPv6 address(es). This may just be a case of extending lcfg-network to be able to acquire the correct values to drop into the configurations. SLAAC seems to work pretty well except where a CO-defined address is really required.
In addition to any statically-configured IPv6 address, (SL6 at least) DICE machines will autoconfigure a global one with the same interface number as they use for their autoconfigured link-local address. (If we don't have this turned on then it looks as though they won't pick up a default route through RA, so we'll just have to live with it.) It doesn't appear to be problematic (so far). There's an ifcfg-file switch which seems to control this, should we want to.
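The relevant ifcfg excerpt might look something like this (the address is an assumed example; IPV6_AUTOCONF is the switch in question):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative excerpt
IPV6INIT=yes
# example CO-assigned static address
IPV6ADDR=2001:630:3c1:160::42/64
# set to "no" to suppress the SLAAC-derived global address
IPV6_AUTOCONF=no
```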
Windows and Macs will probably just acquire and use an IPv6 address through DHCP, once we have it set up. It should Just Work.
The tardis people have asked on and off for a while about IPv6. They currently have 193.62.81/24 which is routed directly by EdLAN for them. As we're proposing moving them behind our edge anyway, it would make more sense for their IPv6 allocation to be done through us, most likely by them getting a handful of "non-VLAN" blocks of EdLAN space.
As soon as we turn on IPv6 on at least some of our subnets, the machines on them will likely immediately start to use it. So we need to sequence the way we test and enable things (strike out as done):
march on S33. ping6 between them works.
rfe dns/inf6. Forward and reverse zones are now carried on all our nameservers, and have been delegated from above.
live/netinf-macros.h, which can set up a static address for a VLAN/subnet.
The following were originally bundled into this project, but it now appears that it would make more sense for them to be done separately, either as their own individual projects or as operational or miscellaneous small-scale development. They are retained here for the record.
This is a list of niggles that we have spotted along the way, in no particular order:
rsync is inclined to prefer IPv6 if it can use it, and if a machine uses a global dynamic address then it likely won't match against this "allow" list if forward/reverse matching is in place unless there's a %slaac entry for the hostname. A workaround may be to use the "-4" flag to tell rsync to prefer IPv4. In principle it would be possible to automatically populate the appropriate reverse DNS zones with all of the expected addresses and names, though more investigation would be required to determine whether this would really be desirable in general. Answer: it depends! For desktops it is, and we've done it. For servers it's better to leave it to the machines' managers.
The network machines now all set it to "any", with no bad effects so far. Is this just historical? It was. Now removed.