CCIE - EI: 3.2 DMVPN 📝

2022-04-29 · Topic: CCIE-EI

This is a summary of the notes I’ve written for CCIE-EI 3.2 DMVPN. In other words, this only contains what I felt the need to write down and is not meant as a complete study resource. Please see the study resources I’ve used or related blogs for more coherent writeups.

3.2 - DMVPN

Docs: Support > Products & Downloads > Networking Software (IOS and NX-OS) > IOS-XE 16 > Configuration Guides > 16.12.1 > Dynamic Multipoint VPN

  • Phase 1
    • No spoke-spoke communication
    • Summarization/filtering on hub
  • Phase 2
    • Spoke-spoke communication through NHRP requests only
    • Next-hop addresses must be intact for spoke-spoke communication
    • No filtering/summarization
    • Hub briefly used in spoke-spoke data-plane until the NHRP resolution reply has been received
  • Phase 3
    • Spoke-spoke communication through NHRP requests/redirection.
    • Filtering/summarization allowed
    • NHRP redirect sent to the packet source when the ingress and egress interfaces are equal.
    • Hub briefly used in spoke-spoke data-plane until the NHRP redirect/resolution reply has been received.
    • Hierarchical DMVPN is supported without hub daisy-chaining.

DMVPN Basics

Basic configuration

Hub configuration

int tun 0
 ! Basic settings
 ip address {}
 tunnel source {int}
 tunnel mode gre multipoint

 ! Ensures fragmentation/reassembly happens on the endpoints.
 ip mtu 1400
 ip tcp adjust-mss 1360

 ! Minimum viable NHRP config
 ip nhrp map multicast dynamic
 ip nhrp network-id {num}
 ip nhrp holdtime {seconds}
 
 ! Enables NHRP redirects for phase 3 spoke-spoke tunnels
 ip nhrp redirect

 ! If multiple DMVPN tunnels exist from the same interface
 tunnel key {num}

 ! IPsec
 tunnel protection ipsec profile {}

Spoke configuration

int tun 0 
 ! Basic settings
 ip address {}
 tunnel source {int}
 tunnel mode gre multipoint

 ! Ensures fragmentation/reassembly happens on the endpoints.
 ip mtu 1400
 ip tcp adjust-mss 1360

 ! Minimum viable NHRP config
 ip nhrp network-id {num}
 ip nhrp map multicast {hub underlay ip} 
 ip nhrp map {overlay IP} {underlay IP}
 ip nhrp nhs {overlay IP}

 ! Enables spoke-spoke on redirection from hub
 ip nhrp shortcut

 ! If multiple DMVPN tunnels exist from the same interface
 tunnel key {num}

 ! IPsec
 tunnel protection ipsec profile {}

Dual-hub

Single cloud

Easier to configure, but less flexible since a single tunnel interface is used. Single-cloud configuration is generally not recommended according to the DMVPN design guide.

Primary hub is configured normally. The secondary hub is configured with ip nhrp nhs {addr} multicast to statically become a client of the primary.

Spoke configuration for dual-hub single DMVPN contains two ip nhrp nhs commands.

NHS clustering:
Next-hop servers can be prioritized and grouped into clusters to achieve the desired redundancy and load distribution. An NHS can be assigned a priority and optionally added to a cluster with ip nhrp nhs {ip} priority {0-255} [cluster {num}], where the lowest priority is considered best. ip nhrp nhs cluster {num} max-connections {num} can be used to limit the number of tunnels/next-hop servers in use.
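A minimal sketch of the clustering commands, with hypothetical NHS addresses and cluster values:

interface Tunnel0
 ! Two next-hop servers in cluster 1; the lowest priority value wins
 ip nhrp nhs 10.0.0.1 priority 1 cluster 1
 ip nhrp nhs 10.0.0.2 priority 2 cluster 1
 ! Keep at most one NHS/tunnel in cluster 1 active at a time
 ip nhrp nhs cluster 1 max-connections 1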

Dual cloud

More config to enter, but more routing flexibility; considered best practice.

Hubs are configured as usual, with two different subnets for the tunnel interfaces. Spokes are configured with two tunnel interfaces, one for each hub/DMVPN cloud.
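A rough sketch of the spoke side, assuming hypothetical overlay subnets (10.0.1.0/24, 10.0.2.0/24) and hub NBMA addresses (192.0.2.1, 203.0.113.1):

! One tunnel interface per hub/cloud: separate subnets, network-ids and keys
interface Tunnel1
 ip address 10.0.1.11 255.255.255.0
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 tunnel key 1
 ip nhrp network-id 1
 ip nhrp map 10.0.1.1 192.0.2.1
 ip nhrp map multicast 192.0.2.1
 ip nhrp nhs 10.0.1.1
 ip nhrp shortcut
!
interface Tunnel2
 ip address 10.0.2.11 255.255.255.0
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 tunnel key 2
 ip nhrp network-id 2
 ip nhrp map 10.0.2.1 203.0.113.1
 ip nhrp map multicast 203.0.113.1
 ip nhrp nhs 10.0.2.1
 ip nhrp shortcut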

Front-door VRF

When a front-door VRF is used, you can keep routing in the underlay separated from routing in the overlay. The main purpose of this is to allow default-routing to the hub with phase 3 DMVPN without conflicting with the default route to the internet (NBMA/underlay).

Without tunnel protection

  1. Add the tunnel-source interface to an FVRF with vrf forwarding {fvrf-name}.
  2. Configure the VRF to establish the tunnel through with tunnel vrf {fvrf-name} on the tunnel interface.
  3. Optionally add the tunnel interface to a VRF with vrf forwarding {vrf-name} (see the sketch below).
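A minimal sketch of the three steps, assuming a hypothetical FVRF named INTERNET:

vrf definition INTERNET
 address-family ipv4
!
interface GigabitEthernet1
 ! Step 1: underlay interface in the FVRF
 vrf forwarding INTERNET
 ip address 192.0.2.10 255.255.255.0
!
interface Tunnel0
 ip address 10.0.0.10 255.255.255.0
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 ! Step 2: establish the tunnel through the FVRF
 tunnel vrf INTERNET
 ! Step 3 (optional): place the overlay itself in a VRF
 ! vrf forwarding {vrf-name}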

With tunnel protection

IKEv1

crypto keyring {name} vrf {fvrf} ! This is the only differing configuration
 pre-shared-key address {network} {wildcard} key {key}
crypto isakmp policy {priority}
 encryption aes
 hash md5
 group {N}
 authentication pre-share
crypto ipsec transform-set {ts-name} esp-aes esp-sha-hmac
 mode transport
crypto ipsec profile {profile-name}
 set transform-set {ts-name}

IKEv2

crypto ikev2 policy {pol-name}
 match fvrf {fvrf}
crypto ikev2 profile {prof-name}
 match fvrf {fvrf}

Note: Cisco has no good documentation for this and the debug messages are useless. If you get stuck, use the ‘VRF-Aware IPsec’ config-guide and do your best to remember the rest.

Routing protocols

While most routing protocols are supported, EIGRP or BGP is highly recommended.

OSPF

Summarization is not supported, which limits scalability (without this beautiful hack, that is). Greater scalability can be achieved by using an NSSA for the DMVPN overlay network plus regular OSPF tuning. Configuring the DMVPN overlay area as an NSSA requires area 0 to exist somewhere else on the hub router.

Phase 2:
The broadcast network type must be used. All spokes must be configured with ip ospf priority 0 to remove them from DR election.

Phase 3:
The point-to-multipoint network type is supported and should be preferred (no DR election). NHRP redirects allow the use of the point-to-multipoint network type, as the advertising router (hub) will be the next hop for all routes. The interface settings for both phases are sketched below.
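A sketch of the network-type settings (interface and values hypothetical):

! Phase 2: broadcast network type, spokes removed from DR election
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 0 ! Spokes only, the hub must become DR

! Phase 3: point-to-multipoint, no DR election at all
interface Tunnel0
 ip ospf network point-to-multipoint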

Summarization:

  1. Using a total NSSA area for neighbors in the DMVPN overlay
  2. Adding a static default route to the hub on all spokes
  3. Filtering all outbound routes on the hub (sketched below).
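A rough sketch of the three steps, assuming area 1 is the DMVPN overlay area, 10.0.0.1 is the hub overlay address, and that ip ospf database-filter all out is what realizes step 3 (my assumption, not confirmed against the article):

! Hub (ABR): total NSSA for the overlay area
router ospf 1
 area 1 nssa no-summary
!
! Hub tunnel: stop flooding LSAs towards the spokes entirely
interface Tunnel0
 ip ospf database-filter all out
!
! Spokes: static default route towards the hub overlay address
ip route 0.0.0.0 0.0.0.0 10.0.0.1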

All credit for how to summarize OSPF in DMVPN goes to the author of this article. I had always accepted “no summarization with OSPF in DMVPN” as the absolute truth, and I am very impressed by his creativity! I really hope this isn’t implemented in production anywhere though…

EIGRP

Split-horizon and next-hop-self must be disabled whenever non-summary routes are used, as shown below.
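In classic mode both knobs are interface commands (AS 100 is hypothetical):

interface Tunnel0
 ! Allow routes learned from one spoke to be advertised to the others
 no ip split-horizon eigrp 100
 ! Keep the originating spoke as the next-hop
 no ip next-hop-self eigrp 100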

Multi-pathing:
The hub must be configured with add-paths and have all ECMP routes in the RIB with a variance of 1. no next-hop-self no-ecmp-mode should be used to ensure that the next hop is indeed kept intact for all routes when the hub router is multi-homed.
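A named-mode sketch of the multi-pathing knobs (names and AS number hypothetical):

router eigrp DMVPN
 address-family ipv4 unicast autonomous-system 100
  af-interface Tunnel0
   no split-horizon
   ! no-ecmp-mode keeps the real next-hop even when the hub has ECMP paths
   no next-hop-self no-ecmp-mode
  exit-af-interface
  topology base
   ! add-paths requires the ECMP routes in the RIB and variance 1 (the default)
   add-paths 2
  exit-af-topology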

BGP

BGP gives you great control over which routes are advertised to spokes.

Dynamic neighbors should be used, which means the ASN mustn’t match on the spoke routers. The DMVPN hub(s) should be route reflectors whenever a default route isn’t used.

iBGP leaves less room for error and should be preferred (simpler configuration, and AS_PATH loop prevention is left intact).

eBGP with default route:
Advertise networks “as usual”. next-hop-self and AS_PATH loop prevention can be left at their defaults. Routes from spokes are never advertised to another spoke but are instead installed as next-hop-override routes.

eBGP without summarization: no neighbor {} next-hop-self must be configured for spoke-spoke communication to occur. If spokes share an ASN, the loop prevention must be avoided through as-override or allowas-in. The hub should be a route reflector for scalability.

iBGP:
The hub needs to be a route reflector, as iBGP peers can’t readvertise iBGP-learned routes by default.
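A minimal hub sketch combining dynamic neighbors and route reflection (AS number and overlay subnet hypothetical):

router bgp 65000
 ! Accept iBGP sessions from any spoke in the overlay subnet
 bgp listen range 10.0.0.0/24 peer-group SPOKES
 neighbor SPOKES peer-group
 neighbor SPOKES remote-as 65000
 address-family ipv4
  neighbor SPOKES activate
  ! Reflect spoke routes back out to the other spokes
  neighbor SPOKES route-reflector-client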

Bidirectional Forwarding Detection

Likely outside the scope of the blueprint.

NHRP only reacts to BFD down events, greatly reducing detection time in non-crypto deployments (detection with crypto happens immediately when IPsec fails).

BFD for NHRP is achieved through regular BFD interface configuration (see 1.2 routing concepts - bidirectional forwarding detection (not yet published)).

CTS in DMVPN

Not explicitly mentioned in the blueprint, but could be relevant for Cisco SDA border handoff configuration.

This is my interpretation of a pretty vague configuration guide and should not be relied on as a source. I have not labbed this beyond verifying that cts sgt inline works for the SDA border handoff.

Inline CTS tagging can be achieved through IKEv2 or “regular” inline tagging; the latter is preferred in most cases. Configuring both on a single router results in double tagging.

IKEv2
When IKEv2 is used for CTS, the routers negotiate CTS capability (as a Vendor ID payload) during tunnel bring-up. If negotiation succeeds, the SGT values are added as an IPsec Cisco Meta Data (CMD) payload.

IKEv2-based CTS tagging must be configured with the crypto ikev2 cts sgt global command on all hubs + spokes. Fragmentation after IPsec encryption should be avoided as only the first fragment will be tagged. FlexVPN configuration is listed as not supported in the config-guide despite also using IKEv2. The config-guide seems to suggest that IKEv2-based CTS tagging only works in tunnel mode, but I have not verified that.

Regular tagging
All routers participating in the DMVPN must be configured with the cts sgt inline interface command. It supports all variations of DMVPN, except for running MPLS over it. Both variants are sketched below.
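Each variant boils down to a single command:

! IKEv2-based tagging, global on all hubs and spokes
crypto ikev2 cts sgt

! "Regular" inline tagging, per tunnel interface
interface Tunnel0
 cts sgt inline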

3.2.a Troubleshoot DMVPN Phase 3 with dual-hub

3.2.a i NHRP

The main function of NHRP is to maintain underlay/overlay mappings and handle queries/resolution for this.

AD: 250
Only ever used for next-hop-override routes in phase 3 DMVPN.

Metric: 255
The metric in NHRP is used for performing egress load-balancing (ECMP only by default). This can either be altered manually with ip nhrp path preference or configured to match the IGP metric. This is likely out of scope for the blueprint and not something I will prioritize.

Hold time: 7200s default, 600s recommended.
Configured with ip nhrp holdtime, defines how long the NHS will keep spoke mappings.
Registrations: sent every 1/3 of the hold-time by default. ip nhrp registration timeout defines how long a spoke will wait before re-registering with the hub.
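The timer knobs, using the recommended hold-time (registration value hypothetical):

interface Tunnel0
 ! How long the NHS keeps this spoke's registration
 ip nhrp holdtime 600
 ! How often the spoke re-registers, defaults to 1/3 of the hold-time
 ip nhrp registration timeout 200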

The NHRP timers result in a potentially awful convergence time; BFD is recommended to improve detection of the hub going down when not using IPsec. Routing-protocol convergence will likely happen before NHRP reacts, meaning the slow convergence can be a non-issue for dual-hub configurations.

ip nhrp interest can be used to limit which traffic will generate an NHRP resolution request, thus controlling which spokes a spoke can form tunnels with directly.

Messages

  • Registration request & reply:
    Register spoke overlay and underlay addresses + NHRP group.
  • Resolution request & reply:
    Query for overlay -> underlay mapping.
  • Redirect:
    Initiated by the hub to install next-hop-override routes on spokes.

Phase 2 & Phase 3 without summarization

  1. The next-hop address of a route points to a spoke router
  2. The sending spoke starts sending packets to the hub
  3. The spoke sends a resolution request to find the underlay address of the receiving spoke
  4. The hub responds with the overlay-underlay mapping for said spoke
  5. The sending spoke initiates a tunnel to the receiving spoke and stops forwarding traffic to the hub.

Phase 3 - With summarization

  1. Packets are sent to the hub according to the RIB on the spoke
  2. The hub forwards the packets to the receiving spoke
  3. The hub sends an NHRP redirect containing the most specific route in the hub RIB
  4. The sending spoke installs the route as a next-hop-override (H) route in the RIB
  5. The sending spoke initiates a tunnel to the receiving spoke and stops forwarding traffic to the hub.
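The resulting override routes can be inspected on the sending spoke:

! Next-hop-override (H/%) routes installed by NHRP
show ip route next-hop-override
show ip nhrp shortcut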

3.2.a ii IPsec/IKEv2 using pre-shared key

IKEv1 configuration

! Avoid using legacy 'crypto isakmp key' configuration.
crypto keyring {name} [vrf {vrf-name}]
 pre-shared-key address 0.0.0.0 0.0.0.0 key {password} 

crypto isakmp policy {priority}
 encryption aes
 authentication pre-share
 group N

! IPsec settings
crypto ipsec transform-set {ts-name} esp-aes esp-sha-hmac
 mode transport

crypto ipsec profile {profile-name}
 set transform-set {ts-name}

IKEv2 configuration

crypto ikev2 keyring {name}
 peer {name}
  address {}
  pre-shared-key {}

crypto ikev2 profile {name}
 keyring {name}
 authentication local pre-share
 authentication remote pre-share
 match address local {IP}
 match identity remote address {IP} {mask}

! TS config not strictly necessary, defaults are used if omitted.
crypto ipsec transform-set {ts-name} {transforms}
 mode transport

! IPsec configuration
crypto ipsec profile {name}
 set ikev2-profile {name} 

Practical note:
The tunnel interface will enter the “reset” state if the local IP is added to a static mapping and configured as an NHS while tunnel protection is in use. This is due to IKE attempting to establish an SA with itself, and it can easily happen if you copy-paste tunnel interface configuration between hubs to save time.

3.2.a iii Per-Tunnel QoS

Per-tunnel QoS is applied per-tunnel as the name suggests and is configured by mapping service-policies to NHRP groups.

Gotchas

  • Requires CEF
  • Egress direction only
  • All queueing and shaping happens after cryptography
  • MPLS only supported with 2547oDMVPN

NHRP Groups
Group membership is defined on the spoke routers and advertised to the hub in registration requests for hub-spoke traffic. Configuring a group does not automatically send a registration request and will require flapping the tunnel interface or waiting for a periodic registration.

For spoke-spoke per-tunnel QoS, the “Vendor Private Extension” is used to advertise group membership. The VPE is sent as part of the resolution request message and must be configured separately from the “regular” NHRP group with ip nhrp attribute {name}.

QoS considerations
Queuing and shaping happen after cryptography on the outbound interface of the router, taking the GRE, IPsec and L2 headers/trailer into account. Fair queuing isn’t recommended as it is based on the outer header, hence putting all traffic in a single queue. Along with per-tunnel QoS, a service-policy for the default class can be applied to the physical interface OR a subinterface; there are a bunch of caveats to this (see the docs if needed).

Configuration

! Spoke
interface tunnel {n}
 ip nhrp group {group-name}

! Hub
interface tunnel {n}
 ip nhrp map group {group-name} service-policy output {policy-name}

! Spoke, with configuration for spoke-spoke tunnel QoS.
interface tunnel {n}
 ip nhrp group {group-name}
 ip nhrp attribute {group-name} ! Does not have to match ip nhrp group name
 ip nhrp map group {group-name} service-policy output {policy-name}

Verification/Troubleshooting

! Show NHRP clients with associated groups 
show ip nhrp 

! Show NHRP group-mappings with list of spokes and service-policy name
show ip nhrp group

! Shows all group to policy mappings + which tunnels the QoS policies apply to
show ip nhrp group-map {name}

! QoS policy details for multipoint tunnel interface
show policy-map multipoint tunnel {n}

3.2.b Identify use-cases for FlexVPN

  • Unified way of configuring standards-based VPNs in IOS with IKEv2.
  • Dynamic interfaces use virtual-templates, which keeps configuration neat.
  • Allows asymmetric authentication (PKI on one side, pre-shared on the other).
  • Supports peer-policy for spoke-spoke (DMVPN)
  • QoS granularity is improved over legacy IKEv2 configuration

Components:

  • Proposal, settings related to protecting IKE_SA_INIT
  • Policy, matches a proposal to traffic.
  • Credential store, a keyring (referenced in the profile) or a PKI trustpoint.
  • Profile, ties the match criteria, authentication methods and credential store together.

FlexVPN defaults to the strongest supported parameters, which can be viewed (and used as a template!) with show crypto ikev2 {component} default.
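For example:

show crypto ikev2 proposal default
show crypto ikev2 policy default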

3.2.b i Site-to-site, Server, Client, Spoke-to-Spoke

3.2.b ii IPsec/IKEv2 using pre-shared key

3.2.b iii MPLS over FlexVPN

Reduces number of IPsec encrypted tunnels required for multiple VRFs while keeping traffic separated.

Troubleshooting

! DMVPN one-stop-shop
show dmvpn ! Quick DMVPN verification 
debug dmvpn {event|detail|all all}

! NHRP 
show ip nhrp 
show ip nhrp summary
debug nhrp *

! Tunnel interface
show int tunnel {}
debug int tunnel {}
debug tunnel

! Crypto 
! I seem to find most success going top-down with IPsec troubleshooting.
! Find the highest level that seems to function and start from there.
! IPsec -> IKEv1 -> ISAKMP -> Interface ~OR~ IPsec -> IKEv2 -> Interface

! IKEv1
show crypto ipsec sa
show crypto isakmp sa
show crypto isakmp policy

! IKEv2
show crypto ipsec sa
show crypto ikev2 sa

Study resources

The DMVPN section of the INE CCIE Enterprise Infrastructure learning track is a good starting point, though I wouldn’t rely on it as my only study source.

Books used, ranked by most value for time spent:

The CCIE Enterprise Infrastructure Foundation book by Narbik Kocharians hasn’t been released at the time of writing this, but I suspect it will also be a very good resource for the EI.

I have also used the IOS-XE 16.12.x configuration guide extensively.

Various links I’ve found useful:


Got feedback or a question?
Feel free to contact me at hello@torbjorn.dev