Catalyst 8000v cEdge support in EVE-ng 🧭🥼

2023-09-28 · Series: None · Tags: SD-WAN, EVE-ng, cEdge

When you spend copious amounts of time labbing, it quickly becomes tedious to reset lab devices manually, especially when you have multiple config-sets for labbing different technologies and scenarios. This alone is enough reason to make EVE-ng the superior network labbing platform, with its startup-configs feature.

EVE-ng does, however, not support config import/export for all nodes out of the box. After resetting the cEdges in my lab to my base config N times, I thought to myself: “This is just a c8000v in a different mode, adding config export/import should be a quick fix.” I was wrong.

TL;DR

The EVE-ng template file

The first step in making/modifying an EVE-ng node is to understand the template file. All EVE-ng nodes are defined in template files under /opt/unetlab/html/templates and look similar to this:

---
type: qemu
config_script: config_csr1000v.py
prep: prep_c8000v.sh
cstart: -cdrom config.iso
name: C8K
description: Cisco Catalyst 8000v
cpulimit: 1
icon: CSRv1000.png
cpu: 2
ram: 4096
ethernet: 4
eth_format: Gi{1}
console: telnet
qemu_arch: x86_64
qemu_version: 4.1.0
qemu_nic: vmxnet3
qemu_options: -machine type=pc,accel=kvm -cpu host -serial mon:stdio -nographic -no-user-config -nodefaults -rtc base=utc
...

Source

The template file contains everything EVE-ng needs to know to deploy a node, both in terms of how it should be presented in the UI and how to spin up the image. We can break the template files into three portions:

Metadata options

name: C8K
description: Cisco Catalyst 8000v
icon: CSRv1000.png
eth_format: Gi{1}
console: telnet

These options all have to do with how you interact with the nodes. The name and description fields don’t need much explaining. The icon field references image files in /opt/unetlab/html/images/icons/. The eth_format field determines what names will be used in labels on links in the UI.
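My reading of the eth_format field (an assumption based on observed EVE-ng behaviour, not documented semantics) is that the braces hold the starting interface index and everything before them is a literal prefix. A quick sketch:

```python
def iface_names(eth_format: str, count: int) -> list:
    """Expand an EVE-ng eth_format string like 'Gi{1}' into interface labels.

    Assumption: the braces hold the starting index and the rest is a
    literal prefix; this mirrors observed EVE-ng behaviour.
    """
    prefix, _, start = eth_format.partition('{')
    first = int(start.rstrip('}'))
    return [f"{prefix}{i}" for i in range(first, first + count)]

print(iface_names("Gi{1}", 4))  # → ['Gi1', 'Gi2', 'Gi3', 'Gi4']
```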

The console field determines how you connect to the node. Your options here are telnet, vnc, rdp and rdp-tls. Your image must support console access over serial for telnet to work. RDP(-tls) requires that your node runs RDP. VNC works with any node that has a display output, but is lacking in features compared to RDP for graphical nodes and telnet for CLI-based nodes.

Platform options

type: qemu
cpulimit: 1
cpu: 2
ram: 4096
ethernet: 4
qemu_arch: x86_64
qemu_version: 4.1.0
qemu_nic: vmxnet3
qemu_options: -machine type=pc,accel=kvm -cpu host -serial mon:stdio -nographic -no-user-config -nodefaults -rtc base=utc

EVE-ng currently has two node types: QEMU and Docker. All standard nodes except the docker node type are QEMU nodes. The first four fields define the resources available to the nodes and are used for both node types. The remaining options are QEMU virtualisation specific. qemu_arch will almost always be x86_64 at this point. The qemu_nic should be set to vmxnet3 if your node accepts it. If you face issues with vmxnet3 you can try the legacy interface type “e1000”.

If you are starting from a blank canvas I would suggest setting qemu_version to the latest available on the system (6.0.0 at the time of writing). By choosing something recent you will both have more features available and your node will hopefully be supported longer without modifications. One thing to note regarding the qemu version is that the options change between versions. Your qemu 1.3.1 node is likely not to work as intended under qemu 6.0.0 without updating the qemu_options.

The qemu_options field defines which options should be appended to the qemu command used to run the node. You only need to put “non-standard” things in this field. EVE will mount any ISO named cdrom.iso and all QCOW2 images named virtioX.qcow2 in the image folder without this being specified in the qemu_options field.

Here is an example of what QEMU options EVE will run with an empty qemu_options:

-nographic -device vmxnet3,netdev=net0,mac=50:01:00:0d:00:00 -netdev tap,id=net0,ifname=vun003000100d00,script=no -device vmxnet3,netdev=net1,mac=50:01:00:0d:00:01 -netdev tap,id=net1,ifname=vun003000100d01,script=no -device vmxnet3,netdev=net2,mac=50:01:00:0d:00:02 -netdev tap,id=net2,ifname=vun003000100d02,script=no -device vmxnet3,netdev=net3,mac=50:01:00:0d:00:03 -netdev tap,id=net3,ifname=vun003000100d03,script=no -smp 2 -m 4096 -name C8K -uuid cd9df56e-4fb0-43d9-90f4-9aeb131f2f67 -qmp unix:./qmp-sock,server,nowait  -monitor unix:./mon-sock,server,nowait  -monitor unix:./mon2-sock,server,nowait  -drive file=virtioa.qcow2,if=virtio,bus=0,unit=0,cache=none

You can find all available options in the QEMU documentation. If you are having issues getting a node to run as intended with a qemu option, you can check the EVE logs with tail -n 200 -f /opt/unetlab/data/Logs/unl_wrapper.txt. You can also spin up your image in Virt-manager, add the devices/options you need and check which options it uses by running ps -ef | grep qemu-system-x86 (source).

Configuration options

config_script: config_csr1000v.py
cstart: -cdrom config.iso 
prep: prep_c8000v.sh

This is the fun/difficult part of the template.

config_script references a python script under /opt/unetlab/config_scripts/ used for importing and exporting configuration to/from the nodes. I will cover this in more depth under Exporting the config.

prep references scripts under /opt/unetlab/config_scripts/ used to execute preparatory tasks before booting from a startup-config. For the c8000v it is used to prepare the config.iso referenced in cstart. This will be covered further under Loading the config.

cstart contains qemu options to be added upon starting the node when loading from a new startup-config. In this case it mounts a CD containing the startup config in a file named iosxe_config.txt.

Autonomous vs Controller mode

The cEdge is the replacement for the old Viptela vEdge. In true Cisco fashion this means strapping the newly acquired features onto an existing platform. I do like the cEdge hardware better than the old vEdges, but the software feels less purpose-built and is rougher around the edges IMO.

Boot mode

The c8000v boots into autonomous mode by default and doesn’t have an SD-WAN-only image like the CSR1000v has. In the install and upgrade guide there are a few useful bits of information regarding boot modes. Mainly:

If the ciscosdwan.cfg or ciscosdwan_cloud_init.cfg bootstrap file is present in a plugged-in bootstrap location, a mode change to controller mode is initiated.

For software devices … use the bootstrap file ciscosdwan_cloud_init.cfg. This file has OTP but no UUID validation.

The following fields must be present in the ciscosdwan_cloud_init.cfg bootstrap file:

  • otp
  • uuid
  • vbond
  • org

After a bit of trial and error I discovered that this is the most minimal bootstrap file that causes the device to boot into controller mode:

#cloud-config
vinitparam:
 - uuid : C8K-00000000-0000-0000-0000-000000000000
 - otp : 00000000000000000000000000000000
 - vbond : 203.0.113.1
 - org : eve-lab

At this point I just added this config file to the virtioa.qcow2 image to have a working cEdge image available. This proved to be an issue later, as the bootstrap file location has an order of precedence and the file is loaded on a “first-match” basis. If I added it to the bootflash the device would never attempt to load it from USB or CDROM.

Configuration

Autonomous mode offers the classic Cisco CLI that we all know and love. You enter configuration mode with “configure terminal” and you’re good to go. You can fetch the config by running show running-config or more system:running-config, and you can load it onto the device by replacing the startup-config file on the bootflash or using an iosxe_config.txt file.

In controller mode you enter configuration mode with config-transaction, which gives you a viptela-ish CLI to bootstrap your device with. If you attempt to fetch the config with show running-config you will only be presented with the “regular” IOS-XE part of the config. To get the full config you need to run show sdwan running-config.

The good old flash:/startup-config file has been replaced and it is no longer as easy as copying in the config file to load it*. Loading a startup configuration onto the c8000v in controller mode must hence either be done through the CLI or with a ciscosdwan_cloud_init.cfg bootstrap file.

* I can’t seem to find any good documentation on this; I have at least not been able to locate any such file. Please do correct me if this is wrong.

Dissecting the bootstrap file

The ciscosdwan_cloud_init.cfg file is a cloud-init multipart MIME archive containing two configuration files. This is usually generated by vManage and contains the information required to bootstrap the router.

The two config files in the MIME multipart file are named cloud-config and cloud-boothook. Cloud-config allows you to define common configuration items in YAML format. The cloud-boothook is run immediately after boot and usually contains a configuration script. For the c8000v it contains the full device configuration.

Example simplified ciscosdwan_cloud_init.cfg file:

Content-Type: multipart/mixed; boundary="===============1234567890101112131=="
MIME-Version: 1.0

--===============1234567890101112131==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="tmp9gvpskn3"

#cloud-config
vinitparam:
 - uuid : C8K-AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE
 - otp : 00000000000000000000000000000000
 - vbond : 203.0.113.1
 - org : eve-ng
 - rcc : true
ca-certs:
  remove-defaults: false
  trusted:
  - |
   -----BEGIN CERTIFICATE-----
   Redacted 
   -----END CERTIFICATE-----
   ....

--===============1234567890101112131==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="config-C8K-AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE.txt"

#cloud-boothook
  { device configuration }

--===============1234567890101112131==--

When we wish to load a vManage-created ciscosdwan_cloud_init.cfg we can simply add the file without any modifications. To load our arbitrary configurations fetched from devices we will need to generate the bootstrap file ourselves. The proper way to do this would be to use cloud-init make-mime, but that would require us to install some dependencies. I hence opted to simply insert the whole scraped config into the second-to-last line of the stripped-down cloud-init MIME file below using sed.

Content-Type: multipart/mixed; boundary="==============u7fnxr6d=============="
MIME-Version: 1.0
--==============u7fnxr6d==============
#cloud-config
vinitparam:
 - uuid : C8K-00000000-0000-0000-0000-000000000000
 - otp : 00000000000000000000000000000000
 - vbond : 203.0.113.1
 - org : eve-lab
 - rcc : true
ca-certs:
  remove-defaults: false
--==============u7fnxr6d==============
#cloud-boothook
--==============u7fnxr6d==============--
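Since cloud-init make-mime only assembles a standard MIME multipart, an equivalent archive could also be built with Python's standard library email package and no extra dependencies. This is just a sketch, not what my prep script does, and the part contents are illustrative:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

cloud_config = """#cloud-config
vinitparam:
 - uuid : C8K-00000000-0000-0000-0000-000000000000
 - otp : 00000000000000000000000000000000
 - vbond : 203.0.113.1
 - org : eve-lab
"""
boothook = "#cloud-boothook\nsystem\nhostname Router\n"

# Attach each part with its cloud-init subtype, yielding
# text/cloud-config and text/cloud-boothook parts inside multipart/mixed.
archive = MIMEMultipart()
for body, subtype in ((cloud_config, "cloud-config"), (boothook, "cloud-boothook")):
    archive.attach(MIMEText(body, subtype))

print(archive.as_string())
```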

Exporting the config

Configuration export is handled by the config_script specified in the template file. These scripts interact with the CLI of the devices to fetch the config automatically. The scripts take a few inputs when executed: action (get or put), file and port number. To test a configuration export manually you can run {config_script} -a get -f {file} -p {portnumber}.

Since the scripts all assume that the devices are available on a port on localhost you will have to be creative to test them locally. I found the easiest way to test the scripts from my laptop to be SSH port forwarding the device ports on my eve server to localhost. This can be done with ssh -L 127.0.0.1:{local port}:127.0.0.1:{eve-device-port} {your-eve-server}.

To avoid reinventing the wheel I chose to modify the csr1000v config script instead of starting from scratch. It covers 95% of what we need for the c8000v in controller mode. This script is a python “expect script” that reads the terminal output and makes decisions based on the characters observed. The main parts we need to alter are the config scraping itself and the logic for how to handle different prompts.

In the prompt handling logic we needed to accommodate a few differences:

  • Controller mode doesn’t have an “initial configuration mode”
  • Controller mode has a prompt to save uncommitted changes when leaving config mode.
  • You are forced to set a password upon first login

They are all solved in a very similar fashion in the node_login function. In the root of the function I removed the handling of “initial configuration” mode and added the option for uncommitted changes found. The outcome of the check is then used in a conditional which determines how to proceed. These are the relevant lines from the script for handling the “uncommitted changes” prompt.

try:
    handler.sendline('\r\n')
    i = handler.expect([
        'Username:',
        '\(config',
        'Uncommitted changes found',
        '>',
        '#'], timeout = 5)
except:
    i = -1

if i == 1:
    # .... redacted for brevity
elif i == 2:
    handler.sendline('no')
    try:
        handler.expect('#')
    except:
        print('ERROR: error waiting for "#" prompt after not committing changes.')
        node_quit(handler)
        return False
    return True

The code that fetches the config needed some changes to the command executed and the processing of the output. Changing the command itself was a quick fix. Filtering the output using regex substitution should have been simple, but it turned out to be a challenge to match lines of output while in regex multiline mode. Using the pythex website with multiline and dotall enabled made testing different regex expressions much simpler.

def config_get(handler):
    # Redacted for brevity 

    # Getting the config
    handler.sendline('show sdwan running-config')
    try:
        handler.expect('#', timeout = longtimeout)
    except:
        print('ERROR: error waiting for "#" prompt.')
        node_quit(handler)
        return False
    config = handler.before.decode()
    
    # Manipulating the config
    config = re.sub('\r', '', config, flags=re.DOTALL)                                              # Unix style
    config = re.sub('.*show sdwan running-config\n', '', config, flags=re.DOTALL)                   # Header
    config = re.sub('\n\*.{20,22}%SEC_LOGIN-5-LOGIN_SUCCESS.*?\n', '', config, flags=re.DOTALL)     # Login log-message
    config = re.sub('\n(?!.*\n).*', '', config, flags=re.DOTALL)                                    # Footer
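The effect of these substitutions can be sanity-checked against a small fabricated capture. The sample text below is illustrative, mimicking what pexpect stores in handler.before: the echoed command, the config body and a trailing prompt:

```python
import re

# Hypothetical sample of a scraped session: echoed command,
# config body with CRLF line endings, and a trailing prompt line.
raw = ("show sdwan running-config\r\n"
       "system\r\n"
       " host-name Router\r\n"
       "Router#")

config = raw
config = re.sub('\r', '', config, flags=re.DOTALL)                            # Unix style
config = re.sub('.*show sdwan running-config\n', '', config, flags=re.DOTALL) # Header
config = re.sub('\n(?!.*\n).*', '', config, flags=re.DOTALL)                  # Footer

print(config)  # → system\n host-name Router
```

The footer pattern works because the negative lookahead `(?!.*\n)` only succeeds at the last newline in the string, so the final (prompt) line is stripped.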

Loading the config

At this point most of the hard work is done. We have figured out how to generate the bootstrap config and how to mount it to the VM. What remains is the “glue” that allows this to function as intended from the EVE UI.

Necessary alterations

For the controller mode c8000v to accept any config it needs the system configuration line and a valid authentication configuration. EVE also needs a set of valid credentials to be able to log in and fetch the config automatically. We need to accommodate this in two places: the default startup config and the prep script.

The minimal config for our startup medium thus looks like this:

system
hostname Router
username admin privilege 15 secret admin
username eveconfigscraper privilege 15 secret eveconfigscraper
aaa authentication enable default enable
aaa authentication login default local
aaa authorization console
aaa authorization exec default local
login on-success log

By injecting the config scraper user in the prep script the user will always be included regardless of the loaded startup config. We can achieve this in the prep script using sed:

if ! grep -q "username eveconfigscraper" $1/startup-config; then 
	sed -i '0,/username \w/s//username eveconfigscraper privilege 15 secret eveconfigscraper\n&/' $1/startup-config 
fi
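The same injection can be prototyped in Python before committing to the sed one-liner. This sketch operates on an inline sample config (the real prep script edits the startup-config file in place):

```python
import re

# Hypothetical sample startup-config.
startup = """system
hostname Router
username admin privilege 15 secret admin
"""

user = "username eveconfigscraper privilege 15 secret eveconfigscraper"

# Mirror the sed logic: if the scraper user is absent, insert it
# immediately before the first existing "username <word>" line.
if "username eveconfigscraper" not in startup:
    startup = re.sub(r'(username \w)', user + '\n' + r'\1', startup, count=1)

print(startup)
```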

Generating & handling the bootstrap medium

The ciscosdwan_cloud_init.cfg bootstrap file serves two purposes for our c8000vcm node: it enables it to boot directly into controller mode and it allows us to feed it a config. For our image to work as intended we hence need the bootstrap file to always be present.

Using an ISO turned out to be the most practical way to generate and mount the bootstrap config file. Generating our default boot medium with our minimal config from earlier can be achieved with a simple bash script:

# Create minimal cloud init config to make router boot 
cat << EOF > ciscosdwan_cloud_init.cfg
Content-Type: multipart/mixed; boundary="==============u7fnxr6d=============="
MIME-Version: 1.0
--==============u7fnxr6d==============
#cloud-config
vinitparam:
 - uuid : C8K-00000000-0000-0000-0000-000000000000
 - otp : 00000000000000000000000000000000
 - vbond : 0.0.0.0
 - org : null
--==============u7fnxr6d==============
#cloud-boothook
system
hostname Router
username admin privilege 15 secret admin
username eveconfigscraper privilege 15 secret eveconfigscraper
aaa authentication enable default enable
aaa authentication login default local
aaa authorization console
aaa authorization exec default local
login on-success log
--==============u7fnxr6d==============--
EOF

# Generate iso file containing config file 
mkisofs -o config.iso -l --iso-level 2 ciscosdwan_cloud_init.cfg
rm ciscosdwan_cloud_init.cfg

Our prep script needs to accommodate both exported configs and complete bootstrap files. We can determine how to handle the startup-config by checking for the presence of the “MIME-Version:” string. While this check is not completely foolproof, I think it will suffice.

Using our previous findings we can finally put together our completed prep script:

# Insert eve-ng configscraper user
if ! grep -q "username eveconfigscraper" $1/startup-config; then 
	sed -i '0,/username \w/s//username eveconfigscraper privilege 15 secret eveconfigscraper\n&/' $1/startup-config 
fi

# Check if this is a bootstrap config or a config fetched from the device directly
if grep -q "MIME-Version:" $1/startup-config; then
	cat $1/startup-config > $1/ciscosdwan_cloud_init.cfg 		
else
	# Populate config file with minimal contents to be valid
	cat <<- EOF > $1/ciscosdwan_cloud_init.cfg
	Content-Type: multipart/mixed; boundary="===================================="
	MIME-Version: 1.0
	--====================================
	#cloud-config
	vinitparam:
	 - uuid : C8K-00000000-0000-0000-0000-000000000000
	 - otp : 00000000000000000000000000000000
	 - vbond : 203.0.113.1
	 - org : eve-lab
	 - rcc : true
	ca-certs:
	  remove-defaults: false
	--====================================
	#cloud-boothook
	--====================================--
	EOF
	
	# Indent contents of startup-config and insert before last line
	sed -i 's/^/  /g' $1/startup-config
	sed -i -e "\$e cat ${1}/startup-config" $1/ciscosdwan_cloud_init.cfg
fi
 
# Generate ISO
mkisofs -o $1/config.iso -l --iso-level 2 $1/ciscosdwan_cloud_init.cfg

I discovered some odd behaviour when making EVE mount ISOs by naming them cdrom.iso: EVE will always load the file from its original path under /opt/unetlab/addons/qemu/* despite also copying it to the node directory upon node startup. Because of this I ended up naming the boot file config.iso and mounting it statically (not using cstart) using qemu_options in the template file.

With this change our final template file looks like this:

---
type: qemu
config_script: config_c8000vcm.py
prep: prep_c8000vcm.sh
name: C8K-CM
description: Cisco Catalyst 8000v CM (cEdge)
cpulimit: 1
icon: cEdge.png
cpu: 2
ram: 4096
ethernet: 4
eth_format: Gi{1}
console: telnet
qemu_arch: x86_64
qemu_version: 4.1.0
qemu_nic: vmxnet3
qemu_options: -machine type=pc,accel=kvm -cpu host -serial mon:stdio -nographic -no-user-config -nodefaults -rtc base=utc -drive file=config.iso,if=ide,index=1,media=cdrom
...

Verifying config load

The put action of the config_script is run upon config load in EVE to load configs and determine node status. In our case it isn’t involved in loading the config at all, since that is handled by the bootstrap file. We determine the outcome/node status by checking for the log entries vip-bootstrap: All daemons up or vip-bootstrap: Error extracting config.

def config_put(handler):
    try:
        i = handler.expect(['%IOSXE-5-PLATFORM: R0/0: vip-bootstrap: All daemons up',
            '%IOSXE-3-PLATFORM: R0/0: vip-bootstrap: Error extracting config'], timeout)
    except:
        return False
    return i == 0

Summary

This was a fun sidetrack to wander off on, even though it took far longer than expected. As with all tools it is nice to know the details of how it functions. This turned out to be a good exercise in learning how images and nodes actually work in EVE-ng.

Now that I am familiar with the process I think I will add support for a few other nodes (vManage). If I somehow find the time for it, that is…

Final note: Since Cisco opted to use cloud-init with MIME archives, could you consider the cEdge to be configurable by email?

Installation

This process requires that you have already created a regular C8000v image according to the EVE-ng guide.

1. Copy the Catalyst 8000v folder to a new c8000vcm folder.

cd ~
cp -r /opt/unetlab/addons/qemu/c8000v-{version} /opt/unetlab/addons/qemu/c8000vcm-{version}

2. Clone the git repository and run install.sh.

git clone https://github.com/torbbang/eve-ng_c8000vcm c8000vcm
cd c8000vcm 
chmod +x install.sh
./install.sh

3. Copy the base config ISO into your new EVE-ng image directory from step 1.

cp config.iso /opt/unetlab/addons/qemu/c8000vcm-{version}

4. Fix permissions for the added files.

# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions

5. Enjoy!


NOTE: I have since added support for exporting root certificates, the functionality will hence differ from what is explained here.



Got feedback or a question?
Feel free to contact me at hello@torbjorn.dev