Building Your Own Containerlab Node Kinds 🛠️

2025-10-26 · Series: None · Tags: containerlab, vrnetlab, networking, automation

Over the last few weeks I’ve added several new node types to containerlab to cover my own lab needs. The process has been surprisingly straightforward after I got familiar with the codebase, and I figure others probably have devices they’d like to add too. This post will guide you through the basics of creating vrnetlab-based node kinds for containerlab - it’s easier than you might think.

I've contributed a handful of node kinds so far; the Cisco Catalyst 8000v and the Cisco SD-WAN components serve as the running examples throughout this post.

Vrnetlab

Vrnetlab enables us to run KVM virtual machines inside containers. It handles configuration loading, status/health checks, node networking, and more. The project layout is straightforward:

vrnetlab/
├── makefile-*.include       # Shared Makefile snippets for building node kinds
├── vrnetlab-base.dockerfile # Base container image with QEMU, networking tools, and Python
├── common/                  # Shared Python modules used by all node kinds
│   ├── healthcheck.py       # Container health check logic
│   └── vrnetlab.py          # Base VM class with QEMU management
├── cisco/                   # Cisco device implementations
│   ├── c8000v/
│   ├── csr1000v/
│   ├── sdwan-components/
│   ├── vios/
│   └── ...
├── juniper/                 # Juniper device implementations
├── nokia/                   # Nokia device implementations
└── ...

Each node kind directory follows the same structure:

vrnetlab/vendor/nodetype/
├── Makefile                 # Build configuration for this node kind
├── README.md                # Documentation for building and using this device
├── docker/
│   ├── Dockerfile           # Container image definition
│   └── launch.py            # Device-specific VM launcher

Common Python modules

The common/ directory contains the base VM class in vrnetlab.py. This handles QEMU process management, network interface creation, management IP configuration, serial console interaction, and config bootstrapping.

You extend this VM class and override methods for device-specific behavior. The base class handles interface generation, VM lifecycle management, and console interaction. The healthcheck.py file reports a node's current health by checking the contents of a /health file inside the container.
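
The health convention itself is tiny: the launcher writes a status line to /health, and healthcheck.py relays it to Docker. A paraphrased sketch of that logic (not the verbatim upstream file):

#!/usr/bin/env python3
# Paraphrased sketch: the launcher writes "<exit-code> <message>" to /health,
# and this script turns that into the container's health status.
import sys

try:
    with open("/health") as f:
        status, _, message = f.read().strip().partition(" ")
except FileNotFoundError:
    sys.exit(2)  # no /health file yet -> report unhealthy

print(message)
sys.exit(int(status))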

Makefiles

Each node kind has a Makefile that sets VENDOR, NAME, IMAGE_FORMAT, and IMAGE_GLOB and includes shared makefiles that handle the build process.

The three include files:

  • makefile.include - Core build targets (docker-image, docker-build-common, docker-push)
  • makefile-sanity.include - Validates registry URL and sets REGISTRY variable
  • makefile-install.include - Runs container with --install flag during build, lets launch.py do initial setup, then commits the result
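
Putting these together, a minimal Makefile is only a few lines. A sketch modeled on the existing node kinds (the version regex is illustrative and depends on your vendor's filenames):

VENDOR=MyVendor
NAME=mydevice
IMAGE_FORMAT=qcow2
IMAGE_GLOB=*.qcow2

# extract "1.2.3" from a filename like mydevice-1.2.3.qcow2
VERSION=$(shell echo $(IMAGE) | sed -e 's/mydevice-\(.*\)\.qcow2/\1/')

-include ../makefile-sanity.include
include ../makefile.include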

Drop your vendor image in the directory and run make. The most important targets and variables:

  • docker-image - Builds containers for all matching images
  • docker-build-common - Copies vendor image and common modules into build context
  • docker-clean-build - Cleans up after build
  • VERSION - A variable rather than a target: the version string is extracted from the image filename via regex

You rarely need custom build logic unless building multiple variants or handling non-standard filenames.

If you need multiple build variants, override docker-image. The standard version:

docker-image:
	for IMAGE in $(IMAGES); do \
		echo "Making $$IMAGE"; \
		$(MAKE) IMAGE=$$IMAGE docker-build; \
		$(MAKE) IMAGE=$$IMAGE docker-clean-build; \
	done

The C8000v override for reference (it builds both autonomous and controller mode variants):

docker-image:
	for IMAGE in $(IMAGES); do \
		echo "Building autonomous mode variant for $$IMAGE"; \
		$(MAKE) IMAGE=$$IMAGE MODE=autonomous docker-build; \
		$(MAKE) IMAGE=$$IMAGE docker-clean-build; \
		echo "Building controller mode variant for $$IMAGE"; \
		VER=$$($(MAKE) -s IMAGE=$$IMAGE print-version); \
		$(MAKE) IMAGE=$$IMAGE VERSION=controller-$$VER MODE=controller docker-build; \
		$(MAKE) IMAGE=$$IMAGE docker-clean-build; \
	done

The MODE variable gets passed to the Docker build as a build arg, which the launch.py script then uses to apply the appropriate boot parameters.
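
In launch.py that boils down to reading the environment variable and branching. A trivial sketch (the helper and the returned config line are hypothetical):

import os

def mode_specific_bootstrap_config(mode: str) -> str:
    """Extra bootstrap config per build variant (hypothetical example)."""
    if mode == "controller":
        return "controller-mode enable\n"
    return ""

# MODE is baked into the image as an ENV var by the Dockerfile below
extra_cfg = mode_specific_bootstrap_config(os.getenv("MODE", "autonomous"))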

Dockerfiles

The node Dockerfile is fairly simple since the base image does the heavy lifting. Here is the C8000v Dockerfile, for example:

FROM ghcr.io/srl-labs/vrnetlab-base:0.1.0

ARG VERSION
ENV VERSION=${VERSION}
ARG IMAGE
COPY $IMAGE* /
COPY *.py /

ARG MODE=autonomous
ENV MODE=${MODE}

EXPOSE 22 161/udp 830 5000 10000-10099
HEALTHCHECK CMD ["/healthcheck.py"]
ENTRYPOINT ["/launch.py"]

The vrnetlab-base image includes QEMU/KVM, networking tools, Python with uv, and the common modules. Copy in your vendor image and launch.py; the VERSION and IMAGE args come from the Makefile. All you need to do is add device-specific build args if needed and EXPOSE the ports for your management protocols (SSH, SNMP, NETCONF).

Launch.py

Your launch.py extends the base VM class and handles device-specific logic:

import os
import re

import vrnetlab

class MyDevice_vm(vrnetlab.VM):
    def __init__(self, username, password):
        # Find the vendor disk image that the Dockerfile copied into /
        disk_image = None
        for entry in sorted(os.listdir("/")):
            if re.search(r"\.qcow2$", entry):
                disk_image = "/" + entry

        # Initialize the parent VM class
        super().__init__(username, password, disk_image=disk_image, ram=4096)

        # Set device-specific parameters
        self.num_nics = 9
        self.nic_type = "virtio-net-pci"

        # Generate bootstrap configuration and pack it into a config image
        cfg = self.gen_bootstrap_config()
        self.create_config_image(cfg)

class MyDevice(vrnetlab.VR):
    def __init__(self, username, password):
        super().__init__(username, password)
        self.vms = [MyDevice_vm(username, password)]

Key methods:

__init__() - Configure your VM. Call super().__init__() with disk image and RAM. Set num_nics, nic_type (virtio-net-pci or e1000), and QEMU args. Generate bootstrap config and create config ISO.

gen_bootstrap_config() - Generate initial config (hostname, credentials, management IP, SSH/NETCONF). Use self.mgmt_address_ipv4/ipv6 and self.mgmt_gw_ipv4/ipv6 from the base class. Return as string.

bootstrap_spin() - Called repeatedly during boot. Monitor serial console with self.con_expect() and detect boot completion. Set self.running = True when ready.

create_config_image() (optional) - Create ISO with config using genisoimage. Some devices need config on virtual CD-ROM.
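
To make the last three concrete, here is a rough sketch of the two methods you will spend most of your time on. The config syntax is IOS-XE-flavored and purely illustrative, the console prompt string is something you discover by watching your device's serial console, and I'm assuming the base class keeps the username and password you passed to __init__:

def gen_bootstrap_config(self) -> str:
    # Illustrative IOS-XE-style config; adapt to your device's syntax.
    # mgmt_address_ipv4 comes from the base class as "addr/prefixlen".
    v4_addr, _, _ = self.mgmt_address_ipv4.partition("/")
    return (
        "hostname mydevice\n"
        f"username {self.username} privilege 15 secret {self.password}\n"
        "interface GigabitEthernet1\n"
        f" ip address {v4_addr} 255.255.255.0\n"
        " no shutdown\n"
    )

def bootstrap_spin(self):
    if self.spins > 300:
        # no sign of life after many spins: restart the VM and try again
        self.stop()
        self.start()
        return
    ridx, match, res = self.con_expect([b"Press RETURN to get started"])
    if match:
        self.logger.info("device is ready")
        self.running = True
    self.spins += 1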

Check the c8000v implementation for complex scenarios like install mode and build variants.

Workflow

The typical flow:

  1. Create directory structure - Make vendor/device/docker/ subdirectory
  2. Write Makefile - Copy similar device, update VENDOR/NAME/IMAGE_FORMAT/IMAGE_GLOB
  3. Write Dockerfile - Inherit vrnetlab-base, copy image and Python files, expose ports (5-10 lines)
  4. Write launch.py - Extend vrnetlab.VM, implement device logic. Start simple, add features later
  5. Test locally - Drop vendor image in directory, run make, test with containerlab topology
  6. Iterate - Tweak bootstrap_spin() expectations, QEMU params, bootstrap config

Troubleshooting

Debugging techniques:

Serial console - Telnet to port 5000 to see what’s happening:

docker run -d --name test-device --privileged -p 5000:5000 vrnetlab/device:version
telnet localhost 5000

Note that only one console session can be open at a time - this includes the Scrapli connection made to the node during setup.

Container logs - Watch boot progress:

docker logs -f test-device

Common issues (well, the ones I faced, at least):

  • Container never healthy - Wrong bootstrap_spin() expectations. Check serial console for actual boot messages
  • VM won’t boot - Check QEMU params (CPU type, RAM, disk interface). Copy from similar devices
  • Config not applied - Verify config ISO creation. Use docker cp to extract and inspect (see the example after this list)
  • Missing interfaces - Check num_nics and nic_type compatibility
  • Version extraction fails - Test Makefile regex with make version-test
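
For the config ISO case, extracting and listing it looks something like this (the /config.iso path is an assumption - check your launch.py for the actual filename):

docker cp test-device:/config.iso .
isoinfo -i config.iso -l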

Containerlab

After building your vrnetlab container, you'll need to make matching changes in containerlab. This part is Go code in the nodes/ directory:

containerlab/
├── nodes/
│   ├── node.go               # Node interface definition
│   ├── node_registry.go      # Registry for registering node kinds
│   ├── default_node.go       # Default implementation with common functionality
│   ├── c8000/                # Cisco 8000 node kind
│   │   ├── c8000.go          # Node implementation
│   │   └── c8000.cfg         # Default config template
│   ├── cisco_sdwan/          # SD-WAN components
│   ├── ceos/                 # Arista cEOS
│   └── ...

Node Interface

Embed DefaultNode and override key methods:

Init() - Set up container binds, environment variables, pre-flight checks

PreDeploy() - Generate config files before container creation. For vrnetlab, create startup config in lab directory for mounting

CheckInterfaceName() - Validate interface names match device expectations (e.g., Hu0_0_0_1 for c8000)

SaveConfig() - Pull running config via NETCONF or SSH
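
As an example of the interface check, a CheckInterfaceName implementation is usually just a regex over the node's endpoints. A sketch in the style of the existing kinds (the Endpoints field and GetIfaceName() are assumed from the patterns those kinds use; fmt and regexp imports omitted):

var c8000IfRe = regexp.MustCompile(`^Hu0_0_0_\d+$`)

func (n *c8000) CheckInterfaceName() error {
    for _, e := range n.Endpoints {
        if !c8000IfRe.MatchString(e.GetIfaceName()) {
            return fmt.Errorf("%q is not a valid interface name for %q",
                e.GetIfaceName(), n.Cfg.ShortName)
        }
    }
    return nil
}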

Node Registration

Register your node kind with credentials and platform attributes:

func Register(r *clabnodes.NodeRegistry) {
    defaultCredentials := clabnodes.NewCredentials("admin", "admin")

    platformOpts := &clabnodes.PlatformAttrs{
        ScrapliPlatformName: "cisco_iosxe",
        NapalmPlatformName:  "ios",
    }

    nrea := clabnodes.NewNodeRegistryEntryAttributes(defaultCredentials, nil, platformOpts)

    r.Register([]string{"c8000", "cisco_c8000"}, func() clabnodes.Node {
        return new(c8000)
    }, nrea)
}

Register with aliases, default credentials, and platform attributes for Scrapli/NAPALM.

Basic Node Implementation

Typical skeleton:

package mydevice

import (
    "context"
    _ "embed"
    "path/filepath"

    clabnodes "github.com/srl-labs/containerlab/nodes"
    clabtypes "github.com/srl-labs/containerlab/types"
)

var (
    kindnames = []string{"mydevice"}
    defaultCredentials = clabnodes.NewCredentials("admin", "admin")

    //go:embed mydevice.cfg
    cfgTemplate string
)

func Register(r *clabnodes.NodeRegistry) {
    nrea := clabnodes.NewNodeRegistryEntryAttributes(defaultCredentials, nil, nil)
    r.Register(kindnames, func() clabnodes.Node {
        return new(mydevice)
    }, nrea)
}

type mydevice struct {
    clabnodes.DefaultNode
}

func (n *mydevice) Init(cfg *clabtypes.NodeConfig, opts ...clabnodes.NodeOption) error {
    n.DefaultNode = *clabnodes.NewDefaultNode(n)
    n.Cfg = cfg

    for _, o := range opts {
        o(n)
    }

    // Mount startup config into the container
    n.Cfg.Binds = append(n.Cfg.Binds,
        filepath.Join(n.Cfg.LabDir, "startup.cfg") + ":/config/startup-config.cfg",
    )

    return nil
}

func (n *mydevice) PreDeploy(ctx context.Context, params *clabnodes.PreDeployParams) error {
    // Generate startup config file
    cfg := filepath.Join(n.Cfg.LabDir, "startup.cfg")
    return n.GenerateConfig(cfg, cfgTemplate)
}

Use //go:embed to embed the default config template into the binary.
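
The template itself is just device config run through Go's text/template. A trivial mydevice.cfg sketch, assuming the template is rendered with the node's config struct so fields like ShortName are available:

hostname {{ .ShortName }}
username admin privilege 15 secret admin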

Workflow

  1. Create package - Make nodes/mydevice/ directory
  2. Write Go code - Create mydevice.go with node struct, Init(), PreDeploy(), Register()
  3. Register - Add to core/register.go: import at top, call yourpackage.Register(c.Reg) in RegisterNodes()
  4. Build and test - Run make build, test with a topology file (see the sketch below)
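
For step 4, a minimal topology file is enough to smoke-test the new kind (the image tag is assumed from whatever your vrnetlab build produced):

name: mydevice-test

topology:
  nodes:
    r1:
      kind: mydevice
      image: vrnetlab/mydevice:1.0.0
    r2:
      kind: mydevice
      image: vrnetlab/mydevice:1.0.0
  links:
    - endpoints: ["r1:eth1", "r2:eth1"]

Deploy it with containerlab deploy -t mydevice.clab.yml and watch the nodes come up.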

The containerlab side is simpler - you're mostly preparing config files for mounting. The heavy lifting happens in vrnetlab's launch.py.

Summary

Creating a new containerlab node kind involves two parts: building the vrnetlab container and adding containerlab integration. The good news is you rarely start from scratch - just copy from a similar existing device and adapt it.

Vrnetlab side (Python):

  • Copy a similar device’s directory structure (e.g., copy cisco/csr1000v/ to cisco/yourdevice/)
  • Update the Makefile with your VENDOR, NAME, IMAGE_FORMAT, and IMAGE_GLOB
  • Copy and adapt the Dockerfile - usually just changing exposed ports
  • Copy a similar device’s launch.py and modify:
    • Adjust QEMU parameters (RAM, NIC count, NIC type)
    • Update the bootstrap config template to match your device’s syntax
    • Change the bootstrap_spin() expectations to match your device’s boot messages
  • Drop your vendor image in the directory and run make

Containerlab side (Go):

  • Copy an existing node package (e.g., nodes/csr1000v/ to nodes/yourdevice/)
  • Update the struct names, kindnames, and default credentials
  • Adjust the Init() method if you need different container binds
  • Modify the config template to match your device’s configuration syntax
  • Add your package to the registry in core/register.go:
    • Import your package at the top
    • Call yourpackage.Register(c.Reg) in the RegisterNodes() function
  • Run make build and test

The key is finding a similar device to copy from. For Cisco IOS-XE devices, copy from c8000v or csr1000v. For Juniper, copy from vjunosrouter. For Nokia, copy from sros. You’re not reinventing the wheel - you’re adapting existing patterns to your specific device. Most of your time will be spent tweaking QEMU parameters and getting the bootstrap config right, not writing boilerplate code.

Working with containerlab has been genuinely enjoyable. The codebase is well-structured and easy to navigate, and the maintainers are friendly and responsive. If you’ve got a device you wish was supported, give it a shot - it’s more approachable than it looks, and the community is always happy to help.

Have fun, and build something cool!


Got feedback or a question?
Feel free to contact me at hello@torbjorn.dev