Custom Fields on Netbox Interfaces

I’ve been playing with Netbox lately as an SSOT (Single Source of Truth) for some automation tasks I’m looking at. The problem is I need more data wrapped around Interfaces, i.e. on anything layer 2 I’d like to know things like Storm Control settings, PortFast etc.

There have been more than a few feature requests for this on the GitHub site, but the devs have decided this isn’t really in the Netbox remit – which is understandable, it’s their software after all.

2.8 introduced the Plugin architecture, so what I’m trying to achieve may be possible with that, but examples are fairly thin on the ground, and I was more interested in how difficult this would be to apply to Interface objects.

Also note – I’m not a fan of hacking OSS in ways that then make it more difficult to update.

*** CAVEAT *** – this is a hack, not all functionality is tested and it may break many other things – use at your own risk. I am *not* responsible for any/all breakages.

So, how to do this:

Add in Custom Field functionality to Interface Objects

In: netbox/dcim/models/


Add the import:

from extras.models import CustomFieldModel

Then change:

@extras_features('graphs', 'export_templates', 'webhooks')
class Interface(CableTermination, ComponentModel):

to:

@extras_features('graphs', 'export_templates', 'webhooks', 'custom_fields')
class Interface(CableTermination, ComponentModel, CustomFieldModel):

In the Interface class, add the field below (the arguments shown mirror the other custom-field-enabled models in 2.8 – import GenericRelation from django.contrib.contenttypes.fields if it isn’t already imported in that file):

custom_field_values = GenericRelation(
    to='extras.CustomFieldValue',
    content_type_field='obj_type',
    object_id_field='obj_id'
)

Restarting Netbox and going to the admin page should now give us:

So let’s add a test one:

Add to View/Edit Fields

In: netbox/netbox/templates/dcim/interface.html
{% include 'inc/custom_fields_panel.html' with obj=interface %}
{% include 'extras/inc/tags_panel.html' with tags=interface.tags.all %}

In: netbox/netbox/templates/dcim/interface_edit.html

add the below, after the closing </div> of the existing panel-default block:

{% if form.custom_fields %}
        <div class="panel panel-default">
            <div class="panel-heading"><strong>Custom Fields</strong></div>
            <div class="panel-body">
                {% render_custom_fields form %}
            </div>
        </div>
    {% endif %}

In: /opt/netbox/netbox/dcim/

Change:

class InterfaceForm(InterfaceCommonForm, BootstrapMixin, forms.ModelForm, CustomFieldModelForm):

to:

class InterfaceForm(InterfaceCommonForm, BootstrapMixin, CustomFieldModelForm):

After restarting Netbox, we should now be able to view/edit our custom fields:



Showing this in the API

In: /opt/netbox/netbox/dcim/api/

Change:

class InterfaceViewSet(CableTraceMixin, ModelViewSet):

to:

class InterfaceViewSet(CableTraceMixin, CustomFieldModelViewSet):

In: /opt/netbox/netbox/dcim/api/

Change:

class InterfaceSerializer(TaggitSerializer, ConnectedEndpointSerializer):

to:

class InterfaceSerializer(TaggitSerializer, ConnectedEndpointSerializer, CustomFieldModelSerializer):

In the Meta class of InterfaceSerializer, add 'custom_fields' to the fields = [ ... ] list.

Now testing using the API:
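As a quick sanity check, a request like the below (the host, token and interface ID here are placeholders) should now return a custom_fields object alongside the usual interface data:

curl -s -H "Authorization: Token $NETBOX_TOKEN" \
     -H "Accept: application/json" \
     https://netbox.example.local/api/dcim/interfaces/1/ | python3 -m json.tool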

*** CAVEAT *** – this is a hack, not all functionality is tested and it may break many other things – use at your own risk. I am *not* responsible for any/all breakages.

It’d be nice for this to be conditional (i.e. an L3 interface gets a specific set of custom fields), but something similar is possible using an l2/l3 custom field…

Thoughts – revising our DNS Control Panel

Ten years ago, give or take, I wrote a DNS control panel for Bind. We ended up rolling it out a little internally and ultimately it saved us a shedload of time – it’s still just about there 10 years later.

Now that I’m part of something bigger, redesigning it is not something I’d ever get to do (sadly!) – but it recently came up in conversation: if I were doing this now, how would I do it? This post is just some ideas brainstormed out.

The Old System

At the time, Bind-DLZ was a little ropey (pretty sure it’s dead now) and I wasn’t convinced by it; I’m not sure PDNS – certainly PDNS with a MySQL backend – existed yet, so the old system went old school.

Front End – LAMP, which made changes to a central database, with a button named ‘push’ – think of this as a commit – which then pushed an entry into a table defining that job:

 mysql> describe job_definitions;
+-----------------+--------------+------+-----+---------+----------------+
| Field           | Type         | Null | Key | Default | Extra          |
+-----------------+--------------+------+-----+---------+----------------+
| id              | int(11)      | NO   | PRI | NULL    | auto_increment |
| hostname        | varchar(254) | NO   |     |         |                |
| type            | varchar(30)  | NO   |     |         |                |
| command         | varchar(30)  | NO   |     |         |                |
| zone_id         | int(11)      | NO   |     | 0       |                |
| status          | int(11)      | YES  |     | NULL    |                |
| updated         | int(11)      | YES  |     | NULL    |                |
| requester_email | varchar(254) | YES  |     | NULL    |                |
+-----------------+--------------+------+-----+---------+----------------+

So basically, which server (hostname) to run on (primary or secondary), the command (CRUD) and a status.

Backend – cron jobs (PHP scripts)

The two DNS servers then polled the jobs table via cron and actioned anything pending – essentially writing out text files: zonedef files (the Bind zone definitions) and/or the zone file itself on the primary, relying on AXFR to get the data to the secondary after an rndc reconfig.
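To make that concrete, a sketch of the flow with hypothetical values – the ‘push’ button queues a job row, and each server’s cron poller picks up rows addressed to it:

-- front end queues a job (values are hypothetical)
INSERT INTO job_definitions (hostname, type, command, zone_id, status, updated, requester_email)
VALUES ('ns1.example.net', 'zone', 'create', 42, 0, UNIX_TIMESTAMP(), 'noc@example.net');

-- each DNS server's cron job polls for its own pending work
SELECT * FROM job_definitions WHERE hostname = 'ns1.example.net' AND status = 0;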

Issues with that design

There are a couple of design flaws with that. Specifically, we have to allow our DNS servers access to the database – I don’t like that; yes, we’ve got MySQL permissions, but I don’t like increasing the attack surface should one server be compromised. Let’s face it, Bind hasn’t had the best history – though to be fair, which major deployed bit of software has?

How would I do this now ?

I don’t claim to know every bit of software out there, but more than likely it’d be a hidden master – specifically a PDNS-MySQL master – because it has its own API, and there’s ready-built software such as PowerAdmin to help us get the front end, or at least something we can extend.

For the ‘live’ servers, I’d try and use 2 vendors, but if we were using bind9, rather than write out zonedefs I’d generate a quick CI-based API which would essentially use rndc:

rndc addzone  '{type slave; masters {; }; file "/etc/bind/";};'
rndc reconfig
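Filled in with hypothetical values (example.net as the zone, 192.0.2.1 as the hidden master), that would look something like:

rndc addzone example.net '{ type slave; masters { 192.0.2.1; }; file "/etc/bind/db.example.net"; };'
rndc reconfig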

Using this method we remove (other than open port 53 and a restricted API) access to any databases. As a secondary test, I’d also add a record to all zones, something like the below:

isp-test IN A

Because then we can run scripts to test resolving that record – we always use serials to check zone versions, but the above allows us to test a successful resolve as well.
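The check itself is then just a dig and a compare – the hostname and server address below are hypothetical:

dig +short isp-test.example.net A @192.0.2.53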

Might quickly pull something together to test the theory, I do miss doing stuff like this.

Semi-Automating our labs – Connecting JunOS to Ansible

It occurred to me that there is essentially quite a lot of repetition within our labs, and as such if we rebuild we’re creating the same tasks over and over again.

Step forward: automation, or specifically ansible.

To do this, we’re not looking to ‘fully’ automate at this point. I’m still picking up ansible, hence our ansible setup will grow with the blog – but being able to get some basics present would be useful.

Tweaking the ESXi Environment

  • We’re going to add a new port-group ‘Management’, using our WAN vSwitch – with the port-group assigned vlan 200.
  • Routers, R1 and R2 (as seen in the ipsla lab) have their 1st NIC moved into the Management port-group we’ve just created.

This gives us an out-of-band vlan, which in the real world would be connected via some level of console server.

Building our Ansible Server

The basic build is essentially a simple linux install:

  • Ubuntu 18.04 LTS Server
  • 16Gb Disk
  • 2 Gb RAM
  • 2x NICs – the first in our ‘Management’ vlan, the second in a NATted vlan for updates

Once we’ve gotten through the build, we need to install ansible, and our ansible JunOS modules:

  • apt-get install ansible
  • apt-get install python-pip
  • apt-get install sshpass
  • apt-get install udhcpd
  • pip install ncclient
  • pip install junos-eznc
  • ansible-galaxy install Juniper.junos
  • pip install juniper-netconify

(the latter not strictly necessary but useful for netconf && opengear)

Connecting the dots – Ansible talks to R1

Configuring our Ansible Server

At this point we need to get the Ansible server able to talk to R1. To do this, firstly we’ll edit our netplan file (/etc/netplan/50-cloud-init.yaml) to configure our ‘Management’ interface (ens33 in our case) and re-apply netplan.

            addresses: []
            dhcp4: false
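As a fuller sketch (the 192.168.200.0/24 addressing here is hypothetical – use whatever you’ve assigned to vlan 200), the relevant part of 50-cloud-init.yaml ends up looking something like this, applied with netplan apply:

network:
    version: 2
    ethernets:
        ens33:
            dhcp4: false
            addresses: [192.168.200.1/24]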

Next, we need (going forward) to assign a static address to R1 via udhcpd, by adding the following lines to /etc/udhcpd.conf – the MAC address we find from ESXi’s NIC1 MAC under the virtual machine settings.

start	#default:
end	#default:
static_lease 00:0C:29:B3:6F:39
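Filled out with the same hypothetical 192.168.200.0/24 addressing, that section of /etc/udhcpd.conf might look like:

interface ens33
start 192.168.200.100
end 192.168.200.199
static_lease 00:0C:29:B3:6F:39 192.168.200.10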

Configuring R1

I said at the beginning, this is not zero-touch provisioning – we do need to put some management config onto R1 after running request system zeroize to wipe the configuration. Specifically, we set the hostname, the root password, the management VRF, DHCP on fxp0, enable netconf and ssh, and finally create an ansible user.

  • set system host-name local-r1
  • set system root-authentication plain-text-password (password1)
  • set system management-interface
  • set interfaces fxp0.0 family inet dhcp
  • set system services netconf ssh
  • set system login user ansible class super-user authentication plain-text-password (ansible1)
  • commit

Verifying R1 has an address

ansible@local-r1> show interfaces fxp0.0
Logical interface fxp0.0 (Index 7) (SNMP ifIndex 13)
    Flags: Up SNMP-Traps 0x4000000 Encapsulation: ENET2
    Input packets : 702
    Output packets: 538
    Protocol inet, MTU: 1500
    Max nh cache: 100000, New hold nh limit: 100000, Curr nh cnt: 1,
    Curr new hold cnt: 0, NH drop cnt: 0
      Flags: Sendbcast-pkt-to-re, Is-Primary
      Addresses, Flags: Is-Default Is-Preferred Is-Primary
        Destination: 192.168.0/24, Local:, Broadcast:

Testing Connectivity from Ansible to R1

To test Ansible talking to R1, we need to SSH in from the Ansible server first to get around strict host key checking; plus we’ll be using a password in this example.
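In other words, a one-off manual login so the host key lands in known_hosts – R1’s address here is the hypothetical static lease from earlier:

ssh ansible@192.168.200.10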

We’ll need to create a directory tree similar to the below

labs# tree 
└── eem-lab
    ├── apply_common.yaml
    ├── group_vars
    │   └── eem_lab
    └── host_vars
        └── eem-lab-r1

Into group_vars/eem_lab we’ll add our standard username/password for our lab group:

ansible_user: ansible
ansible_ssh_pass: ansible1

Into host_vars/eem-lab-r1 we map our name to our IP:
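Something like the below – again, the address is the hypothetical static lease we handed out via udhcpd:

ansible_host: 192.168.200.10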


Finally, into /etc/ansible/hosts we add a section for our eem-lab:
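A minimal sketch of that section:

[eem_lab]
eem-lab-r1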


Ansible now knows that for hosts under eem_lab it should use the ansible user and password, and also that eem-lab-r1 maps to its management address.

Our First Command

eem-lab# ansible eem_lab -m raw -a "show system uptime"

eem-lab-r1 | SUCCESS | rc=0 >>
Current time: 2019-02-11 23:36:50 UTC
Time Source:  LOCAL CLOCK 
System booted: 2019-02-11 20:38:06 UTC (02:58:44 ago)
Protocols started: 2019-02-11 20:43:30 UTC (02:53:20 ago)
Last configured: 2019-02-11 21:17:33 UTC (02:19:17 ago) by root
11:36PM  up 2:59, 2 users, load averages: 0.57, 0.92, 0.83
Shared connection to closed.

Our First Playbook

Our test playbook will run two commands to set packet mode on the device, so let’s create the file apply_common.yaml:

- name: Apply Common Settings
  hosts: eem-lab-r1
  connection: local
  gather_facts: no

  tasks:
    - name: Set packet mode
      junos_config:
        lines:
          - delete security
          - set security forwarding-options family mpls mode packet-based

Now run this via: ansible-playbook apply_common.yaml

eem-lab# ansible-playbook apply_common.yaml 
ansible-playbook 2.5.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: apply_common.yaml *********************************************************************************************************************************************************************************************************
1 plays in apply_common.yaml

PLAY [Apply Common Settings] ********************************************************************************************************************************************************************************************************
META: ran handlers

TASK [Set packet mode] **************************************************************************************************************************************************************************************************************

changed: [eem-lab-r1] => {
    "changed": true, 
    "invocation": {
        "module_args": {
            "backup": false, 
            "comment": "configured by junos_config", 
            "confirm": 0, 
            "confirm_commit": false, 
            "host": null, 
            "lines": [
                "delete security", 
                "set security forwarding-options family mpls mode packet-based"
            ], 
            "password": null, 
            "port": null, 
            "provider": {
                "host": null, 
                "password": null, 
                "port": null, 
                "ssh_keyfile": null, 
                "timeout": null, 
                "transport": "netconf", 
                "username": null
            }, 
            "replace": null, 
            "rollback": null, 
            "src": null, 
            "src_format": null, 
            "ssh_keyfile": null, 
            "timeout": null, 
            "transport": null, 
            "update": "merge", 
            "username": null, 
            "zeroize": false
        }
    }
}
META: ran handlers
META: ran handlers

PLAY RECAP **************************************************************************************************************************************************************************************************************************
eem-lab-r1                 : ok=1    changed=1    unreachable=0    failed=0   

Voila, applied – now to rework labs to make provisioning that bit faster.

Guitar Challenge: Upgrading the Squier Affinity Strat on a budget.

I posted about the latest axe here, a 2006 Indonesian Squier Affinity Strat I bought for the princely sum of £50.

The challenge is to put some rather nice upgrades into this but ideally keep this under the price of a new Affinity Strat – which according to Andertons right now is £179.99.

That gives us £129.99 to play with.

Because these are thinner bodied strats, we’re going to leave the bridge alone for now – and concentrate on other areas.

First area, the electrics (all parts via Amazon, Pickup via Axemail):

  • 22 Gauge Cloth Covered Wire (2ft) – £5.14
  • Graph Tech Tusq XL PT-5042 – £10.99
  • CTS 250k Log Pot (Volume) – £6.29
  • CTS 250k Linear Pot (Tone) (x2) – £12.58
  • TaleeMail Strap Locks – £6.99
  • 25mm Copper Shielding Tape x4m – £6.21
  • Sprague 0.22uF Capacitor – £2.71
  • IronGear Texas Loco Neck Pickup – £27.20

Add this up (along with the £50 for the guitar itself) and our Affinity has cost us £128.11, which leaves us room for tuners later on – but what are we getting for our money?

  • Replacement Cap, Pots and Wiring equivalent to American Pro level
  • A new NUT, always one of the fun parts of cheaper guitars
  • Shielded Cavities to reduce hum
  • A spanking new overwound pickup for the neck – if you read any reviews, the IronGear feedback is always really good.

More to follow when I start putting this together 🙂

Cumulus Certifications – Interesting Move

Cumulus have recently announced they’ve introduced a certification programme – involving Linux / BGP and much Open Networking.

It’s an interesting move, and one I think I’ll look into – being a Linux/Cisco/Network-y type at heart it ticks a whole load of boxes.

If you’ve ever wondered where this fits in the grand scheme of things, it appears to give a decent networking overview without getting too vendor-specific – add to that, whitebox is slowly making inroads into some level of L2.

For smaller operators on a budget, Bird/Quagga with some NFSen netflow analysis and additional log analysis is a fairly cheap way to self-protect your network. (hint: route injection via those Open Source projects)

Cisco-like IPSLA on JunOS

One of the more common CPE-type things we use is IPSLA – often used when you want to prefer a certain circuit, but there are many use cases.

On JunOS this uses two processes: Real-Time Performance Monitoring (RPM) and IP Monitoring, which together provide an ICMP probe and an action based on the probe results.

LAB: 2x vSRX instances connected to the same port-group on vnic1, looking something akin to: [r1] — ge-0/0/0.0 —- ge-0/0/0.0 [r2]. On R1 we connect another interface (vnic2) as ge-0/0/1.0, so we’ve got another interface that’s up to switch routes to.

Basic Router Setup

Very simple setup to bring up R1 and R2:

  • Set a root password
  • Set Hostname
  • Switch to Packet Mode for inet (v4)
  • Set up ge-0/0/0.0 on both routers, and ge-0/0/1.0 on R1
  • Assign addresses to ge-0/0/0.0 on R1, ge-0/0/0.0 on R2 and ge-0/0/1.0 on R1
  • We’ll set a static route on R1 for 0/0 via (R2)

R1 Config

set system root-authentication plain-text-password
set system host-name r1
delete security
set security forwarding-options family mpls mode packet-based
set interfaces ge-0/0/0.0 family inet address
set interfaces ge-0/0/1.0 family inet address
set routing-options static route next-hop

R2 Config

set system root-authentication plain-text-password
set system host-name r2
delete security
set security forwarding-options family mpls mode packet-based
set interfaces ge-0/0/0.0 family inet address

Now verify connectivity via ping:

Working ping – good so far

Configuring DG Failover

Configuring RPM (Real Time Performance Monitor) for our probe

Next, on R1 we’ll configure the RPM section, using fairly standard values seen in Cisco IPSLA (thresholds, timeouts, counts etc). We’ll be pinging R2 via ge-0/0/0.0 on R1, hence the monitor config is done on R1. In the example, dg-probe is the probe owner and lab-ping is the test name.

set services rpm probe dg-probe test lab-ping target address
set services rpm probe dg-probe test lab-ping probe-count 3
set services rpm probe dg-probe test lab-ping probe-interval 2
set services rpm probe dg-probe test lab-ping probe-type icmp
set services rpm probe dg-probe test lab-ping test-interval 2
set services rpm probe dg-probe test lab-ping thresholds successive-loss 3
set services rpm probe dg-probe test lab-ping thresholds total-loss 3
set services rpm probe dg-probe test lab-ping next-hop

Configuring IP Monitoring to Modify the Static Route

set services ip-monitoring policy dg-failover-policy match rpm-probe dg-probe
set services ip-monitoring policy dg-failover-policy then preferred-route next-hop

Verifying our Config

First, we’ll check ping and the current static route – then we’ll check the RPM probe is showing as active

The Ping test and Static Route we configured
The Active RPM probe showing success
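For reference, the sort of commands behind those screenshots (using the owner/test names configured above) are:

show route 0.0.0.0/0
show services rpm probe-results owner dg-probe test lab-ping
show services ip-monitoring status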

Failing Over

In order to check failover we’ll shut down ge-0/0/0.0 on R2

root@r2# set interfaces ge-0/0/0.0 disable

Now verify ping is failing and we have a new static route inserted:

Failing Back

root@r2# delete interfaces ge-0/0/0.0 disable

Checking all is ok

Ping Replies and the Original Static in Situ

An ISP within an ISP – The Lab – Part 2

The Hardware

“Piglet was so excited at the idea of being Useful that he forgot to be frightened any more, and when Rabbit went on to say that Kangas were only Fierce during the winter months, being at other times of an Affectionate Disposition, he could hardly sit still, he was so eager to begin being useful at once.” 

― A.A. Milne, Winnie-the-Pooh

The last time I was handed a new server for the things I did would probably be around 2008. That was a good year, except for most things that happened.

Of course most things are Cloud now (and the servers I used to maintain are no exception) – but there is still often value in legacy hardware, as legacy is in the eye of the role holder.

Step forward, 2x HP Proliant DL360 G6s, decommissioned by our Cloud team a number of years back – but absolutely fine for lab purposes, which is all about config and not performance.

Our Lab Physical Hardware – HP Proliant DL360 G6s

Spec is decent for what we need: 2x quad-core Xeons at 2GHz, 48-ish GB of RAM each and around 300GB of storage. We’ll stick to ESXi 6.5 (eval or free – it’s a lab) as it’s the last version that just about goes on them – just.

Physically, we only need 3 cables – management into NIC1 (vnic0) on each box, and a cable between them so we can split resources (NIC4 – aka vnic3) – 3, as it’s the magic number.

Next: vSwitches and vlans – and a lesson learned to do this well in advance.

An ISP within an ISP – The Lab – Part 1

First in a series of posts about rebuilding an ISP within an ISP.


This is a lab based on a network I maintained and evolved over a number of years, but sadly no longer do. This lab is the redesign I had in mind based on what they needed, and it’s here for a number of reasons:

  • To learn a new routing protocol (IS-IS)
  • To learn a new vendors network gear (JunOS)
  • To play with NXOS and see how inter-op with JunOS fares
  • Mainly to address some technical debt within my own mind

The Network – Physical Overview

The Endgame is to achieve the network I’ve drawn in the diagram below – 3 primary sites (the lower 3 JunOS Routers, R1, R2, R3) talking to 2 central hubs (R4, R5). From here we’ll hang off 2 NxOS devices (R6, R7).

The NXOS devices will serve as endpoints for a cloud-y infrastructure (some small Linux VMs for this lab), ideally with dual v4/v6 throughout, with the aim of providing redundancy (let’s assume there is some VM/HA/replication between the two).

Firewalls will be a pair of ASAvs in active/standby, and to top things off we’ll use R4 and R5 to talk to R6 and R7, which are eBGP speakers providing us a default.

The Network – Logical Overview

There’s a few things to add to this to make things slightly more complex, specifically we’ll need 2 sets of L3VPN, keeping the global routing table for ‘Interwebs’.

  • Staff – RFC1918 – – which breaks out via the firewalls and is where the Cloud servers will live
  • Tenant – RFC1918 – – again, breaks out via the firewalls via a separate interface / dot1q.

We’ll also need some sort of l2/l3 constructs – namely:

  • A way for the firewalls to talk to each other in active/standby (or active/active)
  • The Cloud Servers to be within the same subnet

Will it all work ?

At this point I have no idea whatsoever – I’ve not touched NXOS, JunOS (much, in anger) or IS-IS … but there is only one way to find out.

God Loves a Trier…

Mr Thomas, Mortimer Primary School, 1988.

In my leaving book, it’s the one I always remembered – unsure if that was about himself, or me, but hey, I’ve been called a trying individual before =o)

Follow this project as we move onto Part 2: The Hardware

New Year, New Guitar – Fender Affinity Strat

Every so often you see a deal on eBay/Marketplace that, in some respects, you can’t ignore. I’ve been after something to upgrade the Peavey Strat for a while, ideally to make it a more SRV type – and this came up: a stunning Metallic Red Affinity Strat for £50.

The bad news: it’s an Indonesian 2006, which means a thin body, but other than a missing strap pin (easy fix) and the string I snapped, it’s pristine.

Not bad when the price range for these is £75 – £180 (really!) … thinking IronGear Texas Locos to see how they sound, plus a major setup – checking out the bridge and pickup height (nuts):