We’re at a point in our POCs where we’re looking at how we deploy configs automagically. In the Juniper world we have Ansible, PyEZ and the new kid on the block: JTAF.
All code/data referenced in this can be found in the github repo: https://github.com/cruse1977/blog-juniper-jtaf-example
What is JTAF?
JTAF (Junos Terraform Automation Framework) is a way of deploying Juniper configs using HashiCorp’s Terraform, a tool commonly used for deploying cloud-based infrastructure. Terraform has its own language in which we define resources and actions; Terraform then plans and applies the change.
There are a number of videos on YouTube covering JTAF, starting with Dave Gee’s introduction and following on into individual steps; the videos are listed on the JTAF GitHub site. Whilst these videos are good, I found the on-screen text hard to read, which made them harder to follow, so I ended up working from the DETAILED INSTRUCTIONS and documenting the result in the form of a HOWTO here.
JTAF Lab: The Infrastructure
To lab this I’m using a single ESXi machine. With Windows 11 about to drop, old Windows 10 desktops are cheap and cheerful on eBay, hence this is an old HP EliteDesk 800 G1 desktop with 16GB of RAM.
VMs consist of:
- 1x Linux Ubuntu VM for terraform
- 1x vMX
- 1x vQFX
- 1x Linux VM acting as a simple client.
All VMs except the client have an interface into a management network (192.168.1.0/24) for communication via SSH/NETCONF, and we’ve set up the vQFX and vMX as an EVPN/VXLAN fabric, utilising centrally-routed bridging with an EBGP underlay and IBGP overlay.
We’re using a management instance on the Juniper devices to keep this traffic separate from the main routing table.

JTAF Lab: The aims
So what are we trying to achieve here? Well, essentially, deploy a ‘customer’.
On the vQFX this consists of a single access port in a vlan, a vni defined and relevant vxlan import config. On the vMX side we need an irb, vni and relevant vxlan config.
We’ll verify success via ping from the client to its irb gateway and a populated EVPN database.
Customer Data:
| Attribute | Value |
| --- | --- |
| vlan | 10 |
| vni | 5010 |
| irb ip | 192.168.2.3/24 |
| floating gateway | 192.168.2.1 |
| client ip | 192.168.2.2/24 |
Getting things ready
We’re going to need to create some directories and install a few things to make a structure for building and running the JTAF providers, remembering we’ll need a vMX provider as well as a vQFX provider.
First, let’s sort some prerequisites, namely Terraform and Go:
- curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
- sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
- sudo apt-get update
- sudo apt-get install terraform golang
Now create a venv and install pyang (used by the JTAF tooling to turn YANG into YIN/XPath files):
- python3 -m venv venv
- source venv/bin/activate
- pip3 install pyang
Make some directories for our project:
- mkdir build
- mkdir build/vqfx
- mkdir build/vqfx/provider
- mkdir build/vmx
- mkdir build/vmx/provider
- mkdir workspace
Resulting in the following directory layout (note, venv excluded):
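build
├── vmx
│   └── provider
└── vqfx
    └── provider
workspace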

Finally, let’s grab a copy of JTAF and build the processProviders and processYang commands:
- git clone https://github.com/Juniper/junos-terraform.git
- cd junos-terraform/cmd/processProviders
- go build
- cd ../processYang
- go build
JTAF is now ready, so we’re in a position to go and build out our providers.
Building Our Providers
We create the providers following a standard methodology:
- Download YANG files
- Create .toml config file
- Copy relevant YANG files to our working directory
- Run the JTAF processYang command
- Generate our XPATH requirements
- Run the JTAF processProviders command
- Generate our provider
As we only need the YANG files once, let’s grab them now:
- git clone https://github.com/Juniper/yang.git
Generating our QFX Provider
Juniper’s YANG files are organised by version, then platform. Our QFX is running 19.4R1.10, hence we need the YANG files from yang/19.4/19.4R1/junos-qfx/conf.
Given we’re running an EVPN/VXLAN fabric, we need vlans, interfaces, protocols and policy-options. Additionally we’ll need root, and critically we also need the files from the common directory within the version; at the time of writing this isn’t included in the build instructions, hence one to watch out for.
Let’s get those now:
- cp yang/19.4/19.4R1/junos-qfx/conf/junos-qfx-conf-interfaces* build/vqfx/
- cp yang/19.4/19.4R1/junos-qfx/conf/junos-qfx-conf-policy-options* build/vqfx/
- cp yang/19.4/19.4R1/junos-qfx/conf/junos-qfx-conf-protocols* build/vqfx/
- cp yang/19.4/19.4R1/junos-qfx/conf/junos-qfx-conf-root* build/vqfx/
- cp yang/19.4/19.4R1/junos-qfx/conf/junos-qfx-conf-vlans* build/vqfx/
- cp yang/19.4/19.4R1/common/junos-common-* build/vqfx/
Next we create a config.toml, which the JTAF tooling uses to locate the YANG files and to decide where to build:
vi build/vqfx/config.toml
The contents of that file should look similar to the below, with <home dir> replaced with your home directory. Note, at the time of writing providerName is missing from some of the documentation; it is crucial to successfully building the provider.
yangDir = "<home dir>/build/vqfx"
providerDir = "<home dir>/build/vqfx/provider"
xpathPath = "<home dir>/build/vqfx/xpath_test.xml"
fileType = "both"
providerName = "vqfx"
Now go ahead and generate the xpath and yin files:
- (venv) root@netbox:/home/russellc# junos-terraform/cmd/processYang/processYang -config=build/vqfx/config.toml
------------------------------------------------------------------------------------------
- Creating Yin files from Yang file directory: /home/russellc/build/vqfx -
------------------------------------------------------------------------------------------
Yin file for junos-common-ddl-extensions@2019-01-01 is generated
Yin file for junos-common-odl-extensions@2019-01-01 is generated
Yin file for junos-common-types@2019-01-01 is generated
Yin file for junos-qfx-conf-interfaces@2019-01-01 is generated
Yin file for junos-qfx-conf-policy-options@2019-01-01 is generated
Yin file for junos-qfx-conf-protocols@2019-01-01 is generated
Yin file for junos-qfx-conf-root@2019-01-01 is generated
Yin file for junos-qfx-conf-vlans@2019-01-01 is generated
--------------------------------------------
- Creating _xpath files from the Yin files -
--------------------------------------------
Creating Xpath file: junos-common-ddl-extensions@2019-01-01_xpath.txt
Creating Xpath file: junos-common-ddl-extensions@2019-01-01_xpath.xml
Creating Xpath file: junos-common-odl-extensions@2019-01-01_xpath.txt
Creating Xpath file: junos-common-odl-extensions@2019-01-01_xpath.xml
Creating Xpath file: junos-common-types@2019-01-01_xpath.txt
Creating Xpath file: junos-common-types@2019-01-01_xpath.xml
Creating Xpath file: junos-qfx-conf-interfaces@2019-01-01_xpath.txt
Creating Xpath file: junos-qfx-conf-interfaces@2019-01-01_xpath.xml
Creating Xpath file: junos-qfx-conf-policy-options@2019-01-01_xpath.txt
Creating Xpath file: junos-qfx-conf-policy-options@2019-01-01_xpath.xml
Creating Xpath file: junos-qfx-conf-protocols@2019-01-01_xpath.txt
Creating Xpath file: junos-qfx-conf-protocols@2019-01-01_xpath.xml
Creating Xpath file: junos-qfx-conf-root@2019-01-01_xpath.txt
Creating Xpath file: junos-qfx-conf-root@2019-01-01_xpath.xml
We now have our xpath files; next we need to work out which XPaths we’ll need for our various pieces of config. Do this by interrogating the various .txt files created in the build/vqfx directory.
As an example on the vQFX for the vlan, looking at junos-qfx-conf-vlans@2019-01-01_xpath.txt we’ll need:
/vlans/vlan/name
/vlans/vlan/vlan-id
/vlans/vlan/vxlan/vni
/vlans/vlan/vxlan/ingress-node-replication
Go through each file, noting each xpath you need, then combine these into build/vqfx/xpath_test.xml:
<file-list>
    <xpath name="/vlans/vlan/name"/>
    <xpath name="/vlans/vlan/vlan-id"/>
    <xpath name="/vlans/vlan/vxlan/vni"/>
    <xpath name="/vlans/vlan/vxlan/ingress-node-replication"/>
    <xpath name="/policy-options/policy-statement/term/from/community"/>
    <xpath name="/policy-options/policy-statement/term/then/accept"/>
    <xpath name="/policy-options/community/members"/>
    <xpath name="/protocols/evpn/vni-options/vni/vrf-target/community"/>
    <xpath name="/protocols/evpn/extended-vni-list"/>
    <xpath name="/interfaces/interface/unit/family/ethernet-switching/vlan/members"/>
    <xpath name="/interfaces/interface/mtu"/>
    <xpath name="/interfaces/interface/unit/mtu"/>
    <xpath name="/interfaces/interface/unit/family/ethernet-switching/interface-mode"/>
    <xpath name="/interfaces/interface/unit/family/inet"/>
    <xpath name="/interfaces/interface/description"/>
    <xpath name="/interfaces/interface/disable"/>
</file-list>
Now let’s build our provider code using processProviders, which will generate Go code into build/vqfx/provider:
- # junos-terraform/cmd/processProviders/processProviders -config=build/vqfx/config.toml
------------------------------------------------------------
- Autogenerating Terraform Provider code from _xpath files -
------------------------------------------------------------
Terraform API resource_VlansVlanName created
Terraform API resource_VlansVlanVlan__Id created
Terraform API resource_VlansVlanVxlanVni created
Terraform API resource_VlansVlanVxlanIngress__Node__Replication created
Terraform API resource_Policy__OptionsPolicy__StatementTermFromCommunity created
Terraform API resource_Policy__OptionsPolicy__StatementTermThenAccept created
Terraform API resource_Policy__OptionsCommunityMembers created
Terraform API resource_ProtocolsEvpnVni__OptionsVniVrf__TargetCommunity created
Terraform API resource_ProtocolsEvpnExtended__Vni__List created
Terraform API resource_InterfacesInterfaceUnitFamilyEthernet__SwitchingVlanMembers created
Terraform API resource_InterfacesInterfaceMtu created
Terraform API resource_InterfacesInterfaceUnitMtu created
Terraform API resource_InterfacesInterfaceUnitFamilyEthernet__SwitchingInterface__Mode created
Terraform API resource_InterfacesInterfaceUnitFamilyInet created
Terraform API resource_InterfacesInterfaceDescription created
Terraform API resource_InterfacesInterfaceDisable created
--------------------------------------------------------------------------------
Number of Xpaths processed: 16
Number of potential issues: 0
---------------------------------------------
- Copying the rest of the required Go files -
---------------------------------------------
Copied file: config.go to /home/kitsupport/workspace/new/build/vqfx/provider
Copied file: main.go to /home/kitsupport/workspace/new/build/vqfx/provider
Copied file: resource_junos_destroy_commit.go to /home/kitsupport/workspace/new/build/vqfx/provider
Copied file: resource_junos_device_commit.go to /home/kitsupport/workspace/new/build/vqfx/provider
-------------------
- Creating Go Mod -
-------------------
Finally, build our provider
- cd build/vqfx/provider
- go build
All being well, this should create a file named "terraform-provider-junos-vqfx". Congratulations, we’ve just built our vQFX provider!
Generating our MX Provider
Generating the MX provider is similar to the vQFX provider; however, we’ll use different YANG files. Our vMX is running 18.2R1.9 and the closest published YANG is 18.2R3, so we’ll use that. Likewise, there is no MX-specific YANG, hence we’ll use the default Junos files.
- cp yang/18.2/18.2R3/junos/conf/junos-conf-interfaces* build/vmx/
- cp yang/18.2/18.2R3/junos/conf/junos-conf-policy-options* build/vmx/
- cp yang/18.2/18.2R3/junos/conf/junos-conf-root* build/vmx/
- cp yang/18.2/18.2R3/junos/conf/junos-conf-routing-instances* build/vmx/
- cp yang/18.2/18.2R3/common/junos-common-* build/vmx/
Our TOML file:
yangDir = "<home dir>/build/vmx"
providerDir = "<home dir>/build/vmx/provider"
xpathPath = "<home dir>/build/vmx/xpath_test.xml"
fileType = "both"
providerName = "vmx"
Our xpath_test.xml:
<file-list>
    <xpath name="/interfaces/interface/unit/family/inet/address"/>
    <xpath name="/interfaces/interface/unit/virtual-gateway-accept-data"/>
    <xpath name="/interfaces/interface/unit/family/inet/address/virtual-gateway-address"/>
    <xpath name="/policy-options/policy-statement/term/from/community"/>
    <xpath name="/policy-options/policy-statement/term/then/accept"/>
    <xpath name="/policy-options/community/members"/>
    <xpath name="/routing-instances/instance/bridge-domains/domain/vxlan/vni"/>
    <xpath name="/routing-instances/instance/bridge-domains/domain/vlan-id"/>
    <xpath name="/routing-instances/instance/bridge-domains/domain/routing-interface"/>
    <xpath name="/routing-instances/instance/bridge-domains/domain/domain-type"/>
    <xpath name="/routing-instances/instance/bridge-domains/domain/no-arp-suppression"/>
    <xpath name="/routing-instances/instance/protocols/evpn/vni-options/vni/vrf-target/community"/>
    <xpath name="/routing-instances/instance/protocols/evpn/extended-vni-list"/>
    <xpath name="/interfaces/interface/unit/proxy-macip-advertisement"/>
    <xpath name="/routing-instances/instance/bridge-domains/domain/vxlan/ingress-node-replication"/>
</file-list>
Following the same process as the QFX, we should end up with an MX provider: terraform-provider-junos-vmx.
Creating our Terraform Workspace
To create the Terraform workspace, we need to make some directories and move our shiny new providers into them. Let’s do this now. Providers live in directories named by domain, org, version and architecture, hence:
- mkdir -p workspace/plugins/pulsant.net/nops/junos-vmx/18.2.0/linux_amd64
- mv build/vmx/provider/terraform-provider-junos-vmx workspace/plugins/pulsant.net/nops/junos-vmx/18.2.0/linux_amd64/
- mkdir -p workspace/plugins/pulsant.net/nops/junos-vqfx/19.4.0/linux_amd64
- mv build/vqfx/provider/terraform-provider-junos-vqfx workspace/plugins/pulsant.net/nops/junos-vqfx/19.4.0/linux_amd64/
Next, we create a .terraformrc in our home directory and reference our workspace within it:
provider_installation {
  filesystem_mirror {
    path    = "<home dir>/workspace/plugins"
    include = ["*/*/*"]
  }
}
Finally, we create a workspace/deploy_customer.tf file to test our providers:
terraform {
  required_providers {
    junos-vqfx = {
      source  = "pulsant.net/nops/junos-vqfx"
      version = ">= 19.4.0"
    }
    junos-vmx = {
      source  = "pulsant.net/nops/junos-vmx"
      version = ">= 18.2.0"
    }
  }
}

provider "junos-vqfx" {
  host     = "192.168.1.132"
  port     = 22
  username = "netconf"
  password = "Juniper123"
  sshkey   = ""
}

provider "junos-vmx" {
  host     = "192.168.1.131"
  port     = 22
  username = "netconf"
  password = "Juniper123"
  sshkey   = ""
}
We now test by changing directory to our workspace dir, and running terraform init:
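In commands, that’s simply:
- cd workspace
- terraform init
All being well, terraform init should report that both the junos-vqfx and junos-vmx providers have been installed from the filesystem mirror.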

Writing our HCL
Finding our Terraform “functions” and “args”
To find what we need to put into our HCL, we ask Terraform to output its provider schemas as JSON, which we can then pick through:
- terraform providers schema -json

Copy/paste the entirety of the JSON into VS Code, right-click on the pasted JSON, select Command Palette and then “Format Document” (you may need to set a formatter here), and we end up with readable, indented JSON:
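If you’d rather stay on the command line, piping through Python’s built-in pretty-printer gives much the same result (any JSON formatter will do):
- terraform providers schema -json | python3 -m json.tool > schemas.json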

Let’s take “junos-vmx_InterfacesInterfaceUnitFamilyInetAddressVirtual__Gateway__Address” as an example, so find that in the output:
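The relevant entry looks roughly like the below (trimmed and illustrative; the exact attribute flags in your build may differ):
"junos-vmx_InterfacesInterfaceUnitFamilyInetAddressVirtual__Gateway__Address": {
    "block": {
        "attributes": {
            "name": { "type": "string" },
            "name__1": { "type": "string" },
            "name__2": { "type": "string" },
            "resource_name": { "type": "string" },
            "virtual__gateway__address": { "type": "string" }
        }
    }
}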

From this we can ascertain that the parameters we need here map to:
| Parameter Name | Description | Example |
| --- | --- | --- |
| name | interface name | irb |
| name__1 | interface unit | 0 |
| name__2 | interface ip | 192.168.2.3/24 |
| virtual__gateway__address | virtual gateway address | 192.168.2.1 |
Hence our Terraform “code” becomes:
resource "junos-vmx_InterfacesInterfaceUnitFamilyInetAddressVirtual__Gateway__Address" "demo" {
resource_name = "JUNOS_CONFIG_GROUP"
name = "irb"
name__1 = "5010"
name__2 = "192.168.2.3/24"
virtual__gateway__address = "192.168.2.1"
}
It’s very important to note at this point that resource_name needs to be unique. In Junos land it translates to an apply group, so if the names aren’t unique, whilst the result will appear far more visually appealing as one large group, Terraform, being based on key/value pairs, loses state and is unable to re-plan.
Best practice is to name these in incrementing numerical order, i.e. <identifier>_1, <identifier>_2, etc.
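For example, the first couple of vQFX resources for our customer might look something like this (the resource types come from the processProviders output earlier, but the leaf attribute names are inferred, so check them against your own provider schema):
resource "junos-vqfx_VlansVlanVlan__Id" "cust1_vlan" {
  resource_name = "customer1_vqfx_1"
  name          = "vlan10"
  vlan__id      = "10"
}

resource "junos-vqfx_VlansVlanVxlanVni" "cust1_vni" {
  resource_name = "customer1_vqfx_2"
  name          = "vlan10"
  vni           = "5010"
}
Each resource gets its own incrementing resource_name, so each lands in its own apply group and Terraform can track it independently.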
From this point on it’s a case of going through each function and writing the associated HCL. It’s far easier to use a module for each device; we also need to add these modules plus the commit/destroy-commit resources. Modify our deploy_customer.tf to add the destroy/commits and a module for each device:
module "qfx_1" { source = "./qfx_1" providers = { junos-vqfx = junos-vqfx } depends_on = [junos-vqfx_destroycommit.commit-main] } module "vmx_1" { source = "./vmx_1" providers = { junos-vmx = junos-vmx } depends_on = [ junos-vmx_destroycommit.commit-main ] } resource "junos-vqfx_commit" "commit-main" { resource_name = "commit" depends_on = [module.qfx_1] } resource "junos-vqfx_destroycommit" "commit-main" { resource_name = "destroycommit" } resource "junos-vmx_commit" "commit-main" { resource_name = "commit" depends_on = [module.vmx_1] } resource "junos-vmx_destroycommit" "commit-main" { resource_name = "destroycommit" }
Next, make directories for vqfx_1 and vmx_1, and create main.tf files within each to reference our providers.
vqfx_1/main.tf:
terraform {
  required_providers {
    junos-vqfx = {
      source  = "pulsant.net/nops/junos-vqfx"
      version = ">= 19.4.0"
    }
  }
}
vmx_1/main.tf:
terraform {
  required_providers {
    junos-vmx = {
      source  = "pulsant.net/nops/junos-vmx"
      version = ">= 18.2.0"
    }
  }
}
Within each file we then add the “resource” entries we discussed above to run the actual commands.
I’ve included these in the GitHub repository, but I’ve taken this a step further with templates and modules, which I’ll go through now.
The Network: As Data
Part of our current thinking is digital transformation, which includes defining customers as data entries. Let’s define our customer as a JSON data structure in data.json:
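The real data.json is in the repo; a minimal version built from the customer table above might look something like this (the field names, and the access port, are illustrative):
{
    "customers": [
        {
            "name": "customer1",
            "vlan": 10,
            "vni": 5010,
            "irb_ip": "192.168.2.3/24",
            "floating_gateway": "192.168.2.1",
            "access_port": "xe-0/0/2"
        }
    ]
}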

From here, some quick Python applies Jinja2 templates to the data to generate the HCL files:
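The repo has the real script; a minimal sketch of the idea, assuming a templates/ directory holding vqfx_1.j2 and vmx_1.j2 and writing the rendered resources into each module directory, might look like this:
import json
from jinja2 import Environment, FileSystemLoader

# Load the customer definitions from data.json
with open("data.json") as f:
    data = json.load(f)

# Templates live in ./templates, one per device module
env = Environment(loader=FileSystemLoader("templates"),
                  trim_blocks=True, lstrip_blocks=True)

# Render one resources.tf per device module in the workspace
for device in ("vqfx_1", "vmx_1"):
    template = env.get_template(f"{device}.j2")
    with open(f"workspace/{device}/resources.tf", "w") as out:
        out.write(template.render(customers=data["customers"]))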

So what do our templates look like? Essentially they are the resource lines we worked out, with substitution for our variables; we also use an incrementing counter to ensure uniqueness of the apply group names:
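A fragment of a vQFX template might look something like this (a sketch rather than the repo’s exact template; the namespace counter is what keeps each resource_name, and therefore each apply group, unique):
{% set ns = namespace(count=0) %}
{% for customer in customers %}
{% set ns.count = ns.count + 1 %}
resource "junos-vqfx_VlansVlanVlan__Id" "{{ customer.name }}_vlan" {
  resource_name = "{{ customer.name }}_vqfx_{{ ns.count }}"
  name          = "vlan{{ customer.vlan }}"
  vlan__id      = "{{ customer.vlan }}"
}

{% set ns.count = ns.count + 1 %}
resource "junos-vqfx_VlansVlanVxlanVni" "{{ customer.name }}_vni" {
  resource_name = "{{ customer.name }}_vqfx_{{ ns.count }}"
  name          = "vlan{{ customer.vlan }}"
  vni           = "{{ customer.vni }}"
}
{% endfor %}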

Again, the code for all of this is in the GitHub repository.
Testing
Overall, we have 30 steps to deploy our single IRB and single port across two devices. We verify this by first checking that ping fails from our client:
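From the client that’s simply (addresses from the customer table earlier):
- ping 192.168.2.1
After the apply, the same ping should succeed, and show evpn database on the vQFX should show entries for VNI 5010.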

then:
- terraform init
- terraform plan
- terraform apply
and …

We’ve just deployed a customer, from data, via Terraform, onto Junos.
Final Thoughts
Firstly, I want to thank David Gee from Juniper for his assistance in getting JTAF up and running, and for an excellent chat we had about the project, which was extremely useful.
The JTAF project shows a lot of promise and works well; the ability to easily plan, destroy and keep state for config is incredibly useful. For pure Terraform shops, and places with a certain style of configuration, this will work well. However, due to the way Terraform operates (rather than JTAF specifically), 13 apply groups for a simple IRB can mean a huge increase in complexity: 400 IRBs would then become 4300 apply groups. Whilst this may or may not be a problem in terms of Junos and commits, readability becomes very difficult.
But then, if we’re managing customers via Terraform, should we ever really be showing the configuration manually? A story for another day…