Using Netbox as your Network SoT
If you’ve looked at network automation at all, you’ve most likely heard that the first step in your automation journey is to establish a Network Source of Truth, and that NetBox is the best solution out there for it.
Let’s look at why there’s so much consensus in the industry about using NetBox and how it enables you to build the foundation layer of your automation pipeline.
Reality, however, isn’t always so straightforward.
I’ll walk through what NetBox is, why the industry has converged on it, its trade-offs, and how you can extend it to fit your use case.
1. What’s NetBox?
NetBox is a web application for modeling and documenting network infrastructure. Think IPAM (IP Address Management) plus DCIM (Data Centre Infrastructure Management) in one tool. It tracks your devices, racks, cables, IP addresses, VLANs, circuits. Basically everything in your physical and logical network topology.
Now, let’s look at what NetBox is at its most fundamental level. Why does this matter? Understanding its architecture helps you grasp both its limitations and how you can extend it to fit your needs.
Here’s a high-level view of NetBox’s architecture:
NetBox is built in Python using Django, with Django ORM handling interactions with its PostgreSQL database. Redis serves dual purposes: caching (for performance) and task queuing (for background jobs like webhook deliveries, report generation, and custom script execution). These use separate Redis databases within the same instance, with background tasks processed asynchronously by workers. For caching, Django checks Redis before querying PostgreSQL, significantly improving performance for repeated queries.
At its core, NetBox is a collection of Django apps. The diagram shows the main ones: DCIM (data centre infrastructure), IPAM (IP address management), Circuits (provider circuits and connections), Tenancy (multi-tenancy and contacts), and Extras (custom fields, tags, webhooks). Each app defines models, views, and API endpoints. Plugins work exactly the same way: they’re just Django apps that extend NetBox’s functionality.
This matters because everything in NetBox follows Django’s patterns. Models, views, serializers, and so on.
Instead of writing SQL like SELECT * FROM devices WHERE status='active', you write Device.objects.filter(status='active'). The ORM translates your Python into SQL. It’s a higher-level abstraction that makes database operations more intuitive for developers.

2. Why NetBox Fits
The industry has converged on NetBox for good reasons:
2.1 Free and open-source
Apache 2.0 licensed. No licensing costs, no vendor lock-in. You can deploy it, modify it, or fork it without legal concerns.
2.2 Comprehensive base data model
Out of the box, NetBox understands devices, interfaces, VLANs, IP addresses, circuits, racks, sites, and more. The relationships between these objects are already mapped: connect an interface to a circuit, assign an IP to an interface, place a device in a rack.
2.3 Combines IPAM and DCIM
Previously, you’d need separate tools for IP Address Management and Data Centre Infrastructure Management. NetBox does both, eliminating the need to sync between multiple systems.
2.4 Powerful APIs
Both REST and GraphQL APIs are available from day one. Every operation you can perform in the UI, you can perform via the API, whether through pynetbox or direct HTTP calls.
2.5 Active community
Over 19,000 GitHub stars and hundreds of contributors. NetBox Labs backs the project with continuous development. Active Slack channel for community support.
2.6 Clean and modern UI
Operations teams actually adopt it because the interface is intuitive.
2.7 Data validation and integrity
NetBox won’t let you create invalid data. VLANs must be 1-4094, IP addresses must be valid CIDR, devices need device types. This enforcement prevents the garbage-in-garbage-out problem that breaks automation.
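The same rules apply whether data arrives through the UI or the API. As a rough sketch of the kind of checks NetBox enforces server-side (plain standard-library Python; the function names here are illustrative, not NetBox APIs):

```python
import ipaddress

def valid_vlan_id(vid: int) -> bool:
    # NetBox rejects VLAN IDs outside the 802.1Q range 1-4094
    return 1 <= vid <= 4094

def valid_prefix(prefix: str) -> bool:
    # NetBox rejects prefixes that aren't valid CIDR notation
    try:
        ipaddress.ip_network(prefix, strict=False)
        return True
    except ValueError:
        return False

print(valid_vlan_id(100))           # True
print(valid_vlan_id(4095))          # False
print(valid_prefix("10.0.0.0/24"))  # True
print(valid_prefix("10.0.0.0/33"))  # False
```

Because these constraints sit in front of the database, bad values are rejected at input time rather than discovered later by your templates.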
2.8 Designed for extensibility
Custom fields, config contexts, plugins, tags. NetBox gives you the tools to extend its data model without forking the codebase. When the base model doesn’t fit your use case, you adapt NetBox rather than working around it.
2.9 Multi-tenancy support
NetBox has native multi-tenancy. If you manage infrastructure for multiple customers or business units, tenants, contacts, and permissions are first-class objects in the data model.
2.10 Easy deployment
You can get NetBox running in minutes. Here’s the Docker setup:
git clone -b release https://github.com/netbox-community/netbox-docker.git
cd netbox-docker
# Copy the example override file
cp docker-compose.override.yml.example docker-compose.override.yml
# Read and edit the file to your liking
docker compose pull
docker compose up

If your company runs containers in production, your DevOps team can deploy this like any other application they already manage. If you don’t want to self-host, you can use cloud services like AWS ECS, Google Cloud Run, or Azure Container Instances. Or skip the infrastructure entirely with NetBox Labs’ hosted service.
3. NetBox Limitations
Now for the trade-offs. NetBox’s base data model is opinionated, and whilst that helps you start quickly, it can become constraining. Some features are missing due to development priorities and resources. Others are intentional scope decisions where NetBox stays out of areas better handled by specialized tools.
Here are some things NetBox doesn’t handle natively:
3.1 Routing configurations
No support for static routes, dynamic routing protocols (BGP, OSPF, EIGRP, IS-IS), MPLS labels, LSPs, LDP, RSVP-TE, segment routing, multicast routing (PIM, IGMP), BFD, policy-based routing, route-maps, prefix-lists, or AS-path filters.
3.2 Layer 2 features
No native support for spanning tree configuration, LACP settings, QoS policies, 802.1X authentication, port security (MAC address limits, sticky MAC, violation actions), storm control, or UDLD. CDP and LLDP are used by discovery tools that integrate with NetBox, but you can’t configure the protocols themselves.
3.3 Security policies
No support for ACLs, firewall rules, NAT policies, security zones, object groups, or service objects.
3.4 Service configurations
No support for SNMP communities, NTP servers, syslog destinations, DNS servers, or AAA servers (TACACS+, RADIUS).
3.5 Secrets management
No native support. The secrets functionality was removed in NetBox 3.0. No built-in encryption, rotation, versioning, or audit logging for secrets.
3.6 DCIM features
No visual floor plan representation showing rack placement and orientation on data centre floor layouts. No warehouse or spare parts inventory management (you can’t track quantities of uninstalled optics, line cards, or spare devices). No document management for attachments like vendor quotes, maintenance contracts, or floor plans.
With these gaps, how do you get from NetBox data to complete device configurations? The answer is NetBox’s extensibility.
4. Extending NetBox
NetBox’s architecture supports extensions by design. The developers know that their base data model, whilst comprehensive, won’t cover every business case. I’ve found it helpful to think about the six extension mechanisms in two categories.
- Data modeling extensions (tags, custom fields, config contexts, plugins) store and structure your network data.
- Automation mechanisms (webhooks, custom scripts) don’t store data; they react to NetBox events or perform bulk operations.
4.1 Tags
Tags are searchable, filterable, and accessible via API. Use them to organize and filter objects in NetBox’s UI or API queries (find all devices with a specific tag, filter interfaces by tag), or to drive automation logic in your Jinja2 templates. Here are some common template patterns:
4.1.1 Boolean tags
Control whether configuration blocks render. Interface tagged “dhcp-relay” includes DHCP helper configuration, device tagged “bgp-speaker” renders BGP stanzas. In hosting environments, interface tags like “pub”, “prv”, or “ipmi” determine whether ports get public, private, or IPMI configurations:
{% if 'pub' in interface.tags %}
description Public-facing {{ interface.name }}
{% for ip in interface.ip_addresses %}
{% if ip.address.version == 4 %}
ip address {{ ip.address.ip }} {{ ip.address.netmask }}
ip access-group PUBLIC-IN in
ip access-group PUBLIC-OUT out
{% else %}
ipv6 address {{ ip.address.ip }}/{{ ip.address.prefixlen }}
ipv6 traffic-filter PUBLIC-IN-V6 in
ipv6 traffic-filter PUBLIC-OUT-V6 out
{% endif %}
{% endfor %}
no ip proxy-arp
no ip redirects
no ip unreachables
{% elif 'prv' in interface.tags %}
description Private network {{ interface.name }}
{% for ip in interface.ip_addresses %}
{% if ip.address.version == 4 %}
ip address {{ ip.address.ip }} {{ ip.address.netmask }}
{% endif %}
{% endfor %}
ip access-group INTERNAL-IN in
ip helper-address 10.0.0.100
{% elif 'ipmi' in interface.tags %}
description Out-of-band management
{% for ip in interface.ip_addresses %}
{% if ip.address.version == 4 %}
ip address {{ ip.address.ip }} {{ ip.address.netmask }}
{% endif %}
{% endfor %}
ip access-group IPMI-RESTRICT in
storm-control broadcast level 10.00
spanning-tree portfast
{% endif %}

! Input: Interface GigabitEthernet0/1 tagged with "pub"
! Input: Interface has IPv4 10.1.1.1/24 and IPv6 2001:db8::1/64
interface GigabitEthernet0/1
description Public-facing GigabitEthernet0/1
ip address 10.1.1.1 255.255.255.0
ip access-group PUBLIC-IN in
ip access-group PUBLIC-OUT out
ipv6 address 2001:db8::1/64
ipv6 traffic-filter PUBLIC-IN-V6 in
ipv6 traffic-filter PUBLIC-OUT-V6 out
no ip proxy-arp
no ip redirects
no ip unreachables

4.1.2 Key-value tags
Use tags with embedded values by parsing the string after an equals sign. This pattern lets you store simple parameters in tag names. For example, OSPF interface parameters: “ospf_cost=500” becomes ip ospf cost 500, “ospf_priority=255” sets the DR election priority, “ospf_hello=5” configures the hello interval timer, “ospf_dead=20” sets the dead interval timer, “ospf_network_type=point-to-point” sets the network type:
interface {{ interface.name }}
description {{ interface.description }}
ip address {{ interface.ip_addresses[0].address.ip }} {{ interface.ip_addresses[0].address.netmask }}
{% for tag in interface.tags %}
{% if tag.startswith('ospf_') %}
{% set param, value = tag.split('=') %}
{% if param == 'ospf_cost' %}
ip ospf cost {{ value }}
{% elif param == 'ospf_priority' %}
ip ospf priority {{ value }}
{% elif param == 'ospf_hello' %}
ip ospf hello-interval {{ value }}
{% elif param == 'ospf_dead' %}
ip ospf dead-interval {{ value }}
{% elif param == 'ospf_network_type' %}
ip ospf network {{ value }}
{% endif %}
{% endif %}
{% endfor %}

! Input: Interface GigabitEthernet0/2 with description "Core uplink"
! Input: Interface has IP 10.0.1.1/30
! Input: Interface tagged with "ospf_cost=500", "ospf_priority=255", "ospf_network_type=point-to-point"
interface GigabitEthernet0/2
description Core uplink
ip address 10.0.1.1 255.255.255.252
ip ospf cost 500
ip ospf priority 255
ip ospf network point-to-point

4.1.3 Hierarchical tags
Use colon-separated tags to represent nested hierarchies, then split on colons to extract each level. This pattern enables progressive configuration at each tier of the hierarchy. In ISP environments, prefixes are tagged with relationship type and geographic scope to drive routing policies with well-defined BGP communities and local preferences:
{% for tag in prefix.tags %}
{% if tag.name.startswith('route:') %}
{% set levels = tag.name.split(':') %}
{# Define BGP communities for relationship types #}
{% set relationship_map = {
'customer': {'community': '65000:100', 'local_pref': 200},
'peer': {'community': '65000:200', 'local_pref': 150},
'transit': {'community': '65000:300', 'local_pref': 100}
} %}
{# Define BGP communities for geographic regions #}
{% set region_map = {
'global': '65000:1000',
'emea': '65000:2000',
'americas': '65000:3000',
'apac': '65000:4000'
} %}
{# Extract relationship type and region from tag #}
{% set relationship = levels[1] %}
{% set region = levels[2] %}
route-map {{ relationship|upper }}_{{ region|upper }} permit 10
match ip address prefix-list {{ prefix.prefix }}
{% if relationship in relationship_map %}
set community {{ relationship_map[relationship].community }} additive
{% endif %}
{% if region in region_map %}
set community {{ region_map[region] }} additive
{% endif %}
{% if relationship in relationship_map %}
set local-preference {{ relationship_map[relationship].local_pref }}
{% endif %}
{% endif %}
{% endfor %}

! Input: Prefix 192.0.2.0/24 tagged with "route:customer:emea"
! Relationship: customer (highest preference, community 65000:100)
! Region: EMEA (community 65000:2000)
route-map CUSTOMER_EMEA permit 10
match ip address prefix-list 192.0.2.0/24
set community 65000:100 additive
set community 65000:2000 additive
set local-preference 200
! Input: Prefix 198.51.100.0/24 tagged with "route:peer:americas"
! Relationship: peer (medium preference, community 65000:200)
! Region: Americas (community 65000:3000)
route-map PEER_AMERICAS permit 10
match ip address prefix-list 198.51.100.0/24
set community 65000:200 additive
set community 65000:3000 additive
set local-preference 150
! Input: Prefix 203.0.113.0/24 tagged with "route:transit:apac"
! Relationship: transit (lowest preference, community 65000:300)
! Region: APAC (community 65000:4000)
route-map TRANSIT_APAC permit 10
match ip address prefix-list 203.0.113.0/24
set community 65000:300 additive
set community 65000:4000 additive
set local-preference 100

Tags work well for simple flags and basic categorisation, but they have significant limitations for complex networks. Since tags are just strings, there’s no data type validation. “ospf_cost=five-hundred” is valid as a tag name but breaks your template at render time. You can’t model relationships or enforce constraints. Multiple BGP peers become “bgp_peer_1=10.0.0.1”, “bgp_peer_2=10.0.0.2” with no way to validate the IP addresses or ensure they belong together.
Tag management becomes unwieldy at scale. You need to pre-create every possible tag combination, so “ospf_cost=500”, “ospf_cost=501”, “ospf_cost=502” all exist as separate tags. Your Jinja2 templates grow in complexity with tag parsing logic, error handling for malformed values, and type conversions. Auditing becomes difficult. Finding all OSPF costs above 100 requires pulling everything and parsing tag strings. The hierarchical tag example with routing policies can work for ISP-style use cases with well-defined community structures, but it adds template complexity quickly. Whether tags are the right choice always comes down to your specific use case.
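To make that audit pain concrete, here’s a sketch (plain Python over hypothetical tag names, not a NetBox API) of what “find all OSPF costs above 100” ends up looking like once the values live inside tag strings:

```python
def audit_ospf_costs(tag_names, threshold=100):
    """Find ospf_cost tags above a threshold; returns (tag, cost) pairs.
    Malformed values get cost None so they can be flagged separately."""
    findings = []
    for name in tag_names:
        if not name.startswith("ospf_cost="):
            continue
        _, _, raw = name.partition("=")
        try:
            cost = int(raw)
        except ValueError:
            # "ospf_cost=five-hundred" is a legal tag name but useless data
            findings.append((name, None))
            continue
        if cost > threshold:
            findings.append((name, cost))
    return findings

tags = ["ospf_cost=500", "ospf_cost=50", "ospf_cost=five-hundred", "prod"]
print(audit_ospf_costs(tags))  # [('ospf_cost=500', 500), ('ospf_cost=five-hundred', None)]
```

Every consumer of these tags has to carry this parsing and error handling, which is exactly the template complexity the paragraph above describes.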
4.2 Custom fields
Custom fields add typed attributes to existing NetBox objects without requiring new models. They’re ideal for simple, flat attributes that address NetBox’s gaps: secret vault references (SNMP community path, TACACS+ key path), router IDs for OSPF/BGP, software EOL dates for lifecycle reporting, support contract expiry dates for renewal tracking, and similar single-value parameters. Unlike tags, which are string labels, custom fields enforce data types (integer, date, URL, JSON, boolean) with syntactic validation. Tags group and label; custom fields store typed data.
Custom fields are created in the NetBox UI (Customization → Custom Fields) or via API. You specify the object type (device, interface, circuit, etc.), data type (text, integer, boolean, date, URL, JSON, selection lists, among others), and whether it’s required. Once created, the field appears in the object’s edit form and is accessible via API and templates.
from pynetbox import api
# Connect to NetBox
nb = api(url='http://netbox.local', token='your-api-token')
# Accessing custom fields via API
device = nb.dcim.devices.get(name="router1")
# Lifecycle tracking
support_expiry = device.custom_fields['support_contract_expiry'] # 2026-03-15
software_eol = device.custom_fields['software_eol_date'] # 2025-12-31
# Protocol configuration
router_id = device.custom_fields['router_id'] # 192.0.2.1
# Secret references (not secrets themselves)
snmp_path = device.custom_fields['snmp_secret_path'] # "secret/data/network/snmp"
tacacs_path = device.custom_fields['tacacs_key_path'] # "secret/data/network/tacacs"

In Jinja2 templates, custom fields provide typed data for configuration generation:
! Router ID from custom field
router ospf 1
router-id {{ device.custom_fields.router_id }}
router bgp 65000
bgp router-id {{ device.custom_fields.router_id }}

Custom fields are particularly useful for storing secret references (not the secrets themselves). Store vault paths or ARNs in custom fields, and retrieve actual secrets from HashiCorp Vault or AWS Secrets Manager during config generation:
# Custom field stores the vault path
snmp_path = device.custom_fields['snmp_secret_path'] # "secret/data/network/snmp"
tacacs_path = device.custom_fields['tacacs_key_path'] # "secret/data/network/tacacs"
# Retrieve actual secrets from Vault at render time
snmp_community = vault_client.read(snmp_path)['data']['community']
tacacs_key = vault_client.read(tacacs_path)['data']['key']
# Use in config generation
config = f"""
snmp-server community {snmp_community} RO
tacacs server TACACS1
address ipv4 10.0.0.100
key {tacacs_key}
"""

Custom fields improve on tags by providing data type validation and native API querying. You can filter devices by support_contract_expiry < today or software_eol_date < 2026-01-01 for lifecycle reporting. NetBox enforces valid integers, dates, and URLs at input time rather than breaking templates at render time.
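NetBox also exposes custom fields as API filter parameters (the `cf_<name>` filters; the exact lookup expressions supported depend on your NetBox version). The client-side equivalent of an expiry report is simple once the values are typed. A sketch over API-shaped device dicts (the inventory data here is made up for illustration):

```python
from datetime import date, timedelta

def expiring_support(devices, today, horizon_days=90):
    """Return names of devices whose support contract expires before the horizon.
    'devices' mimics API results: dicts carrying a typed date custom field."""
    horizon = today + timedelta(days=horizon_days)
    return [
        d["name"] for d in devices
        if (expiry := d["custom_fields"].get("support_contract_expiry"))
        and expiry < horizon
    ]

inventory = [
    {"name": "router1", "custom_fields": {"support_contract_expiry": date(2026, 3, 15)}},
    {"name": "router2", "custom_fields": {"support_contract_expiry": date(2030, 1, 1)}},
    {"name": "router3", "custom_fields": {}},  # no contract data recorded
]
print(expiring_support(inventory, today=date(2026, 1, 1)))  # ['router1']
```

With tags, the same report would require string parsing; with a date custom field it’s a plain comparison.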
However, custom fields are flat key-value pairs attached to objects. You can’t model relationships between data or enforce cross-field validation. Want to ensure OSPF hello interval is less than dead interval? Not possible with custom fields alone.
Need to store multiple BGP peers with their own attributes like peer IP, remote AS, password, and route policies? You end up with “bgp_peer_1_ip”, “bgp_peer_1_as”, “bgp_peer_2_ip”, “bgp_peer_2_as” which is only marginally better than tags. No foreign keys, no cascading updates, no referential integrity.
For complex structured data with relationships and business logic, move to config contexts (if read-only JSON suffices) or plugins (if you need a proper data model with validation and relationships).
4.3 Config contexts
JSON data attached to devices and virtual machines based on their attributes. Use these for configuration parameters that vary by role, site, or platform.
Config contexts are stored as JSON in the database. When you view or edit them in the UI, you can toggle between JSON and YAML representations for easier reading and editing, but the underlying storage and retrieval is always JSON.
Config contexts are created in the NetBox UI (Configuration → Config Contexts) or via API. You define the JSON data, assignment rules (which devices receive this context based on filters like site, role, platform, or tenant), and a weight value for controlling merge priority.
Here are two examples showing different assignment scopes:
Global NTP context (weight 100, no filters) applies to all devices:
{
  "ntp_servers": ["10.0.0.1", "10.0.0.2"],
  "syslog_servers": ["10.0.0.10"]
}

BGP context (weight 400, role filter “spine”) assigned only to spine devices:
{
  "bgp": {
    "local_as": 65001,
    "router_id": "192.0.2.1",
    "peers": [
      {
        "ip": "192.0.2.2",
        "remote_as": 65002
      },
      {
        "ip": "192.0.2.3",
        "remote_as": 65003
      }
    ]
  }
}

Creating these same contexts via API with pynetbox:
from pynetbox import api
nb = api(url='http://netbox.local', token='your-api-token')
# Create global NTP context (no filters)
nb.extras.config_contexts.create(
    name="Global NTP and Syslog",
    weight=100,
    data={
        "ntp_servers": ["10.0.0.1", "10.0.0.2"],
        "syslog_servers": ["10.0.0.10"]
    }
)
# Create BGP context for spine devices
spine_role = nb.dcim.device_roles.get(name="spine")
nb.extras.config_contexts.create(
    name="Spine BGP Configuration",
    weight=400,
    roles=[spine_role.id],
    data={
        "bgp": {
            "local_as": 65001,
            "router_id": "192.0.2.1",
            "peers": [
                {"ip": "192.0.2.2", "remote_as": 65002},
                {"ip": "192.0.2.3", "remote_as": 65003}
            ]
        }
    }
)

Config contexts merge and override based on weight. Weight is configured when creating the config context (not in the JSON data itself). Lower weights apply first, higher weights override. Use a consistent weight strategy to avoid conflicts:
Weight strategy:
100-199: Global defaults (NTP, syslog, DNS for all devices)
200-299: Regional settings (region-specific servers)
300-399: Site-specific (site NTP, local syslog)
400-499: Device role (spine BGP config, leaf MLAG config)
500-599: Platform (ios vs eos vs junos specific parameters)
600-699: Cluster/Pod (pod-specific settings)
700-799: Tag-based (production vs staging differences)
800-899: Special cases (temporary overrides, migrations)
900-999: Emergency overrides (break-glass changes)
1000: Local context (device-specific, always wins)

A device with role “spine” in site “london-dc1” would receive merged data from multiple contexts:
- Global context (weight 100): NTP and syslog servers
- Site context (weight 300): overrides NTP with London-local servers
- Role context (weight 400): adds BGP configuration
The final merged context contains all data from applicable contexts. When the same JSON key (like “ntp_servers”) appears in multiple contexts, the higher-weight value wins.
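The merge logic itself is straightforward to approximate. Here’s a sketch (not NetBox’s actual implementation) of weight-ordered merging in which nested dictionaries combine recursively and conflicting keys resolve to the higher-weight context:

```python
def merge_config_contexts(contexts):
    """Merge contexts lowest weight first; later (higher-weight) values win on conflict.
    Nested dictionaries merge recursively; other values are replaced outright."""
    def deep_merge(base, override):
        merged = dict(base)
        for key, value in override.items():
            if isinstance(merged.get(key), dict) and isinstance(value, dict):
                merged[key] = deep_merge(merged[key], value)
            else:
                merged[key] = value
        return merged

    result = {}
    for ctx in sorted(contexts, key=lambda c: c["weight"]):
        result = deep_merge(result, ctx["data"])
    return result

contexts = [
    {"weight": 100, "data": {"ntp_servers": ["10.0.0.1", "10.0.0.2"]}},  # global
    {"weight": 300, "data": {"ntp_servers": ["10.1.0.1", "10.1.0.2"]}},  # London site
    {"weight": 400, "data": {"bgp": {"local_as": 65001}}},               # spine role
]
print(merge_config_contexts(contexts))
# {'ntp_servers': ['10.1.0.1', '10.1.0.2'], 'bgp': {'local_as': 65001}}
```

The site context (weight 300) replaces the global NTP servers wholesale, while the role context (weight 400) adds its BGP block without disturbing either.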
In Jinja2 templates, config context data is accessed via the config_context attribute:
{% for ntp in device.config_context.ntp_servers %}
ntp server {{ ntp }}
{% endfor %}
router bgp {{ device.config_context.bgp.local_as }}
bgp router-id {{ device.config_context.bgp.router_id }}
{% for peer in device.config_context.bgp.peers %}
neighbor {{ peer.ip }} remote-as {{ peer.remote_as }}
{% endfor %}

Config contexts improve on custom fields by supporting nested structured data. You can model complex hierarchies like BGP configuration with peers, address families, and route policies all in one document. The cascading weight-based merge lets you define global defaults and override them at increasingly specific levels (global → region → site → role → device). This works brilliantly for hierarchical configuration that applies broadly but needs site-specific or device-specific tweaks.
However, config contexts are read-only data stores with no native validation beyond JSON syntax. NetBox won’t prevent you from setting “local_as”: “sixty-five-thousand” or ensure BGP router IDs are valid IP addresses. There’s no schema enforcement, no referential integrity, no relationships to other NetBox objects. You can’t query “show me all devices with BGP AS 65001” without pulling every device’s config context and parsing the JSON. The UI is a basic text editor (with YAML view for convenience) with no autocomplete, no dropdowns for valid values, no field-level validation. Config contexts also don’t support versioning beyond NetBox’s generic change log.
Use config contexts for hierarchical configuration data that your templates consume. For data requiring validation, relationships to NetBox objects, or a structured UI for operations teams, you need plugins with proper data models.
4.4 Plugins
Plugins are the most powerful extension mechanism. They’re full Django applications that run inside NetBox, which means they can add entirely new data models, views, API endpoints, and business logic. Unlike custom fields or config contexts that work within NetBox’s existing structure, plugins extend NetBox’s data model itself.
When you install a plugin, it creates new database tables, adds menu items to the UI, registers new API endpoints, and integrates seamlessly with NetBox’s permissions and authentication. From the user’s perspective, plugin features look and feel like native NetBox functionality.
The community has built plugins that address many Section 3 limitations:
- netbox-acls: ACL and firewall rule management
- netbox-bgp: BGP sessions, communities, routing policies, prefix lists
- netbox-inventory: Spare parts and warehouse inventory tracking
- netbox-floorplan-plugin: Visual data centre floor plan representation
- netbox-attachments: Document management for vendor quotes, maintenance contracts, floor plans
The plugin framework is Django, so if you know Django, you can build custom plugins for your specific needs. Plugins can hook into NetBox’s signal system to react to changes, extend existing objects with custom attributes, and create entirely new workflows.
Plugins solve the limitations of tags, custom fields, and config contexts by providing proper data models with validation, relationships, and custom UIs. They give you complete control over your data structure and business logic. You can enforce that hello intervals are less than dead intervals, model BGP sessions as first-class objects with foreign keys to devices and IP addresses, build dropdown menus for valid values, and create custom API endpoints for complex queries.
However, plugins require significant development and maintenance effort. You’re writing Django applications, which means Python code, database migrations, unit tests, and keeping up with NetBox API changes across versions. The plugin you built for NetBox 3.5 might need code changes for 3.6, and potentially significant rework for 4.0. You need developers who understand Django, the NetBox plugin framework, and your network domain. For small teams or simple use cases, this overhead often isn’t justified.
Plugins also couple your data model to NetBox’s lifecycle. Your custom plugin data lives in NetBox’s database with plugin-specific schemas, requiring custom migration scripts if you move away from NetBox.
Use plugins when tags, custom fields, and config contexts genuinely can’t meet your needs. For most networks, a combination of config contexts for structured data and custom fields for typed attributes covers the majority of extension requirements without the maintenance burden.
The four mechanisms above focus on storing and structuring data. The next two focus on automation: reacting to changes and performing bulk operations.
4.5 Webhooks
Webhooks enable event-driven automation by sending HTTP requests when NetBox objects are created, updated, or deleted. Use them to trigger external systems like CI/CD pipelines, monitoring platforms, or ticketing systems based on NetBox changes.
Webhooks are configured in the NetBox UI (Operations → Integrations → Webhooks) or via API. You define which NetBox objects trigger the webhook (devices, interfaces, IP addresses), which events fire it (create, update, delete), the HTTP endpoint to receive the payload, and optional conditions to filter when webhooks execute.
When a webhook fires, NetBox sends an HTTP POST with JSON payload containing the event type, timestamp, username, and object data:
{
  "event": "created",
  "timestamp": "2025-11-23T10:30:00Z",
  "model": "dcim.device",
  "username": "ismael",
  "data": {
    "id": 123,
    "name": "spine-01",
    "device_type": { "model": "Arista 7280SR" },
    "site": { "name": "london-dc1" },
    "status": "active"
  }
}

You can customize the HTTP method (POST, GET, PUT, PATCH, DELETE), add custom headers for authentication, use Jinja2 templates to transform the payload, and control SSL verification. For security, include authentication tokens in custom headers that your receiving endpoint validates before processing.
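On the receiving side, you should authenticate payloads before acting on them. When a webhook is configured with a secret, NetBox signs the request body with HMAC-SHA512 and sends the hex digest in the X-Hook-Signature header (verify the header name and algorithm against your version’s docs). A minimal verification sketch, with a placeholder secret:

```python
import hashlib
import hmac
import json

WEBHOOK_SECRET = b"change-me"  # placeholder; must match the secret set on the NetBox webhook

def verify_signature(body: bytes, signature: str) -> bool:
    """Recompute the HMAC-SHA512 of the raw body and compare in constant time."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulate a delivery: NetBox computes the digest over the exact request body
body = json.dumps({"event": "created", "model": "dcim.device"}).encode()
good = hmac.new(WEBHOOK_SECRET, body, hashlib.sha512).hexdigest()
print(verify_signature(body, good))         # True
print(verify_signature(body + b" ", good))  # False: any body change invalidates it
```

Verify against the raw bytes as received; re-serialising the parsed JSON can change whitespace and break the comparison.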
Common webhook patterns:
- Monitoring integration: Device status changes trigger monitoring system updates (add/remove from monitoring).
- DNS/DHCP auto-update: IP address assignments trigger DNS record creation and DHCP reservation updates in your DNS/DHCP servers.
- Config regeneration: Device changes trigger CI/CD pipelines to regenerate device configs and push to devices.
- Slack/Teams notifications: Critical changes alert team channels (circuit decommissions, production device deletions, IP allocation conflicts).
- Audit logging: All NetBox changes forward to centralised logging (Splunk, Elasticsearch) for compliance.
Webhooks are stateless and fire-and-forget. NetBox doesn’t retry failed deliveries by default, so your receiving endpoint needs to handle failures with its own queuing. If the endpoint returns a non-2xx status code or times out, the webhook fails silently. Check NetBox’s background task logs to debug delivery failures.
The reliability requirement means webhooks work best for triggering asynchronous workflows that can handle occasional missed events. For critical integrations requiring guaranteed delivery, consider implementing a polling mechanism that queries NetBox’s change log or using a message queue between NetBox and your external systems.
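NetBox’s change log is exposed through the REST API (its endpoint location has moved between NetBox versions, so check your docs), and its monotonically increasing IDs make a natural polling cursor. The fetch itself is omitted here; the cursor handling is the part worth getting right:

```python
def unseen_changes(changes, last_seen_id):
    """Filter fetched change-log entries down to those not yet processed,
    returning them in id order along with the advanced cursor."""
    new = sorted(
        (c for c in changes if c["id"] > last_seen_id),
        key=lambda c: c["id"],
    )
    cursor = new[-1]["id"] if new else last_seen_id
    return new, cursor

# A page fetched from the change-log endpoint (illustrative data, arbitrary order)
fetched = [
    {"id": 41, "action": "update"},
    {"id": 43, "action": "create"},
    {"id": 42, "action": "delete"},
]
new, cursor = unseen_changes(fetched, last_seen_id=41)
print([c["id"] for c in new], cursor)  # [42, 43] 43
```

Persist the cursor only after each entry is processed; that way a crash mid-batch replays changes instead of dropping them, trading the webhook’s missed-event risk for at-least-once delivery.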
4.6 Custom scripts
Custom scripts are Python programs that run inside NetBox with unrestricted Django ORM access, enabling bulk operations, compliance audits, and provisioning workflows. Unlike webhooks that fire automatically in response to events, scripts are manually triggered from the NetBox UI (or via REST API) and can accept user input through form fields. Scripts bypass the REST API layer entirely, giving them direct database access and the ability to perform operations that might not be exposed through the API. Use them for trusted automation tasks that need to run within NetBox’s context with full permissions.
Scripts are stored in your NetBox deployment’s scripts directory and appear in the UI under Operations → Scripts. They can define input parameters using field types like StringVar, IntegerVar, BooleanVar, and ChoiceVar to create form inputs for users. The commit parameter lets users run scripts in dry-run mode first to preview changes before applying them.
Common use cases:
- Bulk imports: Import large datasets from external sources. Example: migrate 500 devices from a spreadsheet during a data centre consolidation, validating IP conflicts and rack capacity before creation.
- Compliance audits: Find objects missing required data. Example: generate a report of all production devices without warranty expiry dates for procurement review.
- Automated provisioning: Bootstrap new infrastructure. Example: create a leaf switch, assign rack position, generate 48 interfaces based on device type, allocate management IP from the correct prefix.
- Cleanup operations: Bulk delete or modify stale data. Example: remove devices in “decommissioned” status for over 90 days along with their interfaces and IP allocations.
Here’s an example compliance audit script that generates a comprehensive lifecycle report with CSV export, vendor analysis, and HTML visualization:
"""
Device Lifecycle Report Script for NetBox
==========================================
This script generates comprehensive lifecycle compliance reports for devices,
checking software EOL dates and warranty expiry dates from custom fields.
Provides multiple output formats and detailed analysis.
Required Custom Fields:
- software_eol_date: Date field for software end-of-life
- support_contract_expiry: Date field for warranty/support expiry
"""
from dcim.models import Device, Site
from extras.scripts import Script, ObjectVar, ChoiceVar, BooleanVar
from datetime import date, datetime, timedelta
from collections import Counter
import csv
from io import StringIO
class LifecycleReport(Script):
    """
    Generate lifecycle compliance reports for network devices.
    Identifies devices with expired or soon-to-expire software and warranties.
    """

    class Meta:
        name = "Device Lifecycle Report"
        description = "Generate lifecycle compliance report with EOL and warranty tracking"
        commit_default = False

    site = ObjectVar(
        model=Site,
        description="Filter by specific site (leave empty for all sites)",
        required=False,
        query_params={'status': 'active'}
    )
    report_type = ChoiceVar(
        choices=(
            ('eol', 'End of Life Only'),
            ('warranty', 'Warranty Expiry Only'),
            ('both', 'Complete Report (Both)')
        ),
        default='both',
        description="Select which lifecycle aspects to report on"
    )
    warning_days = ChoiceVar(
        choices=(
            ('30', '30 days'),
            ('60', '60 days'),
            ('90', '90 days'),
            ('180', '180 days'),
        ),
        default='90',
        description="Warning threshold for upcoming expirations"
    )
    export_csv = BooleanVar(
        default=False,
        description="Generate CSV export of results"
    )

    def run(self, data, commit):
        """Main execution method for the script."""
        # Initialize tracking lists
        self.eol_expired = []
        self.eol_warning = []
        self.warranty_expired = []
        self.warranty_warning = []
        self.no_data = []

        # Set up date calculations (ChoiceVar values are submitted as strings)
        today = date.today()
        warning_threshold = today + timedelta(days=int(data['warning_days']))

        # Query devices with optimization
        devices = Device.objects.filter(status='active')
        if data.get('site'):
            devices = devices.filter(site=data['site'])
            self.log_info(f"Filtering for site: {data['site'].name}")
        devices = devices.select_related(
            'device_type__manufacturer',
            'site',
            'device_role',
            'platform'
        )
        total_devices = devices.count()
        self.log_info(f"Analyzing {total_devices} active devices...")

        # Process each device
        for device in devices:
            software_eol = device.cf.get('software_eol_date')
            warranty_expiry = device.cf.get('support_contract_expiry')
            if not software_eol and not warranty_expiry:
                self.no_data.append(device)
                continue

            # Check EOL status
            if data['report_type'] in ['eol', 'both'] and software_eol:
                if software_eol < today:
                    days_expired = (today - software_eol).days
                    self.eol_expired.append({
                        'device': device,
                        'date': software_eol,
                        'days_expired': days_expired
                    })
                    self.log_failure(
                        f"{device.name}: Software EOL expired {software_eol} "
                        f"({days_expired} days ago)"
                    )
                elif software_eol < warning_threshold:
                    days_remaining = (software_eol - today).days
                    self.eol_warning.append({
                        'device': device,
                        'date': software_eol,
                        'days_remaining': days_remaining
                    })
                    self.log_warning(
                        f"{device.name}: Software EOL expires {software_eol} "
                        f"({days_remaining} days remaining)"
                    )

            # Check warranty status
            if data['report_type'] in ['warranty', 'both'] and warranty_expiry:
                if warranty_expiry < today:
                    days_expired = (today - warranty_expiry).days
self.warranty_expired.append({
'device': device,
'date': warranty_expiry,
'days_expired': days_expired
})
self.log_failure(
f"{device.name}: Warranty expired {warranty_expiry} "
f"({days_expired} days ago)"
)
elif warranty_expiry < warning_threshold:
days_remaining = (warranty_expiry - today).days
self.warranty_warning.append({
'device': device,
'date': warranty_expiry,
'days_remaining': days_remaining
})
self.log_warning(
f"{device.name}: Warranty expires {warranty_expiry} "
f"({days_remaining} days remaining)"
)
# Generate summary
self._generate_summary(data, total_devices)
# Generate CSV if requested
if data['export_csv']:
self._generate_csv_export(data)
# Vendor analysis
self._generate_vendor_analysis(data)
def _generate_summary(self, data, total_devices):
"""Generate summary section of the report."""
total_expired = len(self.eol_expired) + len(self.warranty_expired)
total_warning = len(self.eol_warning) + len(self.warranty_warning)
self.log_info("=" * 60)
self.log_info("DEVICE LIFECYCLE COMPLIANCE REPORT")
self.log_info("=" * 60)
self.log_info(f"Total Devices: {total_devices}")
self.log_info(f"EOL Expired: {len(self.eol_expired)}")
self.log_info(f"EOL Warning: {len(self.eol_warning)}")
self.log_info(f"Warranty Expired: {len(self.warranty_expired)}")
self.log_info(f"Warranty Warning: {len(self.warranty_warning)}")
self.log_info(f"Missing Data: {len(self.no_data)}")
if total_expired == 0 and total_warning == 0:
self.log_success("All devices with lifecycle data are compliant!")
elif total_expired > 0:
self.log_failure(
f"CRITICAL: {total_expired} devices have expired lifecycle components"
)
def _generate_csv_export(self, data):
"""Generate CSV export of all findings."""
output = StringIO()
writer = csv.writer(output)
writer.writerow([
'Device Name', 'Site', 'Manufacturer', 'Model',
'Issue Type', 'Date', 'Status', 'Days'
])
# Write all issues to CSV
for item in self.eol_expired:
device = item['device']
writer.writerow([
device.name,
device.site.name if device.site else 'N/A',
device.device_type.manufacturer.name,
device.device_type.model,
'Software EOL',
item['date'].strftime('%Y-%m-%d'),
'EXPIRED',
f"-{item['days_expired']}"
])
for item in self.warranty_expired:
device = item['device']
writer.writerow([
device.name,
device.site.name if device.site else 'N/A',
device.device_type.manufacturer.name,
device.device_type.model,
'Warranty',
item['date'].strftime('%Y-%m-%d'),
'EXPIRED',
f"-{item['days_expired']}"
])
csv_content = output.getvalue()
self.log_info("\n=== CSV EXPORT ===")
self.log_info(f"<pre>{csv_content}</pre>")
def _generate_vendor_analysis(self, data):
"""Generate analysis by vendor/manufacturer."""
all_issues = []
all_issues.extend([item['device'] for item in self.eol_expired])
all_issues.extend([item['device'] for item in self.eol_warning])
all_issues.extend([item['device'] for item in self.warranty_expired])
all_issues.extend([item['device'] for item in self.warranty_warning])
if all_issues:
vendor_counts = Counter(
device.device_type.manufacturer.name
for device in all_issues
)
self.log_info("\n=== VENDOR ANALYSIS ===")
for vendor, count in vendor_counts.most_common():
percentage = (count / len(all_issues)) * 100
self.log_info(f"{vendor}: {count} devices ({percentage:.1f}%)")Scripts excel at operations requiring complex business logic that spans multiple object types. You can query devices, check interface availability, validate IP allocations, create relationships, and generate detailed logs all within a single transaction. The Django ORM gives you the full power of Python with proper data validation and referential integrity.
The key difference from webhooks: scripts are Python code that executes inside NetBox with direct database access, whereas webhooks just send HTTP messages to external systems. Scripts perform operations within NetBox (create devices, modify interfaces, generate reports), while webhooks notify other systems to take action. Use scripts for operations that manipulate NetBox data directly, webhooks to integrate NetBox changes with external tools.
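On the webhook side, the receiving system should verify that a request really came from NetBox. When a secret is configured on a webhook, NetBox signs each request with an HMAC-SHA512 hex digest of the raw body in the `X-Hook-Signature` header; the sketch below shows the verification using only the standard library (the secret and body are made-up examples):

```python
import hashlib
import hmac

def verify_netbox_signature(secret: str, body: bytes, signature: str) -> bool:
    """Check the X-Hook-Signature header NetBox sends with a webhook.

    NetBox computes an HMAC-SHA512 hex digest of the raw request body,
    keyed with the secret configured on the webhook.
    """
    expected = hmac.new(secret.encode(), body, hashlib.sha512).hexdigest()
    # compare_digest avoids leaking timing information
    return hmac.compare_digest(expected, signature)

# Simulate what NetBox would send (hypothetical secret and payload)
secret = "my-webhook-secret"
body = b'{"event": "updated", "model": "device"}'
sig = hmac.new(secret.encode(), body, hashlib.sha512).hexdigest()
ok = verify_netbox_signature(secret, body, sig)
```

Your actual receiver (a Flask route, a Lambda, etc.) would read the raw request body and the header, then reject anything that fails this check before acting on the payload.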