
Walk into the production office of any major touring concert production — a Taylor Swift Eras Tour-scale operation, or even a well-funded theatrical tour — and you’ll find a role that didn’t meaningfully exist in live events twenty years ago: the dedicated show network engineer. Not a video engineer who handles networking on the side. Not an IT contractor borrowed from the venue. A specialist whose entire job, for the duration of that tour, is the design, deployment, maintenance, and troubleshooting of the production’s data infrastructure. Here’s why that role has become non-negotiable at scale.

The Networked Show: A Brief History

In the early 2000s, a touring show’s ‘network’ was largely conceptual — maybe a few DMX universes daisy-chained across the rig and a laptop plugged directly into a grandMA console for visualisation. By 2010, Ethernet-based control protocols like Art-Net and sACN were replacing DMX for lighting, Dante was beginning its takeover of audio infrastructure, and NDI was on the horizon for video. By 2018, a large production might have eight separate Dante domains, a media server cluster running disguise, an LD Systems control network, a pyrotechnic firing system on isolated Ethernet, a crew comms network, and a separate show-critical data VLAN — all simultaneously. No single department head has the bandwidth to manage that topology while also doing their actual job.

What a Show Network Engineer Actually Does

The role is fundamentally about risk management through infrastructure. A show network engineer designs the IP addressing scheme before the first truck is loaded, ensuring that lighting, video, audio, rigging control, and automation systems occupy separate, non-conflicting subnets. They spec and procure the managed switches — typically Cisco Catalyst 9000 series or Luminex GigaCore units designed specifically for stage environments — and configure VLANs, QoS policies, storm control, and redundant uplinks that protect show-critical traffic from disruption.
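That pre-load-in subnet planning can be sanity-checked in code. The following is a minimal sketch, assuming a hypothetical addressing plan (the department names and CIDR blocks are illustrative, not from any real tour), using Python's standard-library `ipaddress` module to verify that no two departments' subnets overlap:

```python
import ipaddress
from itertools import combinations

# Hypothetical per-department addressing plan; real tours will differ.
SUBNETS = {
    "lighting":   "10.10.10.0/24",   # Art-Net / sACN
    "audio_pri":  "10.10.20.0/24",   # Dante primary
    "audio_sec":  "10.10.21.0/24",   # Dante secondary
    "video":      "10.10.30.0/24",   # media server cluster
    "automation": "10.10.40.0/24",   # rigging / automation control
}

def check_overlaps(subnets: dict[str, str]) -> list[tuple[str, str]]:
    """Return every pair of department subnets that overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in subnets.items()}
    return [
        (a, b)
        for (a, na), (b, nb) in combinations(nets.items(), 2)
        if na.overlaps(nb)
    ]

if __name__ == "__main__":
    conflicts = check_overlaps(SUBNETS)
    print(conflicts or "Addressing plan is conflict-free")
```

Running a check like this against the documented plan before every leg is cheap insurance against the class of conflict that otherwise surfaces at doors.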

On show day, they’re the person who catches the Dante audio dropout before it becomes a dead monitor mix, identifies the IP conflict between a rental grandMA3 full-size that shipped with factory defaults and the tour’s existing addressing, and troubleshoots why the disguise cluster heartbeat is throwing warnings four minutes before doors. They’re also frequently the person who interfaces with venue IT to negotiate VLAN access on house infrastructure, navigates firewall rules, and prevents well-meaning venue staff from plugging an unauthorised device into the production network.
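The rental-desk IP conflict described above is exactly the kind of thing a small inventory check catches before soundcheck. Here is a hedged sketch, assuming a hypothetical patch list of device names and static addresses (the entries below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical patch list: device name -> configured static IP.
DEVICES = {
    "grandMA3-tour":   "10.10.10.10",
    "grandMA3-rental": "10.10.10.10",  # rental desk not yet re-addressed
    "media-server-1":  "10.10.30.11",
    "dante-stagebox":  "10.10.20.15",
}

def find_ip_conflicts(devices: dict[str, str]) -> dict[str, list[str]]:
    """Map each IP claimed by more than one device to the devices claiming it."""
    by_ip: dict[str, list[str]] = defaultdict(list)
    for name, ip in devices.items():
        by_ip[ip].append(name)
    return {ip: names for ip, names in by_ip.items() if len(names) > 1}
```

In practice an engineer would populate the inventory from ARP tables or LLDP neighbour data rather than by hand, but the duplicate-detection logic is the same.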

The Tools of the Trade

The modern show network engineer’s kit list reads like a small-scale data centre deployment. Beyond the managed switches themselves, expect to find fibre media converters for long runs — Moxa or Veracity units are touring favourites — along with SFP modules in various wavelengths depending on fibre type. A network tap and a laptop running Wireshark are always present for packet analysis. PTP grandmaster clocks — often a Meinberg LANTIME or the Luminex GigaCore’s built-in PTP capability — synchronise time-sensitive protocols across the rig.

Software tools include SolarWinds or PRTG Network Monitor for real-time traffic visibility, Dante Controller for audio routing, and custom Python scripts that some engineers write to automate switch configuration across multi-venue deployments. The goal is a repeatable, documented network configuration that can be deployed in four hours regardless of which city the tour wakes up in.
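The kind of automation script mentioned above often amounts to rendering the same switch configuration per venue from a single template. The sketch below is a simplified illustration, not any particular tour’s tooling: the template fragment is loosely IOS-style, and the venue codes and VLAN details are invented:

```python
# Hypothetical template fragment in IOS-style syntax; a real per-tour
# template would carry the full VLAN, QoS, and trunk configuration.
VLAN_TEMPLATE = """\
hostname {hostname}
vlan {vlan_id}
 name {vlan_name}
interface {uplink}
 switchport mode trunk
 switchport trunk allowed vlan {vlan_id}
"""

def render_config(hostname: str, vlan_id: int, vlan_name: str, uplink: str) -> str:
    """Render one switch's configuration from the shared template."""
    return VLAN_TEMPLATE.format(
        hostname=hostname, vlan_id=vlan_id, vlan_name=vlan_name, uplink=uplink
    )

# Same template, one config per venue (venue codes are made up).
VENUES = ["ldn-01", "par-01", "ber-01"]
CONFIGS = {
    v: render_config(f"sw-{v}", 20, "DANTE-PRI", "GigabitEthernet1/0/48")
    for v in VENUES
}
```

Generating every venue’s config from one reviewed template is what makes the four-hour, any-city deployment repeatable: the only thing that changes between cities is the data fed into the template, never the template itself.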

Why Department Heads Can’t Double Up

The honest answer is workload and expertise depth. A video systems engineer managing a disguise cluster across a dozen outputs cannot simultaneously be running Wireshark packet captures to debug multicast flooding on the venue’s unmanaged switch. A FOH audio engineer chasing a ground loop through the Dante network during soundcheck has neither the time nor typically the training to reconfigure IGMP querier settings on a Cisco switch. The show network engineer exists because the networks are now too complex, too critical, and too cross-departmental for any single specialist to absorb as a secondary responsibility.

Redundancy by Design

Network engineers on major tours build redundancy into every layer. Spanning Tree Protocol (STP), or its faster cousin RSTP, creates automatic failover paths if a switch fails. LAGs (link aggregation groups) bond multiple physical uplinks for both bandwidth and redundancy. VRRP (Virtual Router Redundancy Protocol) provides automatic gateway failover. On Dante networks, the secondary Dante interface provides a hot-spare audio path that activates within milliseconds of a primary link failure. These aren’t theoretical protections — on a two-hour live show with no breaks, they’re what separates a recoverable incident from a broadcast-level failure.
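The primary/secondary pattern common to these mechanisms reduces to a simple decision rule. The following is a toy model for illustration only — it is not how Dante, RSTP, or VRRP are actually implemented, each of which has its own election and convergence behaviour:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    name: str
    up: bool

def active_path(primary: Link, secondary: Link) -> Optional[Link]:
    """Prefer the primary path; fail over to the secondary if it is up."""
    if primary.up:
        return primary
    if secondary.up:
        return secondary
    return None  # both paths down: a show-stopping failure
```

The engineering effort lies not in this rule but in ensuring the two paths share no single point of failure — separate switches, separate cable runs, separate power.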

Budget Reality: When Can You Justify the Role?

A dedicated show network engineer becomes economically justifiable — and arguably mandatory — once a production crosses certain thresholds: more than four Dante domains, a multi-machine video cluster, automated rigging on a control network, or any system where a network failure stops the show. Below those thresholds, a well-configured Luminex GigaCore 14R with a solid VLAN template managed by an experienced video or systems engineer may suffice. Above them, the cost of a dedicated specialist is trivially small against the cost of a show stoppage.
