Low bandwidth environments are the silent killers of live video production. You can spec the most powerful media server on the market — a disguise gx 2c or a Green Hippo Hippotizer Karst+ — and still watch your carefully crafted show collapse because the venue’s IT infrastructure was built for checking emails in 1998. This is the reality touring video departments face more often than not, and it demands a level of pre-production discipline that most clients never see.

A Brief History: When Video Went Network

The shift from SDI (Serial Digital Interface) point-to-point cabling to IP-based video distribution began accelerating in the early 2010s. By 2015, protocols like NDI (Network Device Interface) from NewTek had entered the mainstream production vocabulary, promising the freedom to route 1080p video over standard gigabit Ethernet. The problem? Not every venue updated its infrastructure alongside the industry’s evolution. Many theatres, hotel ballrooms, and convention centres still run on switches that choke under multicast traffic or lack proper IGMP snooping configuration — the network equivalent of trying to push a freight train through a garden gate.

The Bandwidth Math Nobody Wants to Do

Let’s get concrete. A single uncompressed 4K/60fps video stream requires roughly 12 Gbps of throughput (hence 12G-SDI); even 4K/30 needs around 6 Gbps. Compressed NDI streams are far lighter but still demand 100–250 Mbps per channel. Run four simultaneous feeds for a show using Resolume Avenue 7 or Notch blocks inside disguise, and you’re looking at up to 1 Gbps of sustained load — before you account for OSC control data, timecode, sACN lighting, and whatever else is riding the same physical network. In a venue with shared infrastructure and no VLAN segmentation, that’s a recipe for dropped frames mid-chorus.
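
The arithmetic is worth scripting so it can be rerun per show. A minimal sketch in Python, using the per-channel NDI figures above plus an assumed 50 Mbps allowance for control traffic (that overhead number is illustrative, not measured):

```python
# Back-of-the-envelope network budget for a four-feed NDI show.
# Per-channel figures follow the 100-250 Mbps range cited above;
# the control-traffic allowance is an illustrative assumption.

NDI_MBPS_LOW, NDI_MBPS_HIGH = 100, 250  # compressed NDI, per channel
FEEDS = 4
CONTROL_MBPS = 50  # rough allowance for OSC, timecode, sACN, etc.

low = FEEDS * NDI_MBPS_LOW + CONTROL_MBPS
high = FEEDS * NDI_MBPS_HIGH + CONTROL_MBPS
print(f"Estimated sustained load: {low}-{high} Mbps")
# Estimated sustained load: 450-1050 Mbps
# The top of that range saturates a single gigabit link, which is
# why VLAN segmentation (or a dedicated switch) matters.
```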

Pre-Production: The Venue Survey Nobody Skips Twice

Experienced video engineers don’t walk into a venue without a network topology audit. Tools like Wireshark, iPerf3, and Angry IP Scanner are as essential as a tape measure. The questions to ask venue IT teams include: What’s the backbone — Cat6A or fibre? Are the switches managed or unmanaged? Is QoS (Quality of Service) configured? Can you get a dedicated VLAN for show traffic? Can you disable STP (Spanning Tree Protocol) on production ports? These aren’t aggressive demands — they’re the baseline for a show that works.
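
Parts of that audit are easy to script. A minimal sketch of a throughput check, assuming iPerf3 is installed on the survey laptop and an iperf3 server is already running at the far end of the link (the address below is a placeholder):

```python
# Run a 30-second iperf3 test against a server at the other end of
# the venue network and report the measured throughput. Assumes
# iperf3 is on PATH and a server was started with `iperf3 -s` at FOH.
import json
import subprocess

SERVER = "192.168.10.10"  # placeholder address for the iperf3 server

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "30", "-J"],  # -J = JSON output
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
mbps = report["end"]["sum_received"]["bits_per_second"] / 1e6
print(f"Measured throughput: {mbps:.0f} Mbps")
```

Run it in both directions; a link that tests clean one way and poorly the other usually points at a duplex mismatch or an oversubscribed uplink.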

Workarounds That Actually Work in the Field

When the venue network is genuinely beyond redemption, the most reliable solution is show-specific network independence. This means rolling in your own managed gigabit switches — Cisco Catalyst or Netgear M4300 series are touring favourites — along with your own fibre backbone if the run distances require it. Companies like VER, PRG, and Screenworks have entire departments dedicated to deploying temporary production networks precisely because venue infrastructure can’t be trusted.

For shows where low-latency video transmission is essential — IMAG applications on large corporate events or broadcast-adjacent productions — SMPTE 2110 over dedicated dark fibre is becoming the gold standard. But for productions operating on tighter budgets, compressed NDI with aggressive QoS policies and stream redundancy via SRT (Secure Reliable Transport) protocol can bridge the gap effectively.
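
For the budget tier, a rough sketch of what that pipeline can look like in practice: an HEVC-encoded feed pushed over SRT with a fixed latency budget. This assumes an ffmpeg build with libsrt support, and the input file, destination address, and bitrate are all placeholders rather than recommendations:

```python
# Push an HEVC-encoded feed to a remote endpoint over SRT.
# Assumes ffmpeg was built with libsrt; all values are placeholders.
import subprocess

# mode=caller connects outward; latency is SRT's recovery buffer,
# which ffmpeg's srt protocol expresses in microseconds (120 ms here).
SRT_URL = "srt://203.0.113.5:9000?mode=caller&latency=120000"

subprocess.run([
    "ffmpeg",
    "-re", "-i", "program_feed.mp4",   # stand-in for the live input
    "-c:v", "libx265", "-b:v", "8M",   # HEVC at a bandwidth-friendly rate
    "-f", "mpegts", SRT_URL,
], check=True)
```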

Software-Side Bandwidth Management

Inside the software stack, discipline matters. Resolume users should match composition resolution to the actual output — rendering an 8K canvas when the output is 1080p burns resources for nothing. In disguise, the director/actor multi-machine workflow handles sync across servers, but it demands a clean, low-jitter network and, on SMPTE 2110 rigs, properly configured PTP (Precision Time Protocol). Misconfigured PTP is frequently the hidden cause of frame-sync drift in multi-server rigs.
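
One way to catch that before it reaches the screens is to watch the clock offset continuously. A rough sketch, assuming a Linux machine running linuxptp’s ptp4l, whose -m output includes lines of the form “master offset 123 s2 freq -3210 path delay 780” (the alarm threshold here is invented; tune it per rig):

```python
# Tail ptp4l output and flag master-offset excursions that would
# otherwise surface as frame-sync drift. Usage (assumes linuxptp):
#   ptp4l -i eth0 -m | python ptp_watch.py
import re
import sys

OFFSET_LIMIT_NS = 1000  # invented alarm threshold, in nanoseconds

pattern = re.compile(r"master offset\s+(-?\d+)")
for line in sys.stdin:
    match = pattern.search(line)
    if match and abs(int(match.group(1))) > OFFSET_LIMIT_NS:
        print(f"PTP offset out of tolerance: {match.group(1)} ns",
              file=sys.stderr)
```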

Similarly, video codecs for playback assets should be chosen with bandwidth in mind. HAP codec — developed by Vidvox and Tom Butterworth — has been a touring standard since around 2013 precisely because it offloads decoding to the GPU and keeps disk read speeds manageable. NotchLC extends this philosophy with better compression ratios for Notch-driven content. For streaming pipelines, H.265/HEVC offers roughly double the compression efficiency of H.264/AVC at comparable quality, making it the smarter choice when bandwidth is the constraint rather than CPU budget.
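
Because HAP frames are fixed-rate DXT textures, the disk-read budget can be estimated before content ever hits the server. An illustrative calculation, assuming standard HAP at 0.5 bytes per pixel and the Alpha/Q variants at 1 byte per pixel, before HAP’s optional Snappy pass shrinks things further:

```python
# Estimate sustained disk-read rates for HAP-family playback.
# Bytes-per-pixel figures are nominal DXT rates, treated here as
# assumptions; Snappy compression typically lowers them in practice.
WIDTH, HEIGHT, FPS = 1920, 1080, 30
BYTES_PER_PIXEL = {"HAP": 0.5, "HAP Alpha": 1.0, "HAP Q": 1.0}

for codec, bpp in BYTES_PER_PIXEL.items():
    mb_per_sec = WIDTH * HEIGHT * bpp * FPS / 1e6
    print(f"{codec:9s} ~{mb_per_sec:.0f} MB/s per 1080p30 layer")
# HAP ~31 MB/s, HAP Alpha/Q ~62 MB/s: easy numbers to budget
# against a given SSD's sustained read throughput.
```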

The Touring Reality: Adapt or Fail

The venues that give video departments the most grief are rarely the big arenas — those have proper infrastructure and dedicated venue IT staff. The real challenges come from the 2,000-capacity theatre that hasn’t upgraded its network since installing ADSL broadband in 2007, or the outdoor festival site where the only connectivity option is a 4G bonded cellular router from a company like LiveU or Peplink. These are the environments that build real-world video engineering instincts.

The practical takeaway for any video systems engineer is this: never trust what a venue says about its network until you’ve run iPerf3 through it yourself. Budget for contingency bandwidth solutions in every show quote. And document every workaround — because the industry’s institutional knowledge around low-bandwidth video production is built by people who’ve solved these problems at 11pm the night before show day and were smart enough to write it down.

Looking Forward: AV Over IP Standardisation

The AIMS Alliance and the SMPTE ST 2110 standard are pushing the broadcast and live event industries toward a unified IP video transport framework. As 25GbE infrastructure becomes more affordable and venues begin modernising, the bandwidth ceilings that define today’s touring challenges will gradually lift. Until then, the engineers who understand both the technology and the workarounds will remain the most valuable people on any large-format video production.

 
