Nexum INSIGHTS
Stable Diffusion Sliced Renderings

This project showcases a use case for Stable Diffusion in producing high-definition AI visuals on complex video surfaces. It focuses on crafting precise timecode animations that sync seamlessly with lighting and video setups, ideal for DJ intros, opening shows, and immersive live events. 

The goal is simple: create smooth, generative visuals that enhance live performances while working within the real-world constraints of unconventional screen setups.

Complex mappings
Easy on standard screens, challenging at real events.

Creating AI visuals for traditional rectangular formats like phones, TVs, or standard LED walls has become relatively straightforward. Numerous online tools and models can generate those outputs reliably.

Things get significantly more complicated when screens deviate from those standard formats. Real-world stage setups introduce constraints that most AI pipelines simply aren’t designed for. Irregular shapes, rotated panels, transparency, physical obstructions, and varying pixel densities all need to be considered.

To properly explore these challenges, I deliberately pushed this project into more difficult territory.

Defining the canvas

At the core of any complex video setup sits the pixelmap: a unified reference canvas defining the resolution, placement, and relationship of all display surfaces in a production. By consolidating every screen into one master canvas:

– Content alignment stays consistent
– File management becomes simpler
– Timecode synchronization risks are reduced
– Visual continuity across surfaces improves

When paired with an accurate 3D stage model, the pixelmap becomes an essential preview tool. It helps predict how digital visuals will translate into the physical environment before anything goes live.
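To make the idea concrete, a pixelmap can be modeled as a master canvas plus a list of surfaces, each with a position, resolution, rotation, and pixel pitch. This is a minimal sketch; the class names, field names, and pitch values are illustrative assumptions, not part of any specific media-server format.

```python
from dataclasses import dataclass, field

@dataclass
class Surface:
    """One physical display surface placed on the master pixelmap."""
    name: str
    x: int                       # top-left position on the master canvas (px)
    y: int
    width: int                   # resolution of the surface as mapped (px)
    height: int
    rotation_deg: float = 0.0    # physical rotation to compensate for in content
    pixel_pitch_mm: float = 2.6  # distance between LED pixels (assumed value)

    def physical_size_mm(self) -> tuple[float, float]:
        """Real-world panel dimensions, derived from pixel pitch."""
        return (self.width * self.pixel_pitch_mm,
                self.height * self.pixel_pitch_mm)

@dataclass
class Pixelmap:
    """The unified reference canvas consolidating every surface."""
    width: int
    height: int
    surfaces: list[Surface] = field(default_factory=list)

    def validate(self) -> bool:
        """Every surface must sit fully inside the master canvas."""
        return all(s.x >= 0 and s.y >= 0 and
                   s.x + s.width <= self.width and
                   s.y + s.height <= self.height
                   for s in self.surfaces)
```

Keeping every surface in one structure like this is what makes the later steps (slicing, rendering, reassembly) deterministic: each slice can always be traced back to its canvas coordinates.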

Types of video surfaces

Setups rarely consist of simple rectangles. Here are some common complications:

Rotated surfaces – Screens may be physically rotated while appearing upright in the 2D pixelmap, requiring compensation in content design.
Irregular surfaces – Some displays aren’t rectangular at all — they may have angled corners, cutouts, curves, or custom-built shapes.
Masked surfaces – Decor elements, stage structures, or audience viewing angles can partially obscure screens, meaning content must anticipate visual blockage.
Pixel pitch variations – Differences in pixel density can cause identical digital objects to appear at different physical sizes across screens.
Negative space – Gaps between displays usually shouldn’t be included in pixelmaps, as they waste resolution and complicate rendering unnecessarily.
Symmetry considerations – Even symmetrical setups often benefit from unique mappings, allowing flexibility for typography, asymmetrical visuals, or motion effects.
Transparent surfaces – See-through LED panels introduce another layer of complexity, blending physical depth with digital content.
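The pixel pitch problem above has a simple arithmetic core: the same pixel count spans different physical distances on screens with different pitches. A small helper like the following, a sketch with assumed parameter names, computes the pixel count needed on a second screen so an object keeps its real-world size.

```python
def match_physical_size(pixels_on_a: float, pitch_a_mm: float,
                        pitch_b_mm: float) -> float:
    """Pixel count an object needs on screen B to span the same
    physical distance it covers with `pixels_on_a` on screen A."""
    physical_mm = pixels_on_a * pitch_a_mm   # real-world size on screen A
    return physical_mm / pitch_b_mm          # convert back into B's pixels
```

For example, an element 100 px wide on a fine-pitch wall must shrink to 50 px on a screen with double the pitch, otherwise it appears twice as large in the room.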

Additional considerations (not part of this setup)
Some elements weren’t part of this particular project but are worth mentioning:

LED strips – Often stretched across complex 3D structures, they require specialized mapping strategies.
Motion – Moving screens or stage elements demand accurate 3D previsualization to avoid timing or spatial issues.

AI-Driven Content

Controlling Motion & Style
Guiding Motion

AI image models are not trained with pixelmaps or stage layouts in mind. They don’t inherently understand orientation, spatial continuity, or even what direction is “up” or “down” in a fragmented display setup. Because of that, creating reliable motion across multiple screens requires falling back on more classical content creation tools. In my workflow, I use software like Houdini, Fusion, Maya, and Premiere to create the “matte animation”. These tools allow precise frame-by-frame control, letting me correct motion continuity, align visuals with mapped surfaces, and ensure synchronization with lighting, stage automation, and timecode playback.

Creative direction

Another challenge is creative direction. Most AI models are trained to replicate realism, while models trained on artistic datasets often struggle with consistent animation or motion. They aren’t designed to follow abstract creative direction the way a human artist or motion designer would.

For this workflow, I use AnimateDiff: an older AI animation approach, but one that offers more controllability. By combining AnimateDiff with ControlNets, IPAdapters, and structured prompting, it becomes possible to guide composition, movement, and visual identity much more deliberately. This hybrid approach keeps the generative spontaneity of AI while still allowing intentional art direction and stylistic consistency across the full canvas.

Workflow & pipeline
Combining into a pipeline

Describing the full technical workflow including every tool, script, and edge case would easily fill an article on its own. Instead, this section focuses on the approach and production flow used in this project, outlining how generative AI is integrated into a reliable, event-ready pipeline.

The process starts with defining the pixelmap and matte animation. All timing, structure, and motion are created first on the full canvas, covering the entire show or separate effects. This step establishes the spatial logic of the setup early on, ensuring that motion, rhythm, and transitions already work at the stage level before any AI generation is introduced.

Once the base animation is locked, the pixelmap is broken down into individual slices. These slices are carefully cropped and framed so they fit into resolutions that AI models can handle efficiently while still allowing for high-quality output. Each slice represents a controlled window of the larger canvas rather than an isolated visual element.
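The cropping step has one mechanical constraint worth showing: diffusion models expect image dimensions divisible by a fixed factor, so each slice window is padded up to the nearest valid size. This is a minimal sketch; the multiple-of-64 default is a common safe choice, not necessarily the exact constraint used in this project.

```python
def slice_window(x: int, y: int, w: int, h: int,
                 multiple: int = 64) -> tuple[int, int, int, int]:
    """Grow a surface crop so both dimensions are a multiple of
    `multiple`, keeping the original top-left anchor on the canvas.
    The padded margin is discarded again at reassembly time."""
    pad_w = (-w) % multiple   # pixels needed to reach the next multiple
    pad_h = (-h) % multiple
    return (x, y, w + pad_w, h + pad_h)
```

Because each window keeps its canvas-space origin, the slice remains a controlled window of the larger canvas rather than a free-floating image.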

From there, the pipeline moves into ComfyUI, where style and generative identity are applied. Instead of generating visuals blindly, the AI operates within predefined spatial and temporal constraints. To make this scalable, I developed custom tools that integrate the Deadline render manager, allowing the sliced renders to be distributed across multiple machines. This significantly reduces render times and makes high-resolution, multi-surface generation feasible in production environments.
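Deadline submissions are driven by plain key=value files (a JobInfo file and a PluginInfo file) handed to `deadlinecommand`. The sketch below shows roughly what the custom tools generate per slice; the `ComfyUI` plugin name, `WorkflowFile` key, and chunk size are assumptions standing in for the project's actual custom integration.

```python
def deadline_job_files(slice_name: str, frames: str,
                       workflow_path: str) -> tuple[str, str]:
    """Build the JobInfo / PluginInfo contents that `deadlinecommand`
    consumes on submission, one job per pixelmap slice."""
    job_info = "\n".join([
        f"Name=slice_{slice_name}",
        f"Frames={frames}",
        "ChunkSize=50",        # frames per task, spread across machines
        "Plugin=ComfyUI",      # assumed name of a custom Deadline plugin
    ])
    plugin_info = f"WorkflowFile={workflow_path}"
    return job_info, plugin_info
```

Splitting each slice's frame range into chunks is what lets the farm render one show across many machines at once.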

After all slices are rendered, the pipeline returns to Fusion, where the entire mapping is reconstructed. Each generated segment is reassembled into its original position on the pixelmap, restoring the full spatial context. The final result can be used as a layered visual effect or as a complete, timecoded show, ready to sync with lighting and media servers.
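Conceptually, the reassembly is just a paste of each rendered slice back at its recorded canvas origin. The Fusion comp does this with image nodes; the sketch below shows the same logic on plain nested lists so the coordinate bookkeeping is explicit (the function name and slice format are illustrative).

```python
def reassemble(canvas_w: int, canvas_h: int, slices, fill=0):
    """Paste rendered slices back at their pixelmap origins.
    `slices` is a list of (x, y, rows), where rows[r][c] is the
    pixel value at row r, column c of that slice."""
    canvas = [[fill] * canvas_w for _ in range(canvas_h)]
    for x, y, rows in slices:
        for r, row in enumerate(rows):
            for c, px in enumerate(row):
                canvas[y + r][x + c] = px   # canvas-space position
    return canvas
```

Because every slice carries the same (x, y) it was cropped with, the reconstruction is exact and the full spatial context of the pixelmap is restored.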

This hybrid pipeline balances the creative potential of generative AI with the predictability and control required for live events, making it possible to use AI not just as an experiment, but as a dependable production tool.