Depth map to video

With recent improvements in depth-controlled video generation, I wanted to share some samples.

Viewport + Depth render from Mixreel

Here is the viewport view from the Mixreel app, and the rendered depth map video.
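A depth map video is essentially the per-frame depth buffer normalized into a grayscale image sequence. A minimal sketch of that conversion is below; the near/far clipping planes and the white-is-near convention are assumptions for illustration, not Mixreel's documented export format:

```python
import numpy as np

def depth_buffer_to_frame(depth, near=0.1, far=100.0):
    """Convert a float depth buffer (scene units) to an 8-bit grayscale frame.

    Hypothetical sketch: assumes a linear near/far normalization where
    white (255) is the near plane and black (0) is the far plane.
    """
    d = np.clip(depth, near, far)
    norm = (far - d) / (far - near)  # 1.0 at near plane, 0.0 at far plane
    return (norm * 255.0).astype(np.uint8)

# Example: a 4x4 buffer at the far plane with one close object in the middle
buf = np.full((4, 4), 100.0)
buf[1:3, 1:3] = 0.1
frame = depth_buffer_to_frame(buf)
```

Repeating this per frame and encoding the result (e.g. with a video writer) yields the kind of depth video the models below consume.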

Seedance 2.0 (via Dreamina)

Dreamina's Seedance 2.0 does not appear to be a strictly depth-controlled model; it behaves more like a reference-guided video model. Here, we pass the depth video as the reference.

Wan-2.2 A14B ControlNet

This is a ControlNet for Wan 2.2 trained specifically for depth maps.

LTX-2-19B Control

This is a ControlNet for LTX-2 trained specifically for depth maps. However, this sample was generated via Wavespeed, which performs its own depth estimation from the reference video and does not let you pass the depth video directly.
