// studio.init()
model: diffusion + language
output: moving image
status: generating

AI
Cinematics
Studio

We make films with artificial minds. We publish the methods. We believe the future of cinema is stranger than anyone imagines.

// filmography

Title                Year  Duration    Model              Status
Synthetic Dawn       2025  17 min      SDXL + LLaMA-3     Festival Circuit
  Archive footage from the 1920s, retrained through diffusion. A meditation on what persists.
The Latent Space     2024  24 min      Stable Video       Available
  A documentary about the interior of a generative model, told from within.
Diffusion            2024  8 min loop  Custom pipeline    Installation
  A three-channel installation piece. Noise becomes signal becomes image becomes noise.
Hallucination No. 4  2023  6 min       DALL-E + GPT-4V    Available
  When the model fails in ways that feel intentional.
Emergent             2023  11 min      ControlNet         Available
  Behaviors the model was not designed to produce.

// process

STEP_01 — Prompt engineering

We write before we generate.

Every film begins as a text. Narrative structure, emotional arc, and visual grammar are specified before any model is run.

STEP_02 — Model selection

Different problems, different architectures.

We choose and combine models based on the specific visual and temporal requirements of each project. We build custom pipelines when needed.

STEP_03 — Iteration and curation

Generation is not output — it's material.

We generate at volume and select. The editorial eye is human. The production machinery is not.

STEP_04 — Post and sound

The frame is not the film.

Temporal coherence, color, and sound design are handled in post. We do not rely on generation for final quality.
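The four steps above can be sketched as a pipeline. This is an illustrative sketch only — every function name here is a hypothetical stand-in, and a real production swaps in actual models and editing tools:

```python
# Hypothetical sketch of the STEP_01..STEP_04 process.
# Function bodies are stand-ins, not a real generation pipeline.

def write_prompts(treatment):
    # STEP_01: the film is specified as text before any model runs
    return [f"{treatment} :: shot {i}" for i in range(3)]

def select_model(prompts):
    # STEP_02: architecture chosen per project; placeholder generator here
    def generate(prompt, seed):
        return {"prompt": prompt, "seed": seed}
    return generate

def curate(takes):
    # STEP_03: generate at volume, select by hand -- the editorial eye is human
    return takes[:1]

def post_produce(selects):
    # STEP_04: temporal coherence, color, and sound are handled in post
    return [dict(take, graded=True, sound=True) for take in selects]

def make_film(treatment):
    prompts = write_prompts(treatment)
    generate = select_model(prompts)
    takes = [generate(p, seed) for p in prompts for seed in range(4)]
    return post_produce(curate(takes))

film = make_film("archive footage, 1920s, diffusion")
```

The shape of the sketch is the point: generation produces many takes (three shots times four seeds here), curation narrows them to a human selection, and post-production, not generation, is responsible for final quality.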

Collaborate.

Research partnerships, commissioned work, and festival inquiries. We are particularly interested in projects that push the limits of what generated imagery can say.

hello@tlp.studio