Feb 9, 2024 · 2 min read

Behind the Scenes of "DJ Wiggles" - AI-Generated Short Film for the Runway Gen:48 Competition

Behind the scenes of the AI content generation process we used for our Runway Gen:48 short film competition submission: Disco Dachshund DJ Wiggles.
 
Last Saturday at 9am EST we were given 48 hours to go from a brief to a submission. That meant writing a script, assembling shot lists, drafting some 500 text-to-cinematic-video prompts, generating 22 minutes of AI video/animation in 8 unique visual styles, recording voice-over, editing to music, and adding titles. We submitted with two hours to spare.

Rough timeline:

The first hour was spent carefully studying the brief and noting the rules and must-haves. I knew I wanted to do something fun with music and visuals, and figured I could use the competition to also generate a batch of free new visuals for touring music producer Victor Tapia. Victor is also a ✨ Vibes + Logic 🤖 advisor and one of my friends + neighbors in SF; he owns a dachshund, as do I, and since dogs were on the Runway Gen:48 "menu" for submissions, we went with a story about a music producer leveling up in the world.

The next two hours I focused on prompting and ideating short film story ideas with OpenAI's ChatGPT. It was as simple as "come up with 5 ideas about ____," "combine ideas 1 and 5, add time travel, keep the magic shoes," "now we're going to add scenes that describe a location, key object, and action," "include a well-known 70s venue."
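
For the curious, here's a minimal sketch of that iteration loop as code, assuming the openai Python client (>= 1.0); in practice I typed these straight into the ChatGPT UI, and the model name and prompts below are illustrative.

```python
# Hedged sketch of the idea-refinement loop, assuming the openai
# Python client. Keeping the full history lets each refinement
# ("combine 1 and 5", "keep the magic shoes") build on the last.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system",
            "content": "You are a short-film story ideation partner."}]

def refine(prompt: str) -> str:
    """Send one refinement step and keep it in the running context."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(refine("Come up with 5 short-film ideas about a music producer."))
print(refine("Combine ideas 1 and 5, add time travel, keep the magic shoes."))
print(refine("Add scenes that each describe a location, key object, and action."))
```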

Next, I started test-prompting to collect visual "samples." I kept a Notes page open on the side, pasting in RunwayML styles, prompts, descriptor snippets, and Seed IDs that produced the "looks" I was after, so I could reproduce more desirable and predictable results.
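
If I did it again I'd keep that log as structured data instead of a Notes page. A rough sketch of the idea; the field names and values are my own labels, not anything from a RunwayML API:

```python
# Hypothetical "looks log": one row per style/prompt/seed combo
# that produced a reproducible look. All fields are illustrative.
import csv
from dataclasses import dataclass, asdict

@dataclass
class Look:
    style: str        # style preset or keyword used in Runway
    prompt: str       # full text-to-video prompt
    descriptors: str  # reusable snippet, e.g. lighting/lens language
    seed: int         # Seed ID that produced the look

looks = [
    Look("claymation", "a dachshund DJ behind the decks, disco ball",
         "warm tungsten light, 35mm", seed=1234567),
]

with open("looks.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(looks[0])))
    writer.writeheader()
    writer.writerows(asdict(look) for look in looks)
```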

Jumping into RunwayML, I wanted to test whether everything could be done in their editor, and got pretty far along. I did the tutorials in about two hours while eating breakfast and hacked together a way to build a storyboard animatic inside the app.

Next I gave ChatGPT a format/framework for generating scenes and shot descriptions, then directed it to organize everything by scene and shot and write "final" prompts for each shot. Before I went to lunch I blasted off some 500 prompts, which generated 22 minutes of music-video visuals.
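
Roughly, the framework looked like this as data: scenes containing shots, each shot expanding into one submit-ready prompt. The structure mirrors what the post describes, but the field names and the assembly rule are illustrative reconstructions, not the exact format I used:

```python
# Hypothetical scene/shot framework: scenes map to shots, and each
# shot flattens into one "final" text-to-video prompt by appending
# the shared style descriptors collected during test-prompting.
SCENES = {
    "1_home_studio": [
        ("wide", "cluttered home studio, dachshund asleep on a beat pad"),
        ("close", "paw presses play, VU meters jump"),
    ],
    "2_disco_club": [
        ("wide", "packed 70s discotheque, mirror ball, crowd mid-spin"),
    ],
}
STYLE = "cinematic, 35mm film grain, warm tungsten light"

def final_prompts() -> list[str]:
    """Flatten scenes/shots into one submit-ready prompt per shot."""
    return [
        f"{shot_type} shot: {action}, {STYLE}"
        for shots in SCENES.values()
        for shot_type, action in shots
    ]

for prompt in final_prompts():
    print(prompt)
```

Organizing by scene and shot up front is what made firing off ~500 prompts in one sitting manageable: the structure, not the individual prompts, carried the film's continuity.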

The audio tools weren't there yet, so on Sunday I exported everything and moved editing to Adobe Premiere, where I timed everything as best I could without final audio. Victor delivered the final music and VO on Sunday at 7pm; I started the edit, probably spent too much time timing titles, and submitted by 7am.

All of this happened while I was on vacation in Mexico, much of it from my iPhone. And even though I’m in trouble with my wife, this was fun and we learned a lot. In the end it's always about the journey.

We now have an AI Film available as content on the free V+L Lumens AI app.