AI-Supported Animation + FREE Download
Hello, Animation Admirers.
I’ve been looking forward to making some AI-supported animation, and there have been some fun workflow developments recently!
It started with Corridor Crew releasing their animated film Rock, Paper, Scissors (you should watch it; it’s really good).
They used a process of running live-recorded footage through a Stable Diffusion style filter. It’s much more complicated than it sounds: aside from shooting everything in live action, they trained three different custom models, ran each individual frame through Stable Diffusion, and rendered it all out.
At a recording rate of 24 frames per second, you can imagine that is A LOT of images (1,440 frames for every minute of footage), and it requires a lot of rendering capability.
I’ve used a similar, stripped-down workflow myself, and it works well, provided you don’t mind your computer raising the temperature of your room for a few hours.
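For the curious, here’s roughly what that per-frame loop looks like in code. This is a minimal sketch using the open-source diffusers library with the stock Stable Diffusion 1.5 img2img pipeline, not Corridor’s custom-trained models; the file names and the style prompt are placeholders.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load the stock img2img pipeline (Corridor trained their own custom models).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

cap = cv2.VideoCapture("live_action.mp4")  # placeholder input clip
fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV reads BGR arrays; the pipeline wants an RGB PIL image.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).resize((512, 512))
    # strength = how far the result may drift from the source frame;
    # guidance_scale (CFG) = how strongly the text prompt is followed.
    styled = pipe(
        prompt="hand-painted anime style",  # placeholder style prompt
        image=image,
        strength=0.5,
        guidance_scale=7.5,
        num_inference_steps=30,
    ).images[0]
    out = cv2.cvtColor(np.array(styled), cv2.COLOR_RGB2BGR)
    if writer is None:
        h, w = out.shape[:2]
        writer = cv2.VideoWriter(
            "styled.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h)
        )
    writer.write(out)

cap.release()
if writer is not None:
    writer.release()
```

Because every frame is sampled independently, the raw result tends to flicker, which is part of why the full version of this workflow needs custom models and extra cleanup.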
As an alternative, I’ve been using Runway’s Gen-1.
Gen-1 lets me upload a video plus a reference frame to a Discord bot, and it applies the style of the reference frame on top of the video. We can also use a text prompt instead of uploading an image.
It offers limited control over the outcome, and output videos are capped at 3 seconds.
3 seconds might not sound like much, BUT with a bit of creative storytelling, we can create some interesting things very quickly.
As an example, I made this short animated story in a couple of hours with a very simple workflow.
I started with a voiceover of a story I am writing and added music to create an atmosphere. I used ElevenLabs to create the voiceover and found music on Epidemic Sound.
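If you’d rather script the voiceover step than use the web UI, ElevenLabs exposes a text-to-speech REST endpoint. Here’s a rough sketch; the API key, voice ID, and script text are all placeholders:

```python
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder
VOICE_ID = "YOUR_VOICE_ID"           # pick one from your ElevenLabs voice library

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": "Once upon a time..."},  # the story script goes here
)
resp.raise_for_status()

# The endpoint returns raw MP3 audio.
with open("voiceover.mp3", "wb") as f:
    f.write(resp.content)
```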
The second step was creating storyboard frames with Midjourney. I created a sequence of images that would visualize the voiceover.
With this done, I moved on to moving images. I looked for stock footage that would fit the storyboard frames, and because of Gen-1’s 3-second output limit, I cut the footage to fit.
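Trimming every clip by hand gets old fast, so it can be scripted. A small helper sketch, assuming ffmpeg is installed and on your PATH; the file names are placeholders:

```python
import subprocess

def trim_clip(src: str, dst: str, start: float = 0.0, duration: float = 3.0) -> None:
    """Cut a clip down to Gen-1's 3-second limit."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-ss", str(start),   # where in the source to start
            "-i", src,
            "-t", str(duration), # keep only this many seconds
            "-c:v", "libx264",   # re-encode for a frame-accurate cut
            "-an",               # drop audio; Gen-1 only needs the picture
            dst,
        ],
        check=True,
    )

trim_clip("stock_footage.mp4", "shot_01.mp4")  # placeholder file names
```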
I then upload the video and the storyboard frame to Discord and wait for it to render.
I can change a few parameters, like sampling steps and CFG scale (which controls how closely the output follows the prompt or reference), and even create a green-screen cutout of my objects.
I repeat this for every storyboard frame and put it all together.
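The final assembly can be scripted too. A sketch using ffmpeg’s concat demuxer, assuming all rendered clips share the same codec and resolution (clip names are placeholders); the voiceover and music still get layered on in an editor afterwards:

```python
import subprocess

clips = ["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"]  # placeholder outputs

# The concat demuxer reads a text file listing the clips in order.
with open("clips.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", "clips.txt", "-c", "copy", "film.mp4"],
    check=True,
)
```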
A few extra tips for this workflow:
Finding stock footage that fits the story can be difficult, so recording yourself or using a 3D model animation can work much better and gives you more control over the outcome.
Once we have the video, we can run its first frame through Stable Diffusion and change its style, design, or concept (sketched below). This makes it easier for Gen-1 to match the new style to the original video than an unconnected style frame would.
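As a sketch of that tip, here’s the same diffusers img2img pipeline from earlier applied to just the first frame; the clip name and prompt are placeholders, and the saved image is what you’d hand to Gen-1 as the reference:

```python
import cv2
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Grab only the first frame of the (already trimmed) clip.
cap = cv2.VideoCapture("shot_01.mp4")  # placeholder clip name
ok, frame = cap.read()
cap.release()

first = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).resize((512, 512))
style_frame = pipe(
    prompt="hand-painted anime style",  # the new look for this shot
    image=first,
    strength=0.6,  # higher strength pushes further from the original frame
    guidance_scale=7.5,
).images[0]
style_frame.save("style_frame.png")  # upload this to Gen-1 as the reference
```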
There are a lot of exciting things happening that I would like to highlight!
First is a conversation I had with Brian Sykes. We talked about how we started creating with AI, how we learn and explore, the role of teaching and sharing, and much more.
You can listen to it here.
Secondly, I would like to highlight a fellow AI newsletter that brings news and information on the biggest AI developments each week. If you’re looking to stay up to date, This Week in AI might be for you.
Thirdly, I’ve made a new product: the Prompt Engineers Library.
It’s an overview of how I stay organized when using text2img AI tools. It contains a Notion template as well as a Discord server template you can copy and adjust to your own needs.
It’s free, or not, if you choose to support my efforts in making these. 💛
You can find it here.
You can also stop by my website or social media, and as always,
Keep creating and don’t forget to have fun. ☀