From image to 3D animation

3 steps to custom 3D character animations

Hey Animation Architect,

Thank you for being a subscriber. 💛

Classes are starting next week and I’m super excited!

We'll start on Monday with setting up Stable Diffusion (locally, via cloud services, different interfaces, and a one-click installer).

On Tuesday we'll go over the basic creation workflow, and next week we'll start the first project, focused on Visual Storytelling.

If you’re interested in joining any (or all) of these sessions, reply to this mail and I’ll send you the details and links. 😁

Looking forward to creating and learning together.

Now, let’s get into today’s topic - from image to 3D animation - which starts with Stable Diffusion.

This is the workflow we’re going to go through today:

  • Create a character

  • Create a 3D model

  • Rig and animate the model

Let’s start by creating our character.

The best pose for rigging a 3D model is a T-pose. We can use PoseMyArt to position our character, grab a screenshot, and feed it into ControlNet.
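
If you’re scripting this step instead of using a web UI, here’s a minimal sketch of turning that screenshot into an OpenPose map for ControlNet. It assumes the controlnet_aux Python package, and the filenames are just placeholders:

    # Convert the T-pose screenshot into an OpenPose skeleton map
    # that ControlNet can condition on. Filenames are placeholders.
    from controlnet_aux import OpenposeDetector
    from PIL import Image

    detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
    screenshot = Image.open("tpose_screenshot.png")
    pose_map = detector(screenshot)  # returns a PIL image of the skeleton
    pose_map.save("tpose_map.png")

(Most web UIs run this preprocessor for you behind the scenes.)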

Now we can prompt our character on top of this pose. We can choose a model for the style, as well as additional LoRAs for consistent elements in our image, like the character, accessories, clothes, etc.

Red haired female rogue, wearing a hood, full body shot, with worn out clothes
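
If you want to see what this step looks like in code, here’s a rough sketch with the diffusers library: an SD 1.5 base checkpoint, the OpenPose ControlNet, and an optional character LoRA. The LoRA path and filenames are placeholders:

    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    # OpenPose ControlNet paired with an SD 1.5 base model
    # (any SD 1.5 checkpoint works here).
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Optional: a character LoRA for consistency (placeholder path).
    pipe.load_lora_weights("./loras/rogue_character.safetensors")

    pose_map = load_image("tpose_map.png")  # the map from the previous step
    image = pipe(
        "Red haired female rogue, wearing a hood, full body shot, "
        "with worn out clothes",
        image=pose_map,
        num_inference_steps=30,
    ).images[0]
    image.save("rogue_tpose.png")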

When you have your character, upload the image to 3D CMS to convert it to a 3D model.

With the free plan, it will take some time to render, and the quality won’t be the highest. It does the job, though.

We can download our model in the OBJ format and move to the final step.
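
Before rigging, it’s worth a quick sanity check on the mesh. Here’s a minimal sketch using the trimesh library (the filename is a placeholder):

    import trimesh

    # force="mesh" collapses multi-object OBJ files into a single mesh.
    mesh = trimesh.load("rogue_character.obj", force="mesh")
    print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
    print(f"watertight: {mesh.is_watertight}")  # holes can trip up auto-riggers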

Rigging and animating the character.

We will go to Mixamo, which is a library of 3D models and animations. It also allows us to upload our own model and use the preset animations with it.

Upload the file you downloaded and use the Auto-Rigger to rig the 3D model.

Let it process and you’ll have a 3D model ready to be used with the animations available on the site.

You can download the animation for 3D software like Blender, or you can screen-record the animation and composite it on top of your footage.
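
If you go the Blender route, a small script run from Blender’s Scripting tab will pull in the Mixamo FBX (rig plus animation). The file path is a placeholder:

    import bpy

    # Import the Mixamo FBX, which bundles the rigged mesh and the animation.
    bpy.ops.import_scene.fbx(filepath="/path/to/rogue_idle.fbx")

    # Mixamo exports at 30 fps; match the scene so playback speed is correct.
    bpy.context.scene.render.fps = 30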

Either way, there are a lot of fun ways to combine this into a visual storytelling workflow. And most of the work is done by AI assistants, which frees us up to work on something else in parallel.

The output quality is not incredible, so, just like with the rest of AI software, I definitely recommend manual retouching if you want high-quality results.

I hope today’s post inspired you to create something new. I’m always excited to combine a fresh story with new AI workflows.

If you’re itching for more workflows, you can join any of the Stable Diffusion classes I’ll be running over the next month. You can find the schedule here.

If you have questions, reach out at any time. My messages are always open.

You can also find written guides here, if that is your preferred learning method.

As always, keep creating and don’t forget to have fun. ☀