Putting Characters into Images with AI

Inpainting with Stable Diffusion

Hello Artistic Adventurer.

I’ve been taking you through the process of creating concept visuals for clients with the help of AI.

The first step was the interactive client pitch, where I created concepts in real time together with the client.

Main software there was Midjourney, set up in a private Discord server (Click here if you want to learn how to set that up).

The second step used Stable Diffusion + Photoshop as the main software.

The third step is adding details to the images, which is what we’ll cover today.

Main software will be Stable Diffusion + Photoshop.

This is going to be a good one. 

Let’s start creating!

PS: I added a referral program at the bottom of the newsletter. Refer 1 friend, and I’ll send you the book AI Explore: Collaborations 1 for free. If you enjoy these posts, why not share them, right? 😁

Inpainting is an incredible tool within Stable Diffusion, especially when combined with ControlNet.

I use it for two purposes.

The first is putting characters and objects into an image.

I start with this image made in Midjourney.

The character is not how I want it to be, so I will composite a new character onto the image. I created the character in Midjourney, cut it out with SAM (Segment Anything Model), and used Photoshop to put the images together.
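
If you’d rather script the cutout than do it by hand, here’s a minimal sketch using the official segment-anything package. The checkpoint path, file names, and click point are placeholder assumptions, not my exact setup.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the character render (SAM expects RGB).
image = cv2.cvtColor(cv2.imread("knight.png"), cv2.COLOR_BGR2RGB)

# ViT-H checkpoint from the segment-anything repo (placeholder path).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)

# One positive click roughly on the character (label 1 = foreground).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 600]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]

# Save the cutout as RGBA so it can be layered in Photoshop.
cutout = np.dstack([image, (best_mask * 255).astype(np.uint8)])
cv2.imwrite("knight_cutout.png", cv2.cvtColor(cutout, cv2.COLOR_RGBA2BGRA))
```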

The image might look pretty good already, but the character doesn’t fit naturally in the scene. The lighting and the colors are off, and the palette is not on brand for the client.

Fixing the brand colors can be done in Photoshop by adjusting the hue. In my case, I shift the yellow colors to pink.
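
The same adjustment can be scripted. Here’s a rough OpenCV sketch of the yellow-to-pink hue shift; the file names and hue ranges are assumptions you’d tune per image.

```python
import cv2
import numpy as np

img = cv2.imread("composited.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16)

# OpenCV hue runs 0-179; yellows sit at roughly 20-35.
yellow = (hsv[..., 0] >= 20) & (hsv[..., 0] <= 35)

# Shift the selected hues toward pink/magenta (~150-165), wrapping at 180.
hsv[..., 0] = np.where(yellow, (hsv[..., 0] + 130) % 180, hsv[..., 0])

out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("composited_pink.png", out)
```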

Now it’s time for AI to help.

I will put this image into the Stable Diffusion img2img → Inpaint tab.

I mask the knight in the image and adjust the settings (a rough scripted equivalent follows this list):

  • Masked content: Original keeps the result close to the original style, while Latent noise generates a completely new image in the masked area.

  • Inpaint area: Only masked restricts generation to the masked area alone.

  • Denoising strength: the higher the value, the more the original image is modified.
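
For anyone who prefers code over the web UI, here’s a minimal sketch of the same inpainting pass with the diffusers library. The model id, file names, and strength value are assumptions; `strength` plays the role of the Denoising strength slider, while Masked content and Only masked are Automatic1111 UI options rather than pipeline arguments.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Any Stable Diffusion inpainting checkpoint works here (placeholder id).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("composited_pink.png").convert("RGB")
# White pixels = the masked knight, black = untouched background.
mask = Image.open("knight_mask.png").convert("L")

result = pipe(
    prompt="a knight holding a torch, fantasy illustration",
    image=init,
    mask_image=mask,
    strength=0.6,  # higher = the masked area drifts further from the original
).images[0]
result.save("inpainted_knight.png")
```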

I put the same image into ControlNet with the Canny model, which creates outlines from the original image.

With this setup I write my prompt:

highly detailed 2d illustration of a knight holding a torch, in the style of fantasy d&d with dark black and bright pink colors

and generate a new image.
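
My actual setup is the web UI, but here’s a hedged sketch of the same ControlNet + inpainting combo with diffusers, assuming the public Canny ControlNet checkpoint. Paths, thresholds, and model ids are placeholders.

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from PIL import Image

init = Image.open("composited_pink.png").convert("RGB")
mask = Image.open("knight_mask.png").convert("L")

# Canny edges of the original image act as the outline guide.
edges = cv2.Canny(np.array(init), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder SD 1.5 inpaint model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt=(
        "highly detailed 2d illustration of a knight holding a torch, "
        "in the style of fantasy d&d with dark black and bright pink colors"
    ),
    image=init,
    mask_image=mask,
    control_image=canny_image,
    strength=0.6,
).images[0]
result.save("knight_final.png")
```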

It might not look like much, but details are what create a cohesive storytelling experience.

That brings us to the second way I use inpainting: adding details.

I have this image of a wizard that needs some fixing.

The face is messed up, and the robe details are too soft and bland. I will use the same technique as in the previous example: mask out each detail individually and write a prompt to generate it.
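
Scripted, that detail pass might look like the sketch below: one inpainting call per mask, chained together. The masks, prompts, and checkpoint are hypothetical placeholders.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# One mask per detail to fix, each with its own prompt (placeholders).
fixes = {
    "mask_face.png": "highly detailed face of an old wizard, sharp features",
    "mask_robe.png": "ornate wizard robe with intricate embroidered trim",
}

image = Image.open("wizard.png").convert("RGB")
for mask_path, prompt in fixes.items():
    mask = Image.open(mask_path).convert("L")
    # Each pass only regenerates its own masked region; chain the results.
    image = pipe(prompt=prompt, image=image, mask_image=mask,
                 strength=0.5).images[0]

image.save("wizard_refined.png")
```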

And I have a refined new image.

Combining different tools, AI or not, is incredibly powerful. It allows us to create in ways we’ve never been able to before.

Be curious, open, play around and explore. Take advantage of these opportunities.

I hope you’ve been enjoying these AI-supported workflow breakdowns.

I love creating, and sharing my process with you adds an extra layer of enjoyment to everything. So much, in fact, that I’m compiling it all into an e-book! It’ll be packed with the workflows I’ve been exploring over the last 8 months, broken down with examples and different use cases: a 2D artist’s guide to AI-supported workflows. I’m excited to share it with you in the coming weeks. 😁

If you do enjoy these posts, why not share them with others?

Create your own referral link, invite a friend and I’ll send you a gift.

You can also stop by my website or social media, and as always,

Keep creating and don’t forget to have fun. ☀