Consistency in AI Visuals

New download

Hello, Diverse Dimensions Crew.

I’ve made a new PDF on today’s topic: Consistency.

I’ll write more about it at the end of this post, and if you can’t wait, you can find it here.

Let’s get to it!

In Stable Diffusion, there is a very convenient ControlNet model called Reference.

We can upload an image to it, and the model will use it as a reference to create new ones, attempting to carry the elements from the reference image over into each new generation.
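If you prefer scripting this instead of clicking through the WebUI, here is a rough equivalent of the Reference step using the diffusers library’s community reference pipeline. This is a minimal sketch, not the exact workflow from this post: the checkpoint name, the reference image path, and the parameter values are placeholders you’d swap for your own.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load SD 1.5 together with the community "stable_diffusion_reference"
# pipeline, which mimics ControlNet's reference_only preprocessor.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",          # placeholder checkpoint
    custom_pipeline="stable_diffusion_reference",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder path: the image whose character/elements we want to reuse.
ref_image = load_image("character_ref.png")

result = pipe(
    prompt="Fairy Queen",
    ref_image=ref_image,        # the uploaded reference image
    reference_attn=True,        # share attention with the reference
    reference_adain=False,      # set True to also match its style statistics
    num_inference_steps=30,
).images[0]
result.save("fairy_queen_variant.png")
```

Turning on reference_adain as well tends to pull the output’s colors and style closer to the reference, at the cost of some prompt freedom.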

For more control over our final image, we can use a second ControlNet unit with a Canny or depth model.

The character reference will then be applied to the composition defined by the second image’s depth map.

We write the prompt, adjust the aspect ratio and any other settings we want, and generate an image.

Prompt: Fairy Queen
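Since this walkthrough runs in the AUTOMATIC1111 WebUI, the same two-unit setup can also be driven through its txt2img API. The sketch below assumes the ControlNet extension is installed and the WebUI was launched with --api; the exact field names and the depth model filename differ between extension versions, so treat them as placeholders.

```python
import base64
import requests

def b64(path: str) -> str:
    # Encode an image file as base64, as the WebUI API expects.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "Fairy Queen",
    "steps": 30,
    "width": 768,   # aspect ratio is set via width/height
    "height": 512,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # Unit 1: the character reference image
                    "enabled": True,
                    "module": "reference_only",
                    "model": "None",  # Reference needs no model file
                    "input_image": b64("character_ref.png"),
                    "weight": 1.0,
                },
                {   # Unit 2: a depth map to lock the composition
                    "enabled": True,
                    "module": "depth_midas",
                    "model": "control_v11f1p_sd15_depth",  # placeholder name
                    "input_image": b64("composition_ref.png"),
                    "weight": 1.0,
                },
            ]
        }
    },
}

# Assumes the WebUI is running locally with the API enabled.
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
images_b64 = r.json()["images"]  # base64-encoded result images
```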

This can be a bit hit-or-miss, as the AI is using only one image to generate new ones, especially when the angles and compositions are very different.

I suggest using reference images for the depth/Canny ControlNet that are made in the same style and follow the same physics.

Stable Diffusion will attempt to apply the character reference to the composition, but if the styles are different, the result will not look consistent.

This is a look into one of the techniques inside the new PDF, Consistency in AI Visuals.

It has three step-by-step techniques, going from basic to advanced:

  • Midjourney + InsightFace

  • Stable Diffusion with the Reference model

  • Combining custom LoRAs for ultimate consistency control

Hope you enjoy exploring new techniques in your own creative work.

You can stop by my website or social media, and as always,

Keep creating and don't forget to have fun. ☀