AI-Enhanced Creator
From Concept to a Final Product
Midjourney + Stable Diffusion + Photoshop
Hello, Drawing Dynamos.
Last week I shared my process of interactive pitching with clients. It’s an efficient way to understand your client’s vision quickly. That process usually stops at concepts, though. So how do we go from those concepts to the final product?
I often say that AI is not a magic box, although it can often feel like one. You type words into a box and an image appears. Pretty magical. However, it rarely gives you the exact results you want. For those, you have to put in the work and get creative.
On that call, I ended up with this image:
Produce a colorful 3D render of a heroic knight standing firm against a menacing black dragon with glowing pink eyes in the depths of a dimly-lit cavern. The scene should be bursting with a range of pink and black hues, creating a fantastical, larger-than-life atmosphere. --ar 16:9 --v 5
The hero in this image is the biggest problem. He needs a battle pose as well as some armor. And he definitely doesn’t need a second sword protruding from his chest.
So I recreate the hero with Midjourney.
a highly detailed 3d illustration of a full body back shot of a dynamic hero pose of a knight in armor with his sword raised in the style of fantasy d&d --ar 1:1 --v 5
I could composite this image directly into the original concept image. However, I add an extra step here - I run the hero image through Stable Diffusion. The two images are in different styles, so I want to transform the hero into the concept’s style before compositing them together.
I put the original concept image into the img2img window and use the hero image in ControlNet with the Canny model. This keeps the hero as the subject of the image while pulling the style from the concept image. I made a post on this workflow a while back if you want to dive deeper.
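If you’re curious what the Canny model actually “sees”, it conditions on an edge map of your reference image. Here’s a rough illustration using Pillow’s FIND_EDGES filter as a simple stand-in for a true Canny detector (this is just the concept, not part of the Stable Diffusion pipeline itself):

```python
from PIL import Image, ImageFilter

def edge_map(image: Image.Image) -> Image.Image:
    """Approximate the edge map that ControlNet's Canny model conditions on.

    A real Canny detector (e.g. OpenCV's cv2.Canny) adds Gaussian blur,
    gradient thresholding and hysteresis; FIND_EDGES is a Laplacian-style
    stand-in that shows the idea.
    """
    return image.convert("L").filter(ImageFilter.FIND_EDGES)

# Demo: a white square on black yields edges only along the square's outline.
canvas = Image.new("L", (64, 64), 0)
canvas.paste(255, (16, 16, 48, 48))
edges = edge_map(canvas)
```

ControlNet then uses an edge map like this to pin down the hero’s pose and silhouette, while img2img pulls the colors and style from the concept image.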
As a result, I get this:
I then used Segment Anything by Meta to separate my subject from the background.
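Once Segment Anything hands you a mask, turning it into a cutout is just a matter of dropping that mask into the alpha channel. A minimal sketch with NumPy and Pillow - the circular `mask` below is a placeholder standing in for SAM’s actual output:

```python
import numpy as np
from PIL import Image

def apply_mask(image: Image.Image, mask: np.ndarray) -> Image.Image:
    """Return an RGBA cutout: subject opaque, background transparent.

    `mask` is a boolean array shaped (height, width), True on the subject,
    as produced by a segmentation model such as Segment Anything.
    """
    rgba = np.array(image.convert("RGBA"))
    rgba[..., 3] = np.where(mask, 255, 0)  # alpha channel comes from the mask
    return Image.fromarray(rgba)

# Demo with a stand-in mask: a centered circle plays the role of the hero.
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
hero = Image.new("RGB", (w, h), (255, 105, 180))  # pink placeholder image
cutout = apply_mask(hero, mask)
```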
All that’s left is to composite the images together in Photoshop, Photopea, or any other image-editing software you use. I add some shadows and adjust the lighting so the hero blends in better.
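The compositing step itself is ordinary alpha compositing plus a tone tweak, so if you’d rather script it, Pillow can do the same thing. A rough sketch - the sizes, position, and darkening factor here are made up for illustration:

```python
from PIL import Image, ImageEnhance

def composite(background: Image.Image, cutout: Image.Image,
              position: tuple[int, int], brightness: float = 0.9) -> Image.Image:
    """Paste an RGBA cutout onto the background, dimming it slightly.

    Lowering brightness a touch stands in for the manual shadow/lighting
    adjustments that help the subject blend into the scene.
    """
    cutout = cutout.convert("RGBA")
    rgb = ImageEnhance.Brightness(cutout.convert("RGB")).enhance(brightness)
    dimmed = Image.merge("RGBA", (*rgb.split(), cutout.getchannel("A")))
    result = background.convert("RGBA")
    result.paste(dimmed, position, mask=dimmed)  # alpha-aware paste
    return result

# Demo: drop a small opaque white "hero" onto a black 16:9 canvas.
scene = Image.new("RGB", (160, 90), (0, 0, 0))
hero = Image.new("RGBA", (20, 20), (255, 255, 255, 255))
final = composite(scene, hero, (70, 35))
```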
There is an alternative to this last compositing step - inpainting with Stable Diffusion. I’ll write about that next week.
I love building this bridge of information and knowledge that anyone can walk across to step into the world of AI-supported creation.
I appreciate you reading and joining me on this journey. And if you enjoy these posts, why not share them with others? You can share your own referral link and, as a thank-you, receive some free stuff from me.
There is much more to explore with AI and storytelling, and I’m as excited as ever to venture into the unknown with you all.
Keep creating and don’t forget to have fun. ☀