Enter Lovis: You Won’t Find This Workflow on YouTube
Our Next Cohorts Are Open: Ready to Master AI?
Hiya Friend,
It’s Adam again, writing to you from Cape Town, where the winter chill has finally crept in and my morning coffee feels more like a survival tool than a carefree beverage.
In the last newsletter, I had the honor of introducing myself and the journey I’m on with Lighthouse AI Academy.
Today, I want to show you where that journey can take you — but only if you’re ready to go beyond surface-level prompts and start building workflows that clients and studios actually need.
We’re about to pull back the curtain on a workflow that quietly powers real VFX, architecture, and advertising work.
Not a tutorial you’ll find on YouTube. Certainly not another TikTok trick.
This is production-grade AI straight from one of our mentors who uses it every week with top-tier clients.
Before we get into it, a quick heads-up:
New Cohorts Are Open For:
Advanced ComfyUI (Cohort 2) — Starts 21 August
→ For building workflows and deploying real-world tools.
→ Ideal for creative technologists, VFX leads, devs, and directors.
→ Apply to join now
Creative Leaders (Cohort 3) — Starts 9 September
→ For mastering AI’s creative, ethical, and strategic dimensions.
→ Ideal for directors, producers, team leads, and educators.
→ Apply to join now
If this workflow sparks something in you, then our cohorts are waiting to show you more of this magic.
Now, let’s dive in! 🏊
Lovis’ 3D-to-AI Workflow, Step-by-Step
Professional creative work doesn’t reward randomness.
It demands precision, repeatability, and control. You can't just hope the AI gets it right; you have to guide it. And how awesome is that? If the AI didn't need guiding, we'd have no creative control at all!
That’s why we’re sharing a peek inside a real workflow built by Lovis Odin, one of our mentors at Lighthouse AI Academy.
He works with leading studios to build pipelines where AI integrates directly into production, not as a gimmick, but as a tool for scaling creativity with consistency.
This workflow is a full production pipeline that shows you, the creator, how to:
– Generate a new asset with Flux
– Generate a 3D model from the image
– Place the 3D model within the given scene
– Inpaint the asset with style and texture using Flux Fill
– Generate a video with Wan, using the inpainted image as the first frame
Here’s the breakdown 👇
The Technical Breakdown (In 5 Steps):
1. Generate a New Asset with Flux
The workflow begins with a clean, foundational asset — an image usually created using the Flux model.

Flux provides a high-quality "first pass" of the desired object, e.g., a building. These assets are typically generated on a plain white background to simplify the next stages: easier masking and better AI interpretation of shape and detail.
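Want to experiment with this step outside ComfyUI? Here's a minimal sketch using the diffusers library. The model ID, prompt, and settings are our own illustrative assumptions, not Lovis' exact setup:

```python
import torch
from diffusers import FluxPipeline

# Load Flux (assumed public checkpoint; needs a capable GPU).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM usage

# Generate the asset on a plain white background to simplify masking later.
image = pipe(
    prompt="a modern glass office building, isolated on a plain white background",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]

image.save("asset_first_pass.png")
```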

2. Generate 3D Model from the Image of the Asset
This stage starts with the asset generated in the previous step. The image is converted into a 3D model using Hunyuan 3D.

From the image, a depth map or normal map is generated (3D data generation). These maps give the AI a clear understanding of the object’s structure — what’s near, what’s far — ensuring depth and geometry are preserved during stylization/texturing processes.
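In ComfyUI this happens inside a preprocessor node, but the idea is easy to reproduce in plain Python. Here's a minimal sketch using the transformers depth-estimation pipeline, with an assumed Depth Anything checkpoint standing in for whatever preprocessor the actual graph uses:

```python
from PIL import Image
from transformers import pipeline

# A monocular depth estimator stands in for ComfyUI's depth preprocessor node.
# (Assumed model ID; any depth-estimation checkpoint works the same way.)
depth_estimator = pipeline(
    "depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf"
)

asset = Image.open("asset_first_pass.png")
result = depth_estimator(asset)

# The pipeline returns a PIL depth image, ready to feed a depth ControlNet.
result["depth"].save("asset_depth.png")
```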

The model also carries a baked-in texture, so the result is a generated 3D model whose texture comes straight from the original image.
The workflow includes a step to preview and verify this map before continuing.
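For the image-to-3D and texture-baking steps themselves, Tencent's Hunyuan3D-2 repository ships a Python API. The sketch below follows its README; treat the hy3dgen package names and model ID as assumptions, since the exact API may have changed:

```python
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline

# Shape generation: image -> untextured mesh (API as documented in the
# Hunyuan3D-2 README; treat the exact names as assumptions).
shape_pipe = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2")
mesh = shape_pipe(image="asset_first_pass.png")[0]

# Texture baking: project the original image's look onto the mesh.
paint_pipe = Hunyuan3DPaintPipeline.from_pretrained("tencent/Hunyuan3D-2")
mesh = paint_pipe(mesh, image="asset_first_pass.png")

mesh.export("asset_model.glb")
```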
3. Place the 3D Model in the Scene
With the rendered asset and its depth data ready, the model is conceptually placed into its intended scene. This scene is the final composition where the creative work will happen, setting the context for the texturing and animation to follow.
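ComfyUI handles this placement visually, but underneath it's ordinary compositing. A minimal sketch with Pillow, assuming you've rendered the 3D model to an RGBA image with transparency (file names and coordinates are placeholders):

```python
from PIL import Image

# Scene plate and a render of the 3D model with an alpha channel
# (file names are hypothetical, for illustration only).
scene = Image.open("scene_plate.png").convert("RGBA")
asset_render = Image.open("asset_render.png").convert("RGBA")

# Paste the asset at its intended position, using its alpha as the mask.
position = (420, 310)  # hypothetical placement in pixels
scene.paste(asset_render, position, mask=asset_render)

scene.convert("RGB").save("scene_with_asset.png")
```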

4. Inpaint the Asset with Style and Texture Using Flux Fill
This is the creative heart of the workflow — transforming the asset with new textures and styles through a refined workflow within ComfyUI.
Here’s what it includes:
Original Image: The image with the positioned 3D model serves as the starting point into which the textured, lit asset will be composited.
ControlNet: Takes in the depth map, helping the AI enforce structural accuracy.
Style Reference: A new aesthetic is introduced, either via a reference image through Flux Redux or a custom LoRA for specific styles.
Masking: Applied so that changes affect only the asset, never the background.
Denoising Strength: Gives the creator granular control over the final aesthetic. Lower values keep more of the original; higher values allow greater transformation by giving the AI more creative freedom.
All of this is powered by Flux Fill, which Lovis describes as "intelligent inpainting," calling it “the best model for inpainters.”
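Outside ComfyUI, the same model is exposed through the diffusers FluxFillPipeline. Here's a minimal sketch of the masked-inpaint core; the prompt and file names are placeholders, and the depth ControlNet and Redux stages of the full graph are omitted for brevity:

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# Public Flux Fill checkpoint for mask-guided inpainting.
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# The composited scene, plus a white-on-black mask isolating the asset.
image = load_image("scene_with_asset.png")
mask = load_image("asset_mask.png")

result = pipe(
    prompt="weathered brutalist concrete facade, overcast lighting",
    image=image,
    mask_image=mask,
    guidance_scale=30.0,   # Fill-dev is typically run with high guidance
    num_inference_steps=50,
).images[0]

result.save("scene_inpainted.png")
```

In the actual graph, the ControlNet and style-reference nodes feed into this same sampling step, which is where the structural and stylistic control described above comes from.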

5. Generate Video with Wan Using the Inpainted Image as the First Frame
The final step brings the new scene to life as a video using the Wan model. The process is designed for consistency and stability, avoiding the flickering seen in many AI-generated videos.
Here’s how it works:
First Frame Anchor: The high-quality inpainted image from the previous step becomes the starting frame, guiding the whole animation.
Masking for Motion: A mask controls the animation, defining which areas should change. Lovis uses a "point editor" node to precisely mark the inpaint regions while keeping the rest static.
Video Generation: The AI produces subsequent video frames based on the anchored image and mask as a guide for motion. Lovis refers to this as a “video reverse” process, where detailed input gives the AI incredibly strong, clear direction for more stable, more believable animation.
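For the curious, Wan's image-to-video mode is also wrapped by diffusers, which mirrors the first-frame-anchor idea. A minimal sketch (the public Wan 2.1 checkpoint ID is an assumption, and the masked "point editor" control is ComfyUI-specific, so it isn't shown here):

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Public Wan 2.1 image-to-video checkpoint (assumed; heavy on VRAM).
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# The inpainted frame from step 4 anchors the whole animation.
first_frame = load_image("scene_inpainted.png").resize((832, 480))

frames = pipe(
    image=first_frame,
    prompt="slow cinematic push-in, clouds drifting behind the building",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "scene_animation.mp4", fps=16)
```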

If this has piqued your curiosity, it gets better because:
This entire sequence can be built as one cohesive workflow in ComfyUI.
And we’d love to show you how in our Advanced ComfyUI course.
Homework Time: What Can You Rebuild with AI?
Every creative has a bottleneck.
Maybe it’s the hours you lose tweaking renders.
Maybe it’s the back-and-forth with clients who don’t “see it” yet.
Or maybe it’s the uncertainty, wondering if your AI outputs will hold up in a real production environment.
This week, we’d like you to take 10 minutes to reflect on:
What part of your process could benefit from control and consistency?
Where would a structured workflow like Lovis’ save you the most time, or unlock the most creativity?
If you can identify that, you’re already halfway to solving it.
The other half? Learning how.
And that’s where we come in. 😎
Why This Matters
This is how professionals use AI.
Not merely to explore, but to deliver.
And it’s also exactly what we teach.
Inside the Advanced ComfyUI course, you’ll learn workflows just like this — plus how to build them into tools and apps.
In AI for Creative Leaders, we’ll show you how to scale these methods across teams, campaigns, and entire production pipelines.
It’s time to move beyond the prompts, people. 🌞
--------------------------------------------------------------------------------
That’s it for now: Thanks for reading and building this new era with us.
Step by step, node by node — and away we go!
Keep creating and always remember to have fun.
— Adam & the Lighthouse AI Academy Team ☀️