Week 3: Exploring Local Large Language Models

Hello Friend

Today I want to share a look inside our class, AI for Creative Leaders.

If you’re looking to learn and follow our process, this is a great opportunity to do so.

The class is packed, and there is a lot to go through. I’ll share the journey week by week.

Hope you join us.

If you’re interested in joining our classes, we have two courses open:

Advanced Comfy Course - A specialty course focused on everything ComfyUI: installation in different environments, a deep dive into the interface and nodes, workflow breakdowns by industry experts, creating custom nodes, and deploying workflows as an API.

AI for Creative Leaders - Position yourself as the Gen AI expert in your industry. Covers legal, privacy, copyright, creative process building, team training, client communication, and tools across text, image, video and voice.

We have a great group of creatives who have enrolled in the respective courses and we would love to welcome you.

A few words from Julian

To give you a glimpse of our class, I find it best to let our students share their experience.

This post is written by Julian, one of our students: a video editor and colourist with over 30 years of experience in high-end post-production, looking to integrate AI into his workflow where it actually adds value.

"Basically, you have to do all the work that OpenAI has done."

The third week of the Lighthouse AI Academy took us into the world of local Large Language Models (LLMs) - a viable alternative to cloud-based AI like ChatGPT. Under the guidance of Nejc (Nates) and Luka, we uncovered the mechanics of running AI on our own machines. The big question: why would anyone use a local LLM instead of the powerful, always-online models we've come to rely on?

The answer lies in a trade-off between convenience and control.

Local vs. Global LLMs: What’s the Difference?

The session kicked off with a discussion on the key differences between cloud-based (global) LLMs and local LLMs. Nejc laid it out in simple terms: “When you use ChatGPT or Claude, you’re sending your data to a server, somewhere far away, where it’s processed and sent back to you. With a local LLM, everything happens on your own machine.”

Why does that matter? Privacy, security, and control. Cloud-based LLMs, like OpenAI’s GPT models, require an internet connection, meaning they handle queries remotely. This is great for scalability and power but raises concerns about data privacy (who sees your data?) and cost (API fees can add up quickly).

On the other hand, local LLMs run entirely on your device. This means that no data leaves your machine, making it better suited for confidential work. It also allows you to customise the way you work with your model, fine-tuning it for specific needs. And while large-scale commercial LLMs charge for API access, local models eliminate these costs. However, they come with their own challenge: hardware requirements. Running a large model locally requires significant computing power.
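
To make that concrete, here's a minimal sketch of what "everything happens on your own machine" looks like in practice. It assumes a local runtime serving an OpenAI-compatible API on localhost - a common convention among local-LLM tools - and the port, key and model name are placeholders for your own setup.

```python
# Minimal sketch: talk to a model running on your own machine through an
# OpenAI-compatible local server. The base_url, api_key and model name are
# placeholders; nothing in this request leaves localhost.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # placeholder; the server uses whatever model is loaded
    messages=[{"role": "user", "content": "Summarise local vs cloud LLMs in one line."}],
)
print(response.choices[0].message.content)
```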

Another thing to remember about local models is that once they're released, you don't know if or when they'll be updated. You're working with what you have, without any guarantee of future improvements or fixes. That can be a dealbreaker, depending on your use case.

The technical landscape of local LLMs

The range of local LLMs is quite extensive, from smaller models with 1-3 billion parameters to massive ones with over 400 billion parameters. Each represents a different balance between speed and accuracy. The smaller models run faster but with less precision, while larger ones can match commercial LLMs but require significant computing power.
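
A back-of-envelope calculation shows what that means in memory terms. As a rule of thumb, a model needs roughly its parameter count times the bytes per weight, plus overhead for the context and runtime; the figures below are estimates, not measurements.

```python
# Rough rule of thumb: memory ≈ parameters × bytes per weight.
# Estimates only; real usage adds overhead for context and runtime buffers.
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(3, 16), (7, 4), (70, 4), (405, 4)]:
    print(f"{params}B parameters at {bits}-bit: ~{model_memory_gb(params, bits):.1f} GB")
```

Dropping from 16-bit to 4-bit weights cuts the footprint to a quarter, which is exactly what the quantization mentioned in the next paragraph makes possible.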

What makes these models distinct is their configurability. You can control how the model responds through temperature settings (affecting how rigid or creative the responses are), manage the context window (how much conversation it remembers), and use quantization (compression) to run larger models on more modest hardware. These technical choices directly influence the model's behaviour and performance, but require understanding how each setting affects the output.
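
Here's a small sketch of those knobs in code, using the llama-cpp-python bindings as one example runtime (my choice for illustration, not something the class prescribed). The model path is a placeholder for a quantized GGUF file; the Q4_K_M in the filename marks a 4-bit quantization.

```python
# Sketch: the three knobs from above - quantization (the file you load),
# context window (n_ctx) and temperature - using llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path; Q4 = 4-bit
    n_ctx=4096,  # context window: how many tokens of conversation the model keeps
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Pitch three titles for a short film about glaciers."}],
    temperature=0.9,  # higher = looser, more creative; lower = more rigid and repeatable
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```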

Why use a local LLM?

“If we know local models exist, how would you use them? Why not just use ChatGPT?”

The class had plenty of ideas. One of the biggest advantages of local models is privacy. If you’re working with sensitive legal or medical data, you don’t want it floating around the cloud. Customisation is another perk. What if you could train an AI model on your own scripts, footage, or research? That level of personalisation isn’t possible with commercial LLMs out of the box.

There’s also the financial side. If you interact with AI a lot, a local model might be more cost-effective than repeatedly paying for API access. But, as we quickly learned, the trade-off is performance. Large models eat up RAM and processing power.
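
The break-even is easy to sketch, even if every number in it is an assumption rather than a quote:

```python
# Illustrative only: all prices and volumes below are made-up assumptions.
api_price_per_1m_tokens = 10.0   # assumed blended cloud-API rate, $/1M tokens
tokens_per_month = 20_000_000    # assumed heavy monthly usage
hardware_cost = 2500.0           # assumed one-off cost of a capable local machine

monthly_api_cost = tokens_per_month / 1_000_000 * api_price_per_1m_tokens
print(f"API: ${monthly_api_cost:.0f}/month; "
      f"hardware pays for itself in ~{hardware_cost / monthly_api_cost:.0f} months "
      f"(ignoring electricity and your own time).")
```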

Hands-On: Testing Local Models

After covering the theory, it was time to get our hands dirty. We tested different open-source LLMs, including:

  • LLaMA - Meta’s family of open models, available in a range of sizes

  • LM Studio - a local interface that simplifies model setup

  • AnythingLLM - a tool for running and managing local models

The goal was to replicate what we did with ChatGPT in the past: generate text, test responses, and see how local AI stacks up. Some models handled prompts well; others felt sluggish.

“Try the same prompts as last week and compare the results.” It turned out that local models could be just as capable, though sometimes slower and requiring a bit more tweaking.
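
A small harness makes that comparison repeatable: the same prompt to each model, with a timer. This sketch assumes an OpenAI-compatible local server such as the one LM Studio runs; the model names are placeholders for whatever you have loaded.

```python
# Sketch: send one prompt to several local models and time each response.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
prompt = "Write a 50-word teaser for a documentary about lighthouses."

for model_name in ["llama-3-8b-instruct", "phi-3-mini"]:  # placeholder names
    start = time.time()
    resp = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model_name} ({time.time() - start:.1f}s) ---")
    print(resp.choices[0].message.content)
```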

For me, the road will eventually lead to ComfyUI integration. ComfyUI is the open-source interface I now prefer because of the flexibility it gives the user to build workflows. The idea of using local LLMs to analyse images, write prompts, combine texts and generate images is exciting. The possibilities seem endless!
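
As a first taste of that direction, here's a sketch of the "local LLM writes the image prompt" idea, reusing the same assumed local server as in the earlier sketches; the model name is again a placeholder.

```python
# Sketch: have a local model expand a rough concept into a detailed prompt
# you could paste into a text-to-image workflow such as one built in ComfyUI.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # placeholder
    messages=[
        {"role": "system",
         "content": "Expand short concepts into detailed, comma-separated prompts for an image model."},
        {"role": "user", "content": "a lighthouse at dusk, film-noir mood"},
    ],
    temperature=0.8,
)
print(resp.choices[0].message.content)
```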

Takeaways & Next Steps

By the end of the session, we had a clearer picture of how local models fit into the AI workflow. They offer a powerful alternative to cloud-based AI, particularly for privacy-conscious users or those looking to avoid recurring costs. They require more setup and computing power, but in return, they offer a lot of control and flexibility.

Experimenting with local models is a great way to learn how they actually work and to see where each one's strengths and weaknesses lie. Development is moving fast, and the competition is fierce.

I believe we will see further integration of external tools in our software of choice. For example, take Photoshop's generative fill - it's an external capability that's been integrated directly into the software, available when needed. I expect similar integrations will appear in many post-production tools.
I quite like the way that ComfyUI connects with external APIs, allowing custom configurations for different jobs and different users.

Now, depending on how open Blackmagic Design (DaVinci Resolve), Adobe (Premiere Pro) and Apple (Final Cut Pro) want to be towards external developers, this can go beyond traditional plug-ins. Time will tell soon enough!

Want to follow along with our class?

Take a moment and test out a local Large Language Model setup.

To get started, you can watch these video guides from our Academy. They will help you install the interfaces and load models.

Use the local model as you would ChatGPT or any other commercial LLM.

See how it compares.

Looking Ahead: Image Generation

In our next lesson, we'll explore AI-driven image generation, moving from text-based models to creating visuals.

If you’re interested in joining our classes, we have two courses open:

Advanced Comfy Course - A specialty course focused on everything ComfyUI: installation in different environments, a deep dive into the interface and nodes, workflow breakdowns by industry experts, creating custom nodes, and deploying workflows as an API.

AI for Creative Leaders - Position yourself as the Gen AI expert in your industry. Covers legal, privacy, copyright, creative process building, team training, client communication, and tools across text, image, video and voice.

We have a great group of creatives who have enrolled in the respective courses and we would love to welcome you.