AI news
February 12, 2025

This AI Tool Replaces Actors In A Film

Tencent's open-source AI video generator, Hunyuan, has just integrated Low-Rank Adaptation (LoRA) support.

by Jim Clyde Monge

Lights, camera… algorithm?

The silver screen is no longer the exclusive domain of flesh-and-blood actors, all thanks to the recent advancements in AI. These days, it’s remarkably easy to generate videos in which actors do or say things they never actually did or to take a clip from a film and replace the actor’s face with someone else’s.

Tencent’s open-source AI video generator, Hunyuan, has just integrated Low-Rank Adaptation (LoRA) support, which means you can now train custom styles, characters, and movements, making your AI videos truly unique and personalized.

Hunyuan was launched in December 2024 and quickly impressed the AI community with its 95.7% visual quality score, outperforming many of its competitors. Now, with LoRA integration, it’s even more powerful. This free, open-source AI video generator packs the punch of pricey options like OpenAI’s Sora, which, by the way, can cost you as much as $200 per month.

How Do These AI Tools Work?

There are three ways to generate a deepfake video of an actor.

  1. Text-to-video: You can use a model fine-tuned on images of the actor. Simply describe the video you’d like to generate, and the AI will render a clip of that actor guided by the description.
  2. Image-to-video: If you don’t have a model trained on image samples, you can also use a single image of an actor and turn it into a video. This approach is offered by popular platforms such as Kling AI, Runway, and Pika Labs.
  3. Video-to-video: Using an existing video clip and replacing the face of the actor with another character is also possible. This is probably the most effective way to produce a deepfake of an actor in a movie clip.

Let’s look at text-to-video in practice. Using a model that’s been trained on Keanu Reeves’s photos as John Wick, you could prompt:

Prompt: John Wick, a man with long hair and a beard, wearing a dark suit and tie in a church. He has a serious expression on his face and is holding a gun in his right hand. The scene is dimly lit, creating a tense atmosphere.
GIF from CivitAI

This is super cool. It looks like a genuine outtake from the John Wick franchise. Even the people in the background look convincingly real.
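If you’d rather script this kind of text-to-video generation than click through a web UI, here’s a minimal sketch using Replicate’s Python client. The model slug and input keys below are assumptions based on typical Replicate schemas (and a LoRA-aware variant would also need the trained John Wick weights), so check the actual model page before running:

    # pip install replicate; set REPLICATE_API_TOKEN in your environment.
    import replicate

    # Model slug and input keys are assumptions; verify on replicate.com.
    output = replicate.run(
        "tencent/hunyuan-video",
        input={
            "prompt": (
                "John Wick, a man with long hair and a beard, wearing a dark "
                "suit and tie in a church, holding a gun in his right hand. "
                "The scene is dimly lit, creating a tense atmosphere."
            ),
        },
    )
    print(output)  # typically a URL to the generated video file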

Alternatively, with video-to-video, you could feed an existing movie clip into the same model and ask the AI to replace the lead actor’s face with John Wick’s.

Here’s the result:

Just look at how seamless and realistic the deepfake video is. I’ve seen a bunch of video-to-video models in recent months, but this is by far the best in terms of quality.
GIF from @allhailthealgo

Seeing this in action for the first time is shocking in the best possible way. The realism is close to mind-blowing, especially when the AI nails the subtle movements in an actor’s expression. Hair, lighting, shading—it’s all getting better with every new version of these models.

Downloading Models and Running Workflows

If you’re interested in experimenting with these methods, you can find the video model used in the John Wick demo on CivitAI. However, the workflow is not yet publicly available. You’ll need to piece it together on your own or wait for official documentation.

If you have a powerful Mac, you can attempt to run the workflow with this setup:

  1. Pinokio
  2. Hunyuan Video
  3. CivitAI John Wick LoRA
ComfyUI workflow for CivitAI John Wick LoRA

In my experience, it’s not too difficult to get everything working if you’re comfortable with basic command-line operations, Docker, or local AI dev environments. Still, the learning curve can be a bit steep if you’re totally new to AI model deployment.
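As a rough sketch of the manual route, the snippet below downloads a LoRA file into the folder where ComfyUI expects LoRAs to live. The download URL is a hypothetical placeholder; use the real link from the CivitAI page mentioned above.

    # Sketch: fetch a LoRA and place it in ComfyUI's loras directory.
    # The URL is a placeholder; copy the real one from the CivitAI page.
    from pathlib import Path
    import requests

    url = "https://civitai.com/api/download/models/<model-id>"  # hypothetical
    dest = Path("ComfyUI/models/loras/john_wick_hunyuan.safetensors")
    dest.parent.mkdir(parents=True, exist_ok=True)

    resp = requests.get(url, timeout=600)
    resp.raise_for_status()
    dest.write_bytes(resp.content)
    print(f"Saved {len(resp.content):,} bytes to {dest}")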

How to Train Your Own LoRA for the Hunyuan Video Model

Okay, now let me show you how you can train an AI model with your own video input. We’ll be using the zsxkib/hunyuan-video-lora model from Replicate.

Image by Jim Clyde Monge

Two important parameters you need to fill in are the destination model and the input video files.

  • Select a model on Replicate that will be the destination for the trained version. If the model does not exist, select the “Create model” option, and a field will appear to enter the name of the new model.
  • Upload a zip file containing the videos that will be used for training. If you include captions, add them as one .txt file per video, e.g., my-video.mp4 should have a caption file named my-video.txt. If you don’t include captions, you can use autocaptioning (enabled by default).
Image by Jim Clyde Monge
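If you’d rather kick off training from code than from the web form, here’s a hedged sketch using Replicate’s Python client. The version hash is a placeholder, and input keys like input_videos, trigger_word, and autocaption are assumptions; the model’s training page lists the real schema.

    # Sketch: start a Hunyuan LoRA training job on Replicate from Python.
    # Version hash, input keys, and destination name are placeholders.
    import replicate

    training = replicate.trainings.create(
        version="zsxkib/hunyuan-video-lora:<version-hash>",
        input={
            "input_videos": open("training_clips.zip", "rb"),  # videos + optional .txt captions
            "trigger_word": "RSNG",  # token that invokes the style at inference
            "autocaption": True,     # used when no .txt captions are included
        },
        destination="your-username/my-hunyuan-lora",
    )
    print(training.status)  # then monitor on the training detail page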

Upon creation, you will be redirected to the training detail page, where you can monitor your training’s progress, download the weights, and run the trained model.

In the prompt string, describe the kind of video you want to see. In the example below, the input videos are of BLACKPINK’s Rosé.

Prompt: In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera
Image by Jim Clyde Monge

Once all the parameters are set, click on the “Boot + Run” button and wait for the final video to be generated. The 2-second clip below was generated in 1.5 minutes.

GIF from Replicate

It looks awesome! The final result is a somewhat blurred but eerily realistic shot that captures Rosé’s likeness and even gets the shadow casting right.
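Once training finishes, generation can be scripted the same way. Another hedged sketch; the model slug, version hash, and input keys below are placeholders for whatever your own training detail page shows:

    # Sketch: run the trained LoRA model; slug and version are placeholders.
    import replicate

    output = replicate.run(
        "your-username/my-hunyuan-lora:<version-hash>",
        input={
            "prompt": (
                "In the style of RSNG. A woman with blonde hair stands on a "
                "balcony at night, framed against a backdrop of city lights."
            ),
        },
    )
    print(output)  # usually a URL to the rendered clip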

This model costs approximately $0.20 per run on Replicate, or about 5 runs per $1, though the exact cost varies depending on your inputs.

Better yet, the model is open source, so if you prefer, you can run it locally on your own machine using Docker. For those looking to keep costs down or maintain full control of their pipeline, that’s a huge plus.

My Personal Take on Training LoRAs

Training LoRAs can feel surprisingly straightforward. Before LoRA was popularized, you had to train huge segments of a model, which required time, money, and powerful hardware.

With LoRA, you only train a small matrix (or a set of low-rank matrices) inserted into the larger model. This means drastically fewer parameters, drastically shorter training times, and drastically reduced computational overhead.
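To make that concrete, here’s a toy PyTorch sketch of the core idea (not Hunyuan’s actual implementation): the pretrained weight stays frozen, and only two small rank-r matrices are trained, so the effective weight becomes W + (alpha/r) * B @ A.

    # Toy LoRA linear layer: freeze the base weight, train only B @ A.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, in_features, out_features, rank=8, alpha=16):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            self.base.weight.requires_grad_(False)  # frozen pretrained weight
            self.base.bias.requires_grad_(False)
            self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no change at start
            self.scale = alpha / rank

        def forward(self, x):
            # Base output plus the scaled low-rank update.
            return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

    layer = LoRALinear(768, 768)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable:,} of {total:,} params")

At rank 8 on a 768-wide layer, that works out to roughly 12,000 trainable parameters against about 600,000 in the full layer, which is exactly why these training runs are so fast and cheap.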

LoRA fine-tunes the cross-attention layers (the QKV parts of the U-Net noise predictor). (Figure from the Stable Diffusion paper.)

Even though you can train these smaller “adaptation heads” quickly, the results often look almost as good as a full model training session. It’s this combination of efficiency and quality that gets me excited about where the industry is heading.

We’re at a point now where a single developer, or even a motivated hobbyist, can create specialized and high-quality AI that used to be the domain of big tech companies.

Moreover, LoRAs give developers more creative control. Whether I’m training a model to replicate a particular aesthetic, such as a painterly look or a vintage film appearance, or to capture the subtle facial structure and expressions of a certain celebrity, LoRA-based training consistently delivers.

It’s efficient, it’s modular, and it’s quite cheap to do.

Ethical Implications

Now, let’s talk about the ethical implications of this deepfake tech.

A famous example that predates the current AI wave is Peter Cushing being resurrected through CGI in 2016’s Rogue One: A Star Wars Story.

Peter Cushing originally portrayed Grand Moff Tarkin in Star Wars: A New Hope (1977) but passed away in 1994. Since Rogue One leads directly into A New Hope, Tarkin’s character was considered integral to the storyline, prompting producers to digitally bring Cushing back.

This decision ignited a debate: Is it ethically acceptable to feature a deceased actor in a new film, using digital technology, without their explicit consent?

Real and AI-generated photo of Peter Cushing

Some argue that it preserves continuity and pays homage to beloved characters. Others contend that it infringes on an actor’s right to control their image and legacy.

If an actor never consented to appear in a certain story or context while alive, does the studio have the moral right to digitally resurrect them?

These issues extend beyond the grave. What about actors who are still alive but don’t want to participate in a specific project? Historically, lookalikes have been used in cinema or satire to depict famous figures.

Today, AI can generate a near-perfect replica of someone’s face and voice, effectively removing any barrier to forced participation. Contracts about image use, intellectual property, and moral rights will all need major overhauls to keep up with this reality.

In many jurisdictions, there are laws about “post-mortem publicity rights,” which allow heirs to control how a deceased individual’s likeness is used commercially. But laws vary widely across countries and are often slow to react to new technologies.

Also, the potential for political manipulation, character assassination, and fake news is huge. In a world already rife with misinformation campaigns, deepfakes could become a potent tool in the wrong hands.

Yet, as with most technologies, it’s a double-edged sword. The same technology that can create malicious political propaganda can also be used for satire, legitimate artistic expression, or advanced filmmaking techniques. The responsibility lies in how we choose to use it—and how society and law enforcement handle misuse.

Intellectual Property Concerns

Another angle to consider is intellectual property (IP) rights.

If I upload clips of an actor to train an AI model, do I have the legal right to do so? Where does “fair use” come into play, and how does it intersect with a person’s right to control their own image?

We’re seeing early lawsuits that challenge how generative AI companies use training data. For instance, do you remember when Scarlett Johansson threatened legal action against OpenAI and expressed her anger over a ChatGPT voice that “sounded so eerily similar” to hers?

Related: Scarlett Johansson Is Not Happy With OpenAI Using Her Voice For ChatGPT (generativeai.pub)

In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.

It’s likely only a matter of time before we see parallel litigation concerning video and face data.

For developers and content creators, these questions can be perplexing. The technology is evolving faster than legal and ethical frameworks can keep up. It’s hard to control how people use this tech, but the least we can do for now is this: if we’re training a model on someone’s likeness, we should be very clear about the source material and the intended use of the resulting model. While that won’t solve every legal question, it’s a start in establishing trust and responsibility.

Final Thoughts

After spending countless hours exploring Hunyuan, LoRA, and other AI video generation tools, I can confidently say we’re standing at the precipice of a new era in filmmaking, entertainment, and digital media.

The training process on Replicate is simple and much cheaper than on Fal AI, which makes it easy for anyone to try. The results are pretty good too. For example, the model of BLACKPINK’s Rosé really looks like her, showing how accurate the tool can be.

The Keanu Reeves face transfer using the Hunyuan video model is one of the most realistic examples I’ve seen. The face looks very real and blends smoothly with the background. There are a few tiny issues if you look closely, like with the hair, but most people wouldn’t even notice.

What’s exciting is how easy this technology is to use. The idea of bringing back the likeness of long-lost actors or even family members, although a bit unsettling, is honestly fascinating.

As a developer working with AI, I’m looking forward to adding this feature to Flux Labs AI. If you’re a developer working on AI-powered apps, the Hunyuan video model is worth checking out. It’s a tool with lots of potential.

