Lights, camera… algorithm?
The silver screen is no longer the exclusive domain of flesh-and-blood actors, thanks to recent advances in AI. These days, it’s remarkably easy to generate videos in which actors do or say things they never actually did, or to take a clip from a film and replace the actor’s face with someone else’s.
Tencent’s open-source AI video generator, Hunyuan, has just integrated Low-Rank Adaptation (LoRA) support, which means you can now train custom styles, characters, and movements, making your AI videos truly unique and personalized.
Hunyuan launched in December 2024 and quickly impressed the AI community with its 95.7% visual quality score, outperforming many of its competitors. Now, with LoRA integration, it’s even more powerful. This free, open-source AI video generator packs the punch of pricey options like OpenAI’s Sora, which, by the way, can cost as much as $200 per month.
There are a few ways to generate a deepfake video of an actor; the two I’ll demonstrate here are text-to-video and video-to-video.
Let’s look at text-to-video in practice. Using a model that’s been trained on Keanu Reeves’s photos as John Wick, you could prompt:
Prompt: John Wick, a man with long hair and a beard, wearing a dark suit and tie in a church. He has a serious expression on his face and is holding a gun in his right hand. The scene is dimly lit, creating a tense atmosphere.
This is super cool. It looks like a genuine outtake from the John Wick franchise. Even the people in the background look real.
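If you’d rather script this than click through a web UI, the Replicate Python client is an easy route. Here’s a minimal sketch, assuming Replicate’s hosted Hunyuan endpoint is named tencent/hunyuan-video and accepts a prompt field; double-check the model page for the exact slug and input schema.

```python
# Minimal text-to-video sketch with the Replicate Python client.
# Assumes REPLICATE_API_TOKEN is set in your environment, and that the
# "tencent/hunyuan-video" slug and its "prompt" field match the model
# page -- verify both before running.
import replicate

output = replicate.run(
    "tencent/hunyuan-video",
    input={
        "prompt": (
            "John Wick, a man with long hair and a beard, wearing a dark "
            "suit and tie in a church. He has a serious expression on his "
            "face and is holding a gun in his right hand. The scene is "
            "dimly lit, creating a tense atmosphere."
        ),
    },
)
print(output)  # typically a URL to the generated video file
```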
Alternatively, with video-to-video, you could feed an existing movie clip into the same model and ask the AI to replace the lead actor’s face with John Wick’s.
Here’s the result:
Seeing this in action for the first time is shocking in the best possible way. The realism is close to mind-blowing, especially when the AI nails the subtle movements in an actor’s expression. Hair, lighting, shading—it's all getting better with every new version of these models.
I’ve seen a bunch of video-to-video models in recent months, but this is by far the best in terms of quality.
Downloading Models and Running Workflows
If you’re interested in experimenting with these methods, you can find the video model used in the John Wick demo on CivitAI. However, the workflow is not yet publicly available. You’ll need to piece it together on your own or wait for official documentation.
If you have a powerful Mac, you can attempt to run the workflow locally as well.
In my experience, it’s not too difficult to get everything working if you’re comfortable with basic command-line operations, Docker, or local AI dev environments. Still, the learning curve can be a bit steep if you’re totally new to AI model deployment.
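Before committing to a multi-gigabyte download, it’s worth confirming that PyTorch can see your Mac’s GPU at all. Here’s a quick sanity check, assuming a PyTorch-based pipeline, which is typical for Hunyuan workflows:

```python
# Quick sanity check that PyTorch can use Apple's Metal (MPS) backend.
# Assumes a PyTorch-based workflow, which is typical for Hunyuan pipelines.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    # Run a tiny op on the GPU to confirm it actually works.
    x = torch.randn(1024, 1024, device=device)
    print("MPS is available; sample matmul OK:", (x @ x).shape)
else:
    print("MPS not available -- generation will fall back to the CPU "
          "and be painfully slow.")
```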
Okay, now let me show you how you can train an AI model with your own video input. We’ll be using the zsxkib/hunyuan-video-lora model from Replicate.
Two important parameters you need to fill in are the destination model and the input video files.
Upon creation, you will be redirected to the training detail page, where you can monitor the training’s progress, download the weights, and run the trained model.
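You can also kick off the same training from Python instead of the web form. Here’s a minimal sketch; the version hash, destination name, and input field name are placeholders, so copy the real values from the model’s training tab on Replicate:

```python
# Start a LoRA training run on Replicate from Python.
# The version hash, destination, and input field name are placeholders --
# copy the exact values from the zsxkib/hunyuan-video-lora page.
import replicate

training = replicate.trainings.create(
    version="zsxkib/hunyuan-video-lora:VERSION_HASH",  # placeholder hash
    destination="your-username/my-hunyuan-lora",       # model that receives the weights
    input={
        "input_videos": "https://example.com/training-clips.zip",  # placeholder URL
    },
)
print(training.id, training.status)  # poll this, or watch the web UI
```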
In the prompt string, describe the kind of video you want to see. In the example below, the input videos are from BlackPink’s Rose.
Prompt: In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera
Once all the parameters are set, click on the “Boot + Run” button and wait for the final video to be generated. The 2-second clip below was generated in 1.5 minutes.
It looks awesome! The final result is a somewhat blurred but eerily realistic shot that captures Rose’s likeness and even gets the shadow casting right.
This model costs approximately $0.20 per run on Replicate, or about 5 runs per $1, though the exact price varies with your inputs. Better yet, the model is open source, so you can run it locally on your own machine using Docker. For anyone looking to keep costs down or maintain full control of their pipeline, that’s a huge plus.
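And if you’d rather script generation than press buttons, the trained model can be called like any other Replicate model. Again, the model name, version hash, and input keys below are placeholders:

```python
# Generate a clip from the trained LoRA model via the Replicate client.
# The model name, version hash, and input keys are placeholders -- check
# the schema on your trained model's page.
import replicate

output = replicate.run(
    "your-username/my-hunyuan-lora:VERSION_HASH",
    input={
        "prompt": (
            "In the style of RSNG. A woman with blonde hair stands on a "
            "balcony at night, framed against a backdrop of city lights."
        ),
    },
)
print(output)  # typically a URL to the generated video file
```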
Training LoRAs can feel surprisingly straightforward. Before LoRA was popularized, you had to fine-tune huge portions of a model, which required time, money, and powerful hardware.
With LoRA, you only train a small matrix (or a set of low-rank matrices) inserted into the larger model. This means drastically fewer parameters, drastically shorter training times, and drastically reduced computational overhead.
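To make that concrete, here’s a toy LoRA layer in PyTorch. This illustrates the general pattern, not Hunyuan’s actual implementation: freeze the big weight matrix and learn only a small low-rank update on top of it.

```python
# Toy LoRA layer in PyTorch: a frozen base weight plus a trainable
# low-rank update B @ A. General pattern only -- not Hunyuan's code.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the big matrix stays frozen

        # Two small matrices whose product is the low-rank update to W.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * x @ (B @ A)^T
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # a tiny fraction
```

With rank 8 on a 4096×4096 layer, that’s about 65K trainable parameters instead of nearly 17 million, which is exactly why these trainings finish in minutes rather than days.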
Even though you can train these smaller “adaptation heads” quickly, the results often look almost as good as a full model training session. It’s this combination of efficiency and quality that gets me excited about where the industry is heading.
We’re at a point now where a single developer, or even a motivated hobbyist, can create specialized and high-quality AI that used to be the domain of big tech companies.
Moreover, LoRAs give developers more creative control. Whether I’m training a model to replicate a particular aesthetic, like a painterly look or a vintage film appearance, or to capture the subtle facial structure and expressions of a certain celebrity, LoRA-based training consistently delivers.
It’s efficient, it’s modular, and it’s quite cheap to do.
Now, let’s talk about the ethical implications of this deepfake tech.
A famous example that predates the current AI wave is Peter Cushing being resurrected through CGI in 2016’s Rogue One: A Star Wars Story.
Peter Cushing originally portrayed Grand Moff Tarkin in Star Wars: A New Hope (1977) but passed away in 1994. Since Rogue One leads directly into A New Hope, Tarkin’s character was considered integral to the storyline, prompting producers to digitally bring Cushing back.
This decision ignited a debate: Is it ethically acceptable to feature a deceased actor in a new film, using digital technology, without their explicit consent?
Some argue that it preserves continuity and pays homage to beloved characters. Others contend that it infringes on an actor’s right to control their image and legacy.
If an actor never consented to appear in a certain story or context while alive, does the studio have the moral right to digitally resurrect them?
These issues extend beyond the grave. What about actors who are still alive but don’t want to participate in a specific project? Historically, lookalikes have been used in cinema or satire to depict famous figures.
Today, AI can generate a near-perfect replica of someone’s face and voice, effectively removing any barrier to forced participation. Contracts about image use, intellectual property, and moral rights will all need major overhauls to keep up with this reality.
In many jurisdictions, there are laws about “post-mortem publicity rights,” which allow heirs to control how a deceased individual’s likeness is used commercially. But laws vary widely across countries and are often slow to react to new technologies.
Also, the potential for political manipulation, character assassination, and fake news is huge. In a world already rife with misinformation campaigns, deepfakes could become a potent tool in the wrong hands.
Yet, as with most technologies, it’s a double-edged sword. The same technology that can create malicious political propaganda can also be used for satire, legitimate artistic expression, or advanced filmmaking techniques. The responsibility lies in how we choose to use it—and how society and law enforcement handle misuse.
Another angle to consider is intellectual property (IP) rights.
If I upload clips of an actor to train an AI model, do I have the legal right to do so? Where does “fair use” come into play, and how does it intersect with a person’s right to control their own image?
We’re seeing early lawsuits and legal threats that challenge how generative AI companies use training data. For instance, remember when Scarlett Johansson threatened legal action against OpenAI, expressing her anger over a ChatGPT voice that “sounded so eerily similar” to hers?
In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.
It’s likely only a matter of time before we see parallel litigation concerning video and face data.
For developers and content creators, these questions can be perplexing. The technology is evolving faster than legal and ethical frameworks can keep up. It’s hard to control how people use this tech, but the least we can do for now is this: if we’re training a model on someone’s likeness, we should be very clear about the source material and the intended use of the resulting model. While that won’t solve every legal question, it’s a start in establishing trust and responsibility.
After spending countless hours exploring Hunyuan, LoRA, and other AI video generation tools, I can confidently say we’re standing at the precipice of a new era in filmmaking, entertainment, and digital media.
The training process on Replicate is simple and much cheaper than Fal AI, which makes it easy for anyone to try. The results are pretty good too. For example, the model of BlackPink’s Rose really looks like her, showing how accurate the tool can be.
The Keanu Reeves face transfer using the Hunyuan video model is one of the most realistic examples I’ve seen. The face looks very real and blends smoothly with the background. There are a few tiny issues if you look closely, like with the hair, but most people wouldn’t even notice.
What’s exciting is how easy this technology is to use. The idea of bringing back the likeness of long-gone actors or even family members, although a bit unsettling, is honestly fascinating.
As a developer working with AI, I’m looking forward to adding this feature to Flux Labs AI. If you’re a developer working on AI-powered apps, the Hunyuan video model is worth checking out. It’s a tool with lots of potential.
Software engineer, writer, solopreneur