Character.AI, a leading platform for chatting and roleplaying with AI-generated characters, unveiled its forthcoming video generation model, AvatarFX, on Tuesday. Available in closed beta, the model animates the platform’s characters in a variety of styles and voices, from human-like characters to 2D animal cartoons.
AvatarFX distinguishes itself from competitors like OpenAI’s Sora because it isn’t solely a text-to-video generator. Users can also generate videos from preexisting images, which means they can animate photos of real people.
It’s immediately evident how this kind of tech could be leveraged for abuse — users could upload photos of celebrities or people they know in real life and create realistic-looking videos in which they do or say something incriminating. The technology to create convincing deepfakes already exists, but incorporating it into popular consumer products like Character.AI only exacerbates the potential for it to be used irresponsibly.
Character.AI told TechCrunch that it will apply watermarks to videos generated with AvatarFX to make it clearer that the footage isn’t real. The company added that its AI will block the generation of videos of minors, and that images of real people get filtered through the AI to change the subject into a less recognizable person. The AI is also trained to recognize images of high-profile celebrities and politicians to limit the potential for abuse.
Since AvatarFX is not widely available yet, there is no way to verify how well these safeguards work.
Character.AI is already facing issues with safety on its platform. Parents have filed lawsuits against the company, alleging that its chatbots encouraged their children to self-harm, to kill themselves, or to kill their parents.
In one case, a 14-year-old boy died by suicide after he reportedly developed an obsessive relationship with an AI bot on Character.AI based on a “Game of Thrones” character. Shortly before his death, he’d opened up to the AI about having thoughts of suicide, and the AI encouraged him to follow through on the act, according to court filings.
These are extreme examples, but they show how people can be emotionally manipulated by AI chatbots through text messages alone. With the incorporation of video, the relationships that people have with these characters could feel even more realistic.
Character.AI has responded to the allegations against it by building parental controls and additional safeguards, but as with any app, controls are only effective when they’re actually used. Oftentimes, kids use tech in ways that their parents don’t know about.
Updated, 4/23/25, 9:45 AM ET with comment from Character.AI