Luma AI, the frontier artificial intelligence company building multimodal AGI, today introduced Ray3 Modify, a next-generation workflow that allows real-life actor performances to be enhanced with AI, enabling creative teams to produce Hollywood-quality performances and scenes. Ray3 Modify allows recorded footage to be edited in powerful new ways and is a visual effects powerhouse rolled into a single model. Available now on Luma AI’s Dream Machine platform, Ray3 Modify addresses one of the fundamental limitations of early AI video systems: their inability to reliably follow and preserve human performance.
Historically, AI-generated video has struggled to preserve timing, motion, and emotional intent during acting and scene transformations. Ray3 Modify generates content in direct response to human-led input footage, enabling brands to work with actors to promote products and personalize content while maintaining alignment with the original performance. For filmmakers, this means directors can film actors in any world, scene, or style.
With Ray3 Modify, the human performer, camera operator, or physical input becomes the source of direction for AI. This enables AI to follow real-world motion, timing, framing, and emotional delivery. By conditioning AI generation on human-led input footage, Ray3 Modify significantly reduces guesswork and enables creators to guide shots closer to their original intent.
Ray3 Modify enables a new class of AI workflows for production, where predictability, continuity, and repeatability are essential. Ray3 Modify preserves original performance while allowing creators to change environments, styling, cinematography, and visual interpretation with intentional, scene-aware fidelity.
“Generative video models are incredibly expressive but also hard to control. Today, we are excited to introduce Ray3 Modify, which blends the real world with the expressivity of AI while giving full control to creatives. This means creative teams can capture a performance with a camera and then immediately modify it to be in any location imaginable, change costumes, or even go back and reshoot the scene with AI, without recreating the physical shoot,” said Amit Jain, CEO and Co-founder of Luma AI.
NEW CAPABILITIES IN RAY3 MODIFY EXTEND RAY3’S LEADERSHIP IN AI PRODUCTION
Ray3 Modify introduces four major advancements designed for real creative pipelines across media, advertising, and film VFX, and for creative professionals:
Keyframes (Start & End Frames)
Ray3 Modify introduces Start and End Frame control to the video-to-video workflow for the first time. This allows creative teams to guide transitions, control character behavior, and maintain spatial continuity across longer camera movement passes, reveals, and complex scene blocking.
Character Reference
Apply any custom character identity onto an actor’s original performance: a crucial capability for actor-led AI projects. For the first time, the feature allows locking the likeness, costume, and identity continuity of a specific character across an entire shot.
Performance Preservation
Ray3 Modify preserves an actor’s original motion, timing, eye line, and emotional delivery as the foundation of the scene, while allowing visual attributes and environments to be intentionally transformed.
Enhanced Modify Video Pipeline
A new high-signal model architecture delivers more reliable adherence to physical motion, composition, and performance. Scenes now change in ways that respect the human-led input footage, enabling intentional edits without destabilizing continuity or identity.
Together, these capabilities establish Ray3 Modify as a next-generation AI video tool purpose-built for hybrid AI production, where creative authority starts with the performer or camera, and the AI extends, interprets, or transforms that direction.