Introduction
MetaHuman Animator, unveiled at GDC 2023 and launched with Unreal Engine 5.2, enables creators to produce high-fidelity facial animations for MetaHuman characters using only a PC and an iPhone. This Epic Games feature allows anyone to capture an actor’s performance and apply nuanced expressions to a digital human, delivering film-quality results without costly equipment or large teams. This article details its Unreal Engine 5 setup, the animation workflow, technical and creative aspects, comparisons with Faceware and ARKit, best practices, resources, and an FAQ for all skill levels.
What is MetaHuman Animator?
MetaHuman Animator, included in Unreal Engine 5.2’s MetaHuman plugin, captures facial performances with an iPhone or stereo camera, processes them locally on a GPU, and animates MetaHuman characters with detailed expressions, eye movements, and tongue motion in minutes, without manual keyframing. This free tool makes high-fidelity facial animation accessible to all Unreal Engine users, bridging real-world acting to realistic digital characters for indie creators and game developers.

Setting Up MetaHuman Animator in Unreal Engine 5
To animate with MetaHuman Animator, you need:
- Unreal Engine 5.2+ and MetaHuman Plugin: Install Unreal Engine 5.2 or later, enable the MetaHuman plugin from the Unreal Marketplace, and restart Unreal after activation.
- TrueDepth Camera Device: Use an iPhone X (or later) or iPad Pro (3rd Gen+) with Face ID, or a stereo head-mounted camera (HMC) like a helmet-mounted vertical stereo pair for precise capture.
- Powerful PC: Recommended specs are a high-end CPU, an NVIDIA RTX 3080 (or AMD equivalent) with 10+ GB VRAM, and 64 GB RAM; the stated minimum is an Intel i7-class CPU, an RTX 2070, and 8 GB VRAM. Windows 10/11 only.
- Live Link Face App: Install this free iOS app to stream facial data to Unreal Engine and record raw footage, connecting via the same network or USB.
- MetaHuman Character: Use a latest-version MetaHuman from MetaHuman Creator via Quixel Bridge or a premade one, ensuring compatibility with engine updates.
To set up Live Link between an iPhone and Unreal for MetaHuman Animator, create a MetaHuman Capture Source asset in Unreal, input the iPhone’s IP, and select “LiveLink Face” as the source. On the iPhone, open Live Link Face, enter the PC’s IP, and confirm the connection (green indicator). In Unreal, add a MetaHuman character, enable Live Link (choosing Live Link Face Head and Body for preview), then record and process performances using the MetaHuman Animator Capture window.
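If the green indicator never appears, the usual culprit is a wrong address on one side of the connection. The snippet below is a minimal, editor-independent sketch in plain Python, run on the capture PC, for finding the IPv4 address to type into Live Link Face; the default Live Link UDP port mentioned in the comment (commonly 11111) is an assumption to verify in your Live Link and app settings.

```python
# Minimal helper: print the PC's outbound IPv4 address for the Live Link Face app.
# Assumes the PC and iPhone share the same local network. The default Live Link
# UDP port (commonly 11111) is an assumption -- confirm it in your project settings.
import socket

def local_ipv4() -> str:
    """Return this machine's outbound IPv4 address (no packets are actually sent)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))  # any routable address works; UDP connect sends nothing
        return s.getsockname()[0]

if __name__ == "__main__":
    print(f"Enter this IP as the Live Link Face target: {local_ipv4()}")
```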
Step-by-Step Guide: Using MetaHuman Animator
Animating a MetaHuman with an actor’s performance involves these key steps:
- Capture Facial Performance: Use an iPhone with TrueDepth via the Live Link Face app or Unreal Editor. Frame the actor’s face to record video and depth data, streaming to Unreal. In-editor, use Take Recorder or the MetaHuman Capture Manager. Start with a neutral pose for calibration, then record and stop when done. Previously recorded clips can also be imported and processed.
- Create MetaHuman Identity (if needed): For better results when the MetaHuman’s face differs from the actor’s, use a MetaHuman Identity asset. Capture front and profile snapshots from the video, then run an Identity Solve to align the MetaHuman’s rig to the actor’s features, improving animation accuracy. Without this, default mapping may reduce precision.
- Process the Performance: In the MetaHuman Animator (MHA) panel, combine the video take, MetaHuman Identity, and target MetaHuman, then click Process. MHA’s machine-learning solver uses 2D video, depth data, and the calibrated model to create a detailed facial animation track in seconds to minutes, replicating blinks, smiles, and jaw movements.
- Review and Tweak (Optional): Check the animation in Unreal’s sequencer or preview. MHA provides smooth, accurate results with synced facial controls (brows, eyes, lips). Adjust specific controls or add layers via the Control Rig for precision. Less cleanup is needed than traditional mocap, though polishing is an option.
- Apply to Production & Combine with Body Animation: Export the facial performance as a Level Sequence or animation asset (a hedged scripting sketch follows this list). For cinematics, use Sequencer with body motion; for games, bake it onto the skeleton. Sync face and body via Live Link, mocap, or separate captures, attaching the head/neck bone to the body rig and adjusting head movement to avoid mismatches.
- Iteration and Additional Takes: Record multiple takes for different characters, process them as needed, and select the best. Blend takes for complex scenes, taking care to avoid artifacts at the transitions.
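For the export step above, the Unreal editor’s Python scripting can also wire a processed facial take into a Level Sequence. This is a hedged sketch, assuming the standard Sequencer scripting API in recent UE5 builds; the asset paths and the “Face” binding lookup are hypothetical placeholders, not names MetaHuman Animator guarantees.

```python
# Hedged sketch: attach a processed facial AnimSequence to a MetaHuman's face
# binding inside an existing Level Sequence, via Unreal's editor Python API.
# SEQUENCE_PATH, ANIM_PATH, and the "Face" binding name are hypothetical.
import unreal

SEQUENCE_PATH = "/Game/Cinematics/Seq_FacialTest"       # hypothetical Level Sequence
ANIM_PATH = "/Game/MetaHumans/Ada/Face_Performance01"   # hypothetical processed take

sequence = unreal.load_asset(SEQUENCE_PATH)
anim = unreal.load_asset(ANIM_PATH)

# Find the binding for the face skeletal mesh component (the display name varies).
face_binding = next(
    (b for b in sequence.get_bindings() if "Face" in str(b.get_display_name())),
    None,
)
if face_binding is None:
    raise RuntimeError("No 'Face' binding found; add the MetaHuman to the sequence first.")

# Add a skeletal animation track and point its section at the processed take.
track = face_binding.add_track(unreal.MovieSceneSkeletalAnimationTrack)
section = track.add_section()
section.set_range(0, 300)      # frame range; match your take length
params = section.params        # copy, edit, and write back the section parameters
params.animation = anim
section.params = params

unreal.EditorAssetLibrary.save_loaded_asset(sequence)
```

Running this from the editor’s Python console (or a startup script) keeps the step repeatable when takes are reprocessed; the same pattern extends to adding body-animation tracks alongside the facial one.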
MetaHuman Animator streamlines animation by quickly turning performances into animations with a simple record, process, and use workflow, outpacing manual methods in speed and ease for artists. Upcoming sections will detail its technical aspects and provide creative tips for realistic outcomes.

How MetaHuman Animator Works (Technical Breakdown)
MetaHuman Animator (MHA) uses advanced hardware and software for realistic MetaHuman animations:
- 4D Data Capture and Solving: MHA employs Epic’s “4D solver” with video frames, 2D images, 3D depth data (from infrared TrueDepth or stereo cameras), and a MetaHuman model to track facial movements precisely, outperforming sparse blendshape methods.
- MetaHuman Identity and Calibration: A calibrated MetaHuman Identity matching the actor’s neutral face enhances personalized animation, capturing unique nuances. Pre-solving with neutral frames boosts quality; without it, a less tailored generic mapping is used.
- Local GPU Processing: MHA processes video and depth data locally via GPU acceleration and Epic-trained machine learning in Unreal Engine, generating animation in minutes offline. Data stays secure on-device; only MetaHuman creation, not animation, sends scanned data to Epic’s servers.
- Rig and Animation Output: The MetaHuman rig includes control curves for brows, eyelids, lips, jaw, and tongue. MHA produces smooth, detailed animation outputs (e.g., raised eyebrow, eye darts), enhanced by UE5.2’s updated tongue model and mouth controls.
- Audio-Driven Animation: In Unreal Engine 5.5, MHA can generate facial animation from audio alone, predicting mouth and face movements without a camera. The result is less detailed (it cannot infer expressions that produce no sound, nor precise eye movement), but it serves as a refinable starting point.
- Limitations and Processing: High-quality, sharp, well-lit input at 60 fps is essential for the best results, and it produces large files (~800 MB per minute); a quick storage estimate is sketched after this list. Extreme movements or long takes may need to be split into segments for efficient processing, and stereo HMCs require calibration.
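As a concrete illustration of the file sizes above, here is a back-of-the-envelope capacity check in plain Python; the take counts, lengths, and free-disk figure are placeholder assumptions.

```python
# Rough capacity check for raw capture data, using the ~800 MB/minute figure
# quoted above for iPhone TrueDepth takes. All inputs are placeholder assumptions.
MB_PER_MINUTE = 800        # approximate video + depth footprint per captured minute
takes = 12                 # hypothetical number of takes in a session
minutes_per_take = 1.5     # hypothetical average take length
free_disk_gb = 250         # hypothetical free space on the capture drive

total_gb = takes * minutes_per_take * MB_PER_MINUTE / 1024
print(f"Estimated footage for the session: {total_gb:.1f} GB")
print(f"Fits on the capture drive: {total_gb < free_disk_gb}")
```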
MetaHuman Animator employs advanced computer vision and graphics to transform video and depth input into realistic facial animation through a straightforward pipeline. It leverages a learned facial behavior model to drive a detailed facial rig, letting creators produce authentic animation with little effort. The next section turns to creative applications and best practices for realistic, non-creepy results.
Best Practices for Realistic MetaHuman Animation
To maximize MetaHuman Animator (MHA) for realistic characters and avoid the “uncanny valley,” follow these best practices:
- Use a Head-Mounted Camera (HMC): An HMC rig lets the actor move naturally during facial capture, making it easier to blend face and body animation. Professional HMC setups keep the camera steady relative to the face.
- Capture Face and Body Together: Record both simultaneously with an HMC for natural coordination. If separate, sync body and head timing with the face, or capture the face with minimal rotation and align later, letting the body guide the head.
- High-Quality Footage: Use even lighting, avoid shadows, and ensure sharp focus. On iPhone, set 60fps and “High” quality, framing the full face without cropping. Secure HMC rigs to prevent wobble. Test audio, video, and depth clarity before long takes.
- MetaHuman Identity Solve: Calibrate with the actor’s neutral face and key expressions in MHA, tweaking in MetaHuman Creator to match their likeness and expressions, reducing “slippage” for authenticity.
- Avoid Uncanny Valley: Add subtle eye darts, blinks, breathing, or micro head turns to counter static eyes or symmetry. For tripod shots, keyframe neck or head bones to blend with facial animation.
- Blend Animations: Record full sequences or blend at neutral points (e.g., blinks) for smooth transitions. Retargeting works between MetaHumans but is complex for non-MetaHuman rigs.
- Iterate: Review processed takes with the team or actor. Retake with exaggerated or smaller motions if needed, leveraging quick recording and processing for multiple takes.
- Optimize for Games: Simplify dense keyframes (e.g., bake, reduce, or compress) and test at 30fps if 60fps isn’t needed. Optimize assets like wrinkle textures for real-time, keeping full quality for cinematics.
- Manage Files: Trim large (~0.8 GB/min) iPhone TrueDepth recordings in MHA before processing, archive unused takes, and name files clearly (e.g., “John_DialogueTake3”); a simple folder-audit sketch follows this list.
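To make the file-management habit above concrete, here is a small, hedged Python sketch that audits a local takes folder, reports per-file sizes, and flags names that stray from an Actor_SceneTakeN convention; the folder path and naming pattern are hypothetical examples, not an MHA requirement.

```python
# Survey a folder of exported takes: report sizes and flag badly named files.
# TAKES_DIR and the naming convention are hypothetical, not dictated by MHA.
import re
from pathlib import Path

TAKES_DIR = Path(r"D:\Captures\MetaHuman")                  # hypothetical archive folder
NAME_PATTERN = re.compile(r"^[A-Za-z]+_[A-Za-z]+Take\d+$")  # e.g. John_DialogueTake3

for take in sorted(TAKES_DIR.glob("*")):
    if not take.is_file():
        continue
    size_gb = take.stat().st_size / 1024**3
    well_named = bool(NAME_PATTERN.match(take.stem))
    note = "" if well_named else "  <-- consider renaming, e.g. John_DialogueTake3"
    print(f"{take.name:40s} {size_gb:6.2f} GB{note}")
```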
Follow these practices and MetaHuman Animator will stay believable and your workflow smooth: treat your MetaHuman like a real actor, give it a strong performance, good lighting, and attention to the small details, and the animation will excel.

MetaHuman Animator vs Faceware vs ARKit
Here is how MetaHuman Animator compares to Faceware and ARKit/Live Link Face for facial animation, and where each fits in a creator’s workflow:
- MetaHuman Animator (Unreal Engine 5): Offers high-fidelity facial animations with strong MetaHuman integration, capturing detailed offline performances using an iPhone or HMC and PC. It delivers accurate results in minutes via a 4D solve, not real-time, and is free with UE5. Best for MetaHuman characters, it requires extra effort for non-MetaHuman rigs, though animations can be retargeted. Unsuitable for live scenarios like broadcasts.
- Faceware: A versatile, hardware-agnostic paid solution, Faceware tracks facial movements from any camera to animate any rigged character, including MetaHumans. It supports real-time tracking and post-processing, offering more manual control than ARKit. Less plug-and-play than MetaHuman Animator for UE users, it’s flexible for non-UE projects or combined with MHA for live previews and final capture.
- ARKit / Live Link Face (Unreal): Uses iPhone’s TrueDepth sensor for real-time tracking of over 50 blendshapes via Live Link Face. Easy to set up, it’s ideal for live performances, previews, or virtual production, but its quality is lower than MHA, missing subtle details like complex eye movements. Often paired with MHA for initial live animation followed by refinement.
- Workflow Roles: ARKit offers instant feedback, Faceware suits non-MetaHumans or cleanup, and MHA provides top-quality, quick results for MetaHumans. MHA is evolving with audio-driven updates and future real-time support, enhancing its accessibility and innovation.
In summary: ARKit (Live Link Face) provides real-time simplicity with lower fidelity; Faceware delivers professional-grade versatility and control at a higher cost; MetaHuman Animator offers top fidelity for MetaHumans with an easy workflow, requiring short processing time and limited to MetaHuman rigs. Project needs dictate the choice, or a mix can be used. For Unreal Engine digital human projects, MetaHuman Animator is central, with ARKit aiding live previews and Faceware suiting specific cases.
Tutorials and Official Documentation
To master MetaHuman Animator, consult these key resources from Epic Games and the community:
- Official MetaHuman Animator Documentation (Epic Games): The Epic Developer Community provides detailed guides on Facial Performance Capture, Hardware Requirements, and Audio-Driven Animation, offering essential technical steps.
- Epic’s “How to Use MetaHuman Animator” Tutorial (Video): A step-by-step video from Unreal Engine’s education team demonstrates capturing an actor’s performance with an iPhone and applying it to a MetaHuman in UE5, showing workflow and settings.
- Unreal Engine Blog – MetaHuman Animator Announcement: The June 15, 2023, post “Delivering high-quality facial animation in minutes…” gives an overview of capabilities and behind-the-scenes details, ideal for explaining the tool.
- Community Tutorials and Forums: The Epic Developer Community Forums feature user and staff tutorials like “How to Use MetaHuman Animator,” while YouTube creators (JSFILMZ, Matt Workman) offer video guides on applications and troubleshooting; check that they match your engine version.
- MetaHuman Sample Projects: Epic’s Unreal Marketplace sample and GDC demos (e.g., Hellblade II showcase) provide practical examples of MetaHuman Animator’s capabilities for study and application.
These resources help you troubleshoot issues such as character version mismatches or camera calibration and learn advanced techniques like editing control rigs or applying MHA animations in Sequencer. Epic’s official documentation provides accurate details, while community content offers practical insights from experienced MetaHuman Animator users.

Frequently Asked Questions (FAQ)
- Is MetaHuman Animator a separate product or included with Unreal Engine 5?
MetaHuman Animator is a free feature of Unreal Engine’s MetaHuman plugin (UE 5.2 and later), requiring no additional software beyond UE and hardware such as an iPhone. Enable the plugin in UE5 to use it.
- What are the requirements to use MetaHuman Animator (hardware and software)?
MetaHuman Animator requires Unreal Engine 5.2+ on a Windows 10/11 PC with a powerful CPU/GPU (e.g., RTX 3080, 64 GB RAM). It uses an iPhone/iPad with a TrueDepth camera (iPhone X or later with Face ID) or a stereo head-mounted camera, plus the free Live Link Face iOS app for capture and streaming. Android phones and plain 2D cameras lack the required depth data.
- Do I have to use an iPhone, or can I use an Android or a regular camera for facial capture?
MetaHuman Animator requires a TrueDepth sensor, making an iPhone or iPad the simplest option. Android devices lack equivalent front-facing depth sensors and have no official app like Live Link Face. A 2D camera alone does not suffice, but a professional dual-camera stereo rig can work with a custom workflow. As of 2025, there is no native Android support, so Apple devices are recommended.
- Does MetaHuman Animator work in real time (live)?
MetaHuman Animator (MHA) is not real-time; it records a take and then processes the animation quickly. A live ARKit preview via Live Link Face is visible during capture, but the final high-fidelity animation is post-processed. For live puppeteering, use Live Link Face itself; many creators use Live Link for previews and MHA for the polished output.
- How is MetaHuman Animator different from using Live Link Face (ARKit) or Faceware?
Live Link Face (ARKit) provides real-time animation with roughly 50 blendshapes, favoring speed over fidelity. MHA uses video and depth data for detailed animation (e.g., wrinkles, tongue) after a brief processing step. Faceware, a third-party tool, supports any rigged character in real time or offline but needs licensing and setup. MHA, free in Unreal, excels for MetaHumans; ARKit and Faceware suit real-time or non-MetaHuman use.
- Can I use MetaHuman Animator on non-MetaHuman characters or custom rigs (for example, a character I made in Blender)?
No, MHA requires MetaHuman characters with a MetaHuman Identity and rig. Custom characters need conversion via Mesh to MetaHuman for MHA, or retargeting of the resulting animation in Unreal, which may be imperfect due to rig differences. MHA is best for MetaHumans; other characters require alternatives.
- Does MetaHuman Animator also capture body or head movement, or just facial animation?
MHA captures facial animation and neck-up motion, including head rotation, which is applied to the head bone. Full-body motion needs separate methods such as motion capture or Unreal’s Control Rig. Body animation drives the head and neck while MHA handles the facial rig (jaw, brows, and so on), so actions like nodding require coordinating the two.
- What are some best practices to get the most realistic results from MetaHuman Animator?
Record high-quality footage with stable lighting and a head-mounted camera so the actor’s face stays framed. Direct natural, slightly exaggerated performances (expressions that are too faint may not read), calibrate a MetaHuman Identity, and keep facial and body movement in sync. Add subtle eye movements, tweak stiff areas with the control rig, and make sure the MetaHuman uses MHA’s updated rig for the best realism.
- Can I use pre-recorded videos or audio with MetaHuman Animator, or does it have to be a live capture?
Yes, pre-recorded media works. MHA processes facial footage (e.g., a .mov recorded on the iPhone) in Unreal without a live capture, provided the depth data was recorded alongside it, for example with Live Link Face’s recorder. It also supports audio-driven animation and on-set recordings for later processing. Older 2D-only videos would require external tools to generate depth or ARKit data.
- How do I export or use the animations created by MetaHuman Animator?
In Unreal, MetaHuman Animator animations export as a Level Sequence for facial animation in Sequencer (cinematics and rendering) or as an animation asset tied to the MetaHuman’s facial skeletal mesh (Animation Blueprints and gameplay). For tools like Maya or Blender, export the animated skeletal mesh as FBX, though rig controls become baked bone animation. Many keep the animation in Unreal for cinematics or real-time use; baking and exporting works for other platforms. Test exports for frame rate or curve adjustments, since Unreal is optimized for in-engine cinematics, games, and VR. A hedged export-scripting sketch follows this FAQ.
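For pipelines that need the animation outside Unreal, the FBX export mentioned above can also be scripted from the editor. This is a hedged sketch, assuming the stock AssetExportTask and FBX exporter/option classes exposed to editor Python in recent UE5 builds; the asset path and output file are placeholders, and you should verify the class names against your engine version.

```python
# Hedged sketch: export a processed facial AnimSequence to FBX from the editor's
# Python console. Paths are hypothetical; verify the exporter classes exist in
# your engine version before relying on this in a pipeline.
import unreal

ANIM_PATH = "/Game/MetaHumans/Ada/Face_Performance01"   # hypothetical processed take
OUTPUT_FBX = "D:/Exports/Face_Performance01.fbx"        # hypothetical destination

task = unreal.AssetExportTask()
task.object = unreal.load_asset(ANIM_PATH)        # the AnimSequence created by MHA
task.filename = OUTPUT_FBX
task.exporter = unreal.AnimSequenceExporterFBX()  # stock FBX animation exporter
task.options = unreal.FbxExportOption()           # default FBX export settings
task.automated = True                             # suppress dialogs
task.replace_identical = True

if unreal.Exporter.run_asset_export_task(task):
    unreal.log(f"Exported {ANIM_PATH} to {OUTPUT_FBX}")
else:
    unreal.log_error("FBX export failed; check the Output Log for details.")
```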

Conclusion
MetaHuman Animator uses an iPhone to capture an actor’s facial nuances and apply them to Unreal Engine 5 MetaHumans in minutes, delivering animation of big-budget studio quality through 4D facial solving and machine learning. This user-friendly tool supports indie and professional game developers, filmmakers, and enthusiasts, producing believable character animation. Results still depend on your creative vision, capture quality, lighting, and hardware setup, which leaves plenty of room for creative innovation.
Sources and Citations
- Epic Games – MetaHuman Animator Announcement Blog: “Delivering high-quality facial animation in minutes, MetaHuman Animator is now available!” (June 15, 2023) – Unreal Engine Blog (unrealengine.com)
- Epic Games Documentation – MetaHuman Versioning Guide: Introduction of MetaHuman Animator in UE5.2 and related MetaHuman asset updates (dev.epicgames.com)
- Epic Games Documentation – Facial Performance Capture Guidelines: Official guidelines for recording footage for MetaHuman Animator (hardware, lighting, etc.) (dev.epicgames.com)
- Epic Games Documentation – Hardware and Device Requirements: System requirements for the MetaHuman plugin and MetaHuman Animator (CPU, GPU, RAM, OS) (dev.epicgames.com)
- Epic Games Documentation – Audio-Driven Animation for MetaHuman: Explanation of generating facial animation from audio in MetaHuman Animator (UE5.5) (dev.epicgames.com)
- Epic Developer Community Forum – Tutorial: How to Use MetaHuman Animator in Unreal Engine: Discussion and tutorial video link (Epic official tutorial) (forums.unrealengine.com)
- Medium Article by Deaconline – “Metahumans 101. UE5.3, Live Link Face, and MetaHuman Animator”: Community write-up with step-by-step usage and tips (medium.com)
- Faceware Tech Blog – “Faceware Studio and Epic’s MetaHumans” (Feb 2022): Insights on Faceware vs ARKit and using MetaHumans (facewaretech.com)
- Unreal Engine Documentation – Animating with Live Link: Details on using ARKit (Live Link Face app) for real-time facial animation in UE5 (dev.epicgames.com)
- Unreal Engine Documentation – MetaHuman for Unreal Engine: Overview of the MetaHuman plugin, including creating MetaHumans from footage and privacy info (dev.epicgames.com)