Metahuman Facial Motion with Faceware: Complete Workflow for Realistic Face Animation in Unreal Engine 5

MetaHuman characters in Unreal Engine 5 (UE5) can deliver stunningly realistic facial animations when driven by Faceware – a leading facial motion capture solution. In this comprehensive guide, we’ll explore the MetaHuman facial motion workflow with Faceware from start to finish. We’ll cover what Faceware is and how it integrates with MetaHumans, step-by-step setup for real-time animation in UE5, offline retargeting for post-production, best practices for accuracy, and more. Whether you’re a beginner or an advanced user, this article will help you achieve believable facial performances on your MetaHuman using Faceware’s technology.

What is the Faceware workflow for Metahuman facial animation?

Faceware’s workflow animates MetaHuman faces by tracking an actor’s facial performance, supporting real-time (live puppeteering) or offline (post-production) modes. Faceware software analyzes facial movements from camera feeds or videos, translating them into animation data to drive the MetaHuman’s facial rig. Data can stream live into Unreal Engine 5 (UE5) via a plugin or be recorded for offline retargeting.

  • Faceware Studio (Live): Real-time tracking app using machine learning to capture facial motion from any camera and stream to UE5.
  • Faceware Analyzer & Retargeter (Offline): Analyzer tracks facial movements from recorded video; Retargeter applies them to the MetaHuman rig, often via Maya, for editing and export.
  • Unreal Engine Integration: The Faceware Live Link plugin receives tracking data (live or imported) to drive the MetaHuman’s facial rig in UE5, recordable in Sequencer for refinement.

In practice, an actor performs the expressions; Faceware Studio tracks them live for instant animation in UE5, or Analyzer tracks them offline, after which the data is retargeted in Maya and imported into Unreal. Either route yields realistic facial animation.


Which Faceware products are compatible with Metahuman characters?

  • Faceware Studio: Real-time facial tracking software streaming animation data to UE5 via the Live Link plugin for MetaHumans.
  • Faceware Analyzer & Retargeter: Offline tools; Analyzer tracks video facial movements, Retargeter maps data to MetaHuman rigs, often using Maya, then imports to UE5.
  • Faceware Live Link Plugin: Free plugin bridging Faceware Studio to UE5, enabling MetaHumans to receive live tracking data.
  • Glassbox Live Client: Alternative plugin connecting Faceware Studio to UE5, compatible with MetaHumans, offering similar functionality.
  • Faceware Hardware (Optional): Head-mounted cameras (e.g., Mark IV HMC) enhance input quality for MetaHuman animations, though standard webcams suffice.

Faceware Studio with Live Link is ideal for real-time; Analyzer/Retargeter suit offline polishing.

Can I use Faceware Studio to drive real-time Metahuman facial motion?

Yes, Faceware Studio drives real-time MetaHuman facial animation by capturing an actor’s performance via a camera and streaming it to UE5 through the Faceware Live Link plugin. Using neural network-based markerless tracking, Studio generates facial animation parameters (e.g., jaw open, brows raised) sent to UE5, where the MetaHuman mirrors expressions instantly. This is ideal for virtual production, live events, or interactive applications, offering responsive facial capture for professional needs.

How do I set up Faceware for use with Metahuman in Unreal Engine 5?

  • Install Faceware Studio: Download and install Faceware Studio, connect a camera, and ensure it’s recognized.
  • Get Faceware Live Link Plugin: Install the free plugin from the Unreal Marketplace, compatible with your UE5 version.
  • Enable Plugin in UE5: Activate the plugin in your project and restart the editor.
  • Add MetaHuman: Import your MetaHuman via Quixel Bridge and place it in the level or edit its blueprint.
  • Apply Faceware Animation Blueprint: Assign the plugin’s ABP_Metahuman_Faceware_LiveLink_Face to the MetaHuman’s face mesh to connect it to Faceware data (a scripting sketch for this step follows these setup steps).
  • Launch Faceware Studio and Calibrate: Select camera input, calibrate a neutral pose for accurate tracking.
  • Start Live Link Streaming: In Faceware Studio, stream to UE5 (auto-detected on localhost or via IP).
  • Configure Live Link in UE5: In the Live Link window, add Faceware as a source, ensuring an active stream.
  • Assign Live Link Source to MetaHuman: Set the MetaHuman’s blueprint to use the Faceware Live Link subject, typically via the provided AnimBP.
  • Test Setup: Play or simulate in UE5; the MetaHuman should mirror the performer’s expressions. Fine-tune camera framing or lighting as needed.

Subsequent sessions require only launching Studio, Unreal, and streaming.
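
If you prefer to script part of this setup, the animation-blueprint assignment from the steps above can also be done with the Unreal Editor's Python scripting. This is a minimal sketch, assuming the Python Editor Script Plugin is enabled; the AnimBP asset path and the "Face" component name are illustrative assumptions, so adjust them to your project and plugin version.

```python
# Minimal sketch (assumptions noted): assign a Faceware-style face AnimBP to the
# selected MetaHuman's face component via Unreal Editor Python.
import unreal

# Hypothetical asset path -- the real path depends on where the Faceware plugin
# content lives in your project.
ANIM_BP_PATH = "/Game/Faceware/ABP_Metahuman_Faceware_LiveLink_Face"

anim_class = unreal.EditorAssetLibrary.load_blueprint_class(ANIM_BP_PATH)
actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)

for actor in actor_subsystem.get_selected_level_actors():
    for component in actor.get_components_by_class(unreal.SkeletalMeshComponent):
        # MetaHuman blueprints typically name the facial mesh component "Face".
        if component.get_name().lower().startswith("face"):
            component.set_anim_instance_class(anim_class)
            unreal.log(f"Assigned Faceware AnimBP to {actor.get_name()}/{component.get_name()}")
```

Run it from the editor's Python console with the MetaHuman actor selected; assigning the blueprint manually in the Details panel achieves the same result.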


How do I connect Faceware Live to Unreal Engine for Metahuman animation?

  • Start Faceware Studio: Load a profile, ensure tracking detects facial movements, and calibrate a neutral pose.
  • Enable Streaming: In Studio’s Streaming panel, select Unreal Live Link and click “Stream to Client” to broadcast data (localhost or specified IP).
  • Add Faceware Source in UE5: In the Live Link window, add Faceware as a source; it should auto-detect if streaming, showing an active stream.
  • Verify Data Flow: Check Live Link Subject details for updating animation curves as you move your face.
  • Bind to MetaHuman: Ensure the MetaHuman’s AnimBP (e.g., Faceware’s provided blueprint) binds to the Faceware subject, matching the stream name.
  • Play or Preview: The MetaHuman’s face should move with the performer’s expressions; troubleshoot subject assignment or AnimBP if static.

The plugin maps Faceware’s output to MetaHuman controls for seamless real-time animation.

How do I record and apply Faceware facial animation to Metahumans?

Once you have Faceware driving your MetaHuman in Unreal, you’ll likely want to record the performance so you can play it back or edit it. There are a couple of ways to record and apply the facial animation:

  1. Using Unreal’s Take Recorder (for Live Performances): Unreal Engine’s Take Recorder is the standard way to capture Live Link facial animation in real time. Here’s how:
    • In UE5, open the Take Recorder (Window > Take Recorder). Add a Source for your MetaHuman performer. Typically, you’d add the MetaHuman’s Actor from the scene as the source to record. Make sure to include its animation data (especially the face AnimBP or the Live Link data).
    • Hit the Record button in Take Recorder, then perform your scene (Faceware Studio should be streaming the face at this time). The MetaHuman will act it out, and Take Recorder will capture the animation.
    • Stop the recording. Unreal will save the recorded facial animation as an animation asset or sequence (depending on settings). You can find it in the Content Browser (often under a Takes folder as an Animation Sequence or Level Sequence with embedded animation tracks).
    • This recorded data can now be applied to the MetaHuman. For example, if it’s an Animation Sequence asset tied to the MetaHuman’s skeleton, you can drag it onto the MetaHuman to replay it. If it’s a Level Sequence, you can scrub the timeline to see the animation. Essentially, you’ve captured the Faceware-driven performance and can reuse it.
    • Using Take Recorder, you can also simultaneously record audio (from a microphone) and body motion if needed, syncing them with the face (more on audio sync later). Epic’s official course demonstrates how to record and manage facial performance using Take Recorder, which is exactly this process.
  2. Recording in Faceware Studio (offline) and Importing: If you are not doing it live, you could instead record the facial performance within Faceware Studio or simply save the video of the performance:
    • Faceware Studio itself can record tracking data, but typically you’d use it to stream live. However, you can input a pre-recorded video and let it play through to generate animation.
    • A more manual approach: record the actor’s video (with audio) first. Then either use Faceware Studio in “playback” mode or use Faceware Analyzer to process that video. Once you have the tracked facial animation, you would then bring it into Unreal.
    • To import into Unreal, one route is via Faceware Retargeter in Maya: apply the tracking to a MetaHuman face in Maya, export an FBX of the animated MetaHuman, and import that FBX into UE5. The imported animation can be assigned to your MetaHuman character.
    • Alternatively, Faceware Studio might be able to export an FBX of the animation (with a generic rig), but you’d still need to retarget it to MetaHuman in UE. The most straightforward offline method remains using Maya + Retargeter as shown in Faceware/Epic tutorials.
  3. Applying the Animation to MetaHumans: Once you have a recorded animation (either from Take Recorder or imported FBX):
    • If it’s an Animation Sequence asset on the MetaHuman skeleton, you can apply it by setting your MetaHuman’s AnimClass to that sequence or by placing the MetaHuman in a Level Sequence and assigning that animation to play.
    • If it’s a Level Sequence track (from Take Recorder), you can open that sequence (it will have a track for the facial animation, often under the MetaHuman’s Face component or Control Rig). You can then play the sequence to see the animation.
    • The animation can be edited or refined (see next question on editing in Sequencer). The key is that it’s now baked into keyframes which you can manipulate or simply use as is.

In summary, recording Faceware facial animation is typically done via Take Recorder in UE5 for a live workflow. It captures the MetaHuman’s facial motion which you can then treat like any other animation asset. For offline workflows, record and track the face outside, then import the solved animation. Either way, the result is that your MetaHuman has an animation you can play back, scrub, and adjust, just like a recorded mocap take or hand-keyed animation.
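
As a concrete example of the Take Recorder route, the snippet below is a small editor Python sketch that lists Animation Sequences saved under Take Recorder's default output folder and previews the newest one on a selected MetaHuman's face mesh. The folder path and the "Face" component name are assumptions; adjust them to your project.

```python
# Sketch: find Take Recorder output and preview a recorded facial take.
import unreal

TAKES_ROOT = "/Game/Cinematics/Takes"  # Take Recorder's default save location (may differ)

anim_sequences = []
for asset_path in unreal.EditorAssetLibrary.list_assets(TAKES_ROOT, recursive=True):
    asset = unreal.load_asset(asset_path)
    if isinstance(asset, unreal.AnimSequence):
        anim_sequences.append(asset)

unreal.log(f"Found {len(anim_sequences)} recorded animation sequence(s) under {TAKES_ROOT}")

if anim_sequences:
    latest_take = anim_sequences[-1]
    actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
    for actor in actor_subsystem.get_selected_level_actors():
        for component in actor.get_components_by_class(unreal.SkeletalMeshComponent):
            if component.get_name().lower().startswith("face"):
                # Quick preview; for editing, add the sequence to a Level Sequence instead.
                component.play_animation(latest_take, False)
```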

What are the steps to retarget Faceware data to a Metahuman rig?

  • Record Video: Obtain high-quality facial performance video with good lighting and framing.
  • Track with Analyzer: Load video in Faceware Analyzer, track movements, and output a .face file with animation curves.
  • Prepare MetaHuman in Maya: Import the MetaHuman’s rig into Maya using Epic’s compatibility plug-in or Quixel Bridge export.
  • Use Faceware Retargeter: In Maya, load the .face file in Retargeter, map Faceware’s facial channels to MetaHuman controls (auto or manual mapping).
  • Apply Data: Drive the MetaHuman rig with the tracked performance in Maya’s viewport.
  • Refine Animation: Adjust curves or tweak mappings in Retargeter/Maya to smooth noise or enhance expressions.
  • Export Animation: Export an FBX with baked animation, including blendshapes, from Maya (see the Maya sketch after this list).
  • Import into UE5: Import the FBX to the MetaHuman’s skeleton, creating an Animation Sequence.
  • Apply in UE5: Assign the animation to the MetaHuman in Sequencer and review for timing or minor adjustments.

This offline process via Maya ensures high-fidelity, editable animations.
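
To give a sense of the Maya side, here is a hedged Maya Python sketch of the bake-and-export step. The group name, frame range, and output path are placeholders; substitute the controls or joints that Faceware Retargeter actually drives in your scene.

```python
# Maya sketch: bake the Retargeter-driven facial animation and export an FBX for UE5.
import maya.cmds as cmds
import maya.mel as mel

FACE_NODES = ["FacialControls_grp"]   # hypothetical group holding the driven controls/joints
START, END = 1, 600                   # frame range of the performance
FBX_PATH = "C:/mocap/exports/metahuman_face_take01.fbx"

# Bake the retargeted animation down to plain keyframes on the face controls.
cmds.bakeResults(FACE_NODES, hierarchy="below", simulation=True, time=(START, END))

# Select the baked hierarchy and export with baked animation enabled.
cmds.select(FACE_NODES, replace=True, hierarchy=True)
mel.eval('FBXResetExport')
mel.eval('FBXExportBakeComplexAnimation -v true')
mel.eval('FBXExportBakeComplexStart -v {}'.format(START))
mel.eval('FBXExportBakeComplexEnd -v {}'.format(END))
mel.eval('FBXExport -f "{}" -s'.format(FBX_PATH))
```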

Can Faceware be used for both live streaming and offline animation?

Absolutely – Faceware supports both live streaming and offline animation workflows for MetaHumans, making it a versatile solution:

  • Live Streaming (Real-Time): Faceware Studio and the Faceware Live Link plugin enable real-time streaming of facial motion data into Unreal Engine. This setup is ideal for live performances, interactive applications, or on-set visualization, allowing immediate results. The MetaHuman’s face animates in sync with the actor’s live performance, with the option to record on the fly. This real-time pipeline ensures quick iterations and dynamic interactions.
  • Offline Animation (Recorded/Post-Process): Faceware Analyzer and Retargeter, or Faceware Studio with recorded video, allow processing of pre-recorded footage for detailed animation. Footage can be filmed on location, tracked later, and refined through multiple passes for precision. The animation is retargeted to the MetaHuman rig, often in Maya, before importing into Unreal for rendering or tweaking. This method suits cinematic sequences requiring meticulous control.
  • Hybrid Approach: Workflows can combine live and offline methods, such as using live puppeteering for initial performance blocking and refining with offline processing for quality. Faceware Studio can load video files to simulate live streaming, bridging offline and live workflows. This flexibility supports varied production needs. Both real-time and offline approaches are viable, with training courses available for each.

Faceware’s dual capability allows creators to choose between speed for interactive contexts or detailed control for polished cinematics, ensuring versatility in MetaHuman animation workflows.


What settings should I use for the most accurate Metahuman facial motion?

Achieving the most accurate and realistic facial motion with Faceware and MetaHumans requires careful attention to your capture setup and software settings. Here are some best practices and recommended settings:

  • Camera Resolution: Use at least a 720p camera (ideally 1080p) to capture detailed facial features, especially around the eyes and mouth. Higher resolution improves tracking accuracy, but there is little benefit beyond 1080p, so avoid excessive resolution that only adds processing overhead. This ensures clear data for Faceware’s analysis.
  • Frame Rate: Capture at 60 fps for optimal accuracy, reducing motion blur and providing more data for fast movements. Standard rates like 24 or 30 fps are supported, but 60 fps is ideal. Higher frame rates offer diminishing returns and may complicate synchronization. Ensure the frame rate matches Faceware Studio settings.
  • Lighting: Use even, diffused lighting to avoid shadows or hotspots that can disrupt tracking. Ensure consistent illumination, especially with head-mounted cameras, to prevent glare on skin. Proper lighting enhances tracking reliability. Adjust to maintain clarity across the face.
  • Camera Position & Framing: Position the camera at eye level or slightly below, filling the frame with the face from forehead to chin. Ensure all facial features remain visible, even during wide mouth movements. A centered, stable camera angle improves tracking accuracy. Slight lower angles can enhance mouth capture.
  • Stability: Minimize camera shake or wobble, especially with head-mounted setups, for stable tracking. Moderate actor movements to prevent tracking slips during fast head turns. Select the correct “Stationary vs Headcam” setting in Faceware Studio. Switching modes may improve results in some cases.
  • Faceware Studio Settings: Choose the appropriate model (Stationary or Headcam) and match the video frame rate to avoid interpolation issues. Calibrate with a relaxed neutral pose after exaggerated expressions for accuracy. Calibration establishes a reliable baseline. Adjust settings to align with capture conditions.
  • Animation Tuning: Use the Animation Tuning panel to adjust facial shape strengths, like jaw movement, for precise output. Apply light smoothing to reduce jitter without losing nuance. Fine-tune sparingly to match the actor’s unique facial movements. Monitor real-time tracked shapes for accuracy.
  • MetaHuman Setup: Ensure the MetaHuman’s default pose is neutral to avoid rig conflicts. Use the standard face rig for reliable results, as modifications may reduce accuracy. Verify blueprint settings align with Faceware input. Maintain consistency to ensure seamless animation integration.
  • Consistent Performance Environment: Record in a controlled environment with a consistent backdrop to avoid background interference. Prevent facial obstructions like hair or heavy makeup that could disrupt tracking. Avoid glasses if possible to reduce reflections. Maintain a clean, natural face for optimal results.
  • Testing and Iteration: Conduct test recordings with varied expressions to assess tracking quality. Adjust lighting or camera settings if expressions are inaccurate. Iterative testing refines setup for better results. Short tests help identify and fix issues early.

High-quality input through optimized camera, lighting, and framing settings ensures accurate MetaHuman facial motion, enhancing realism in performance capture.


How do I sync audio with Faceware-driven Metahuman facial animation?

Synchronizing audio with your Faceware-driven facial animation is important, especially if your MetaHuman is speaking or performing to dialogue. Here are some tips and methods to ensure audio stays in sync:

  • Record Audio and Face Together (Live Workflow): Use Take Recorder in Unreal to capture audio and facial animation simultaneously. Add an audio source, like a microphone, to record dialogue during performance. The captured take ensures inherent sync between lip movements and audio. Playback maintains alignment as recorded.
  • Manual Audio Sync (Offline or Separate Recording): Import separately recorded audio into Unreal’s Level Sequencer and align it with the facial animation track. Slide the audio to match lip movements, using visual cues like claps for precision. Manual adjustments ensure accurate sync. Verify alignment through playback.
  • Timecode (Professional Sets): Use timecode to sync audio and Faceware video in professional setups. Devices like the Mark IV headcam support LTC timecode for alignment. Timecode ensures precise matching in post-production or Unreal imports. This method suits complex shoots.
  • Lip-Sync Adjustments: Adjust for slight sync discrepancies by nudging audio or animation a frame. Humans expect lips to move slightly before sound, so fine-tune accordingly. Account for project frame rate differences if needed. Small tweaks enhance perceived sync.
  • Viseme Timing: Encourage clear actor articulation to ensure Faceware’s video-based tracking captures accurate lip shapes. Exaggerated mouth movements improve viseme detection. Avoid mumbled speech to maintain sync with audio phonemes. Clear diction enhances tracking reliability.
  • Playback in Engine: Verify sync by playing the sequence in Unreal with V-Sync off to avoid display lag. Check for drift due to frame rate mismatches, though rare in short sequences. Use camera cuts for accurate review. Ensure consistent playback settings.

Capturing audio and facial animation together ensures inherent sync, while manual alignment or timecode provides precision for separate recordings, resulting in convincing MetaHuman dialogue delivery.
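
When nudging by a frame, it helps to know what that offset means in seconds at your project's frame rate. A tiny illustrative helper:

```python
# Illustrative arithmetic: a one-frame audio nudge expressed in seconds.
def frames_to_seconds(frames: float, fps: float) -> float:
    return frames / fps

print(frames_to_seconds(1, 24.0))  # ~0.042 s per frame at 24 fps
print(frames_to_seconds(1, 60.0))  # ~0.017 s per frame at 60 fps
```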

Does Faceware support blendshape and ARKit-style control with Metahuman?

Yes, Faceware’s system ultimately drives the same kind of controls that ARKit (Apple’s face tracking) uses for MetaHumans – primarily blendshapes (morph targets) and some bone rotations. In other words, Faceware fully supports an ARKit-style facial rig like the MetaHuman, even though the underlying tracking technology is different.

Here’s how it breaks down:

  • MetaHuman Facial Rig: MetaHumans use a facial rig aligned with ARKit’s 52 blendshape standard, including expressions like “eyeBlinkLeft” and “jawOpen.” This rig supports Epic’s Live Link Face for direct ARKit input. Faceware integrates seamlessly with this structure. The rig includes additional controls for enhanced expressiveness.
  • Faceware’s Output: Faceware tracks facial movements and maps them to MetaHuman’s blendshapes via the Live Link plugin or animation blueprint. Outputs like “Smile Left” translate to corresponding MetaHuman controls. This ensures compatibility with ARKit-style rigs. Adjustments allow fine-tuning for specific expressions.
  • ARKit vs Faceware Differences: ARKit relies on depth sensors, while Faceware uses 2D video and machine learning. Faceware offers more control over animation curves, unlike ARKit’s black-box approach. Both drive similar facial curves for MetaHumans. Faceware’s flexibility supports custom rig adjustments.
  • Blendshapes and Bones: Faceware drives MetaHuman’s blendshapes and bone-based controls, like jaw and eye rotations. The plugin handles both seamlessly, ensuring full facial animation. Eye and jaw movements integrate naturally. This comprehensive support enhances animation realism.
  • Other Rigs: Faceware can drive custom rigs beyond ARKit standards, mapping outputs to unique blendshapes or joints. For MetaHumans, the plugin is optimized for the standard rig. This versatility supports varied project needs. MetaHuman integration remains straightforward and effective.

Faceware’s integration ensures effective control of MetaHuman’s ARKit-style blendshapes and bones, offering flexibility and precision comparable to or exceeding ARKit, with seamless setup for animators.
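
To make the mapping idea concrete, the sketch below is a purely illustrative Python dictionary that remaps Faceware-style channel values onto ARKit/MetaHuman curve names with a per-curve gain. The channel names on the left are hypothetical examples, not the plugin's actual identifiers; in practice this mapping lives inside the Faceware Live Link animation blueprint.

```python
# Illustrative only: hypothetical Faceware-style channels mapped to ARKit/MetaHuman
# curve names, with a gain per curve for quick tuning. Not the plugin's real data.
FACEWARE_TO_METAHUMAN = {
    "jaw_open":       ("jawOpen",         1.00),
    "smile_left":     ("mouthSmileLeft",  0.90),
    "smile_right":    ("mouthSmileRight", 0.90),
    "brows_up_left":  ("browOuterUpLeft", 1.10),
    "eye_blink_left": ("eyeBlinkLeft",    1.00),
}

def remap(faceware_values: dict) -> dict:
    """Convert tracked channel values (0-1) into MetaHuman curve values."""
    out = {}
    for channel, value in faceware_values.items():
        if channel in FACEWARE_TO_METAHUMAN:
            curve, gain = FACEWARE_TO_METAHUMAN[channel]
            out[curve] = max(0.0, min(1.0, value * gain))  # clamp to the 0-1 curve range
    return out

print(remap({"jaw_open": 0.6, "smile_left": 0.8}))
# {'jawOpen': 0.6, 'mouthSmileLeft': 0.72}
```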

What are the best practices for facial rigging when using Faceware with Metahuman?

When using Faceware with MetaHumans, the “facial rigging” is largely taken care of by the MetaHuman rig itself, which is highly advanced. However, there are a few best practices to consider to ensure the rig works optimally with Faceware’s input, especially if you customize anything or use your own characters:

  • Use the Default MetaHuman Face Rig: Stick to Epic’s standard MetaHuman rig for compatibility with Faceware’s Live Link plugin. The rig includes 52 ARKit blendshapes for comprehensive expression control. Modifying it may disrupt expected control structures. This ensures seamless animation integration.
  • Rig Mapping (if Custom Rig): For custom rigs, create a detailed facial rig with blendshapes mirroring ARKit’s range. Map Faceware outputs to these controls using Faceware Retargeter in Maya. Ensure coverage for all major expressions. This maintains animation fidelity across rigs.
  • Neutral Pose Alignment: Align the rig’s neutral pose with the actor’s relaxed expression during Faceware calibration. MetaHuman’s default is a relaxed face, avoiding mismatches like sculpted smiles. A true neutral enhances tracking accuracy. Calibration should reflect natural rest states.
  • Blendshape Ranges: Verify MetaHuman’s blendshape ranges accommodate extreme expressions, like wide jaw openings. Adjust limits if Faceware input exceeds rig capabilities, though rare with MetaHumans. Ensure sufficient range for dynamic performances. This prevents animation clipping or unnatural limits.
  • Facial Hierarchy: Maintain simple control hierarchies in the MetaHuman rig to avoid retargeting issues. The rig’s blendshapes and bone transforms (jaw, eyes) are straightforward. Ensure clear mappings for Faceware inputs. Avoid complex dependencies that could confuse animation.
  • Testing with Key Expressions: Test the rig with Faceware on key expressions like blinks and smiles to identify mapping issues. Fix missing or weak blendshapes promptly. Early testing ensures robust performance. Adjust rig or mappings based on results.
  • Don’t Double-Drive: Prevent conflicts by ensuring Faceware is the sole driver during performance capture. Avoid simultaneous Control Rig inputs that could override Faceware data. Use additive tracks for blending if needed. This maintains animation consistency.
  • Use Correct Units and Scales: Maintain MetaHuman’s centimeter scale when exporting to Maya for Retargeter. Mismatched scales can affect bone-driven animations like head rotations. Consistent units ensure accurate retargeting. Verify scale settings in workflows.
  • Facial Rig Clean-up: Disable unnecessary constraints or controllers during Faceware retargeting to avoid conflicts. For MetaHumans, turn off the Face Control Board when using live input. Ensure a clean rig setup. This streamlines animation application.

Sticking to the MetaHuman rig and ensuring proper neutral alignment and mapping simplifies Faceware integration, with minimal rigging adjustments needed for optimal results.


How do I troubleshoot tracking issues in the Faceware Metahuman workflow?

Even with a good setup, you may encounter tracking issues or suboptimal results. Here are common problems and troubleshooting tips for the Faceware + MetaHuman workflow:

  • Facial Movements Not Tracking (or Losing Track): Ensure the actor stays in frame and avoids extreme head turns or occlusions. Recalibrate the neutral pose using an exaggerated-then-relaxed expression for clarity. Tracking failures often stem from visibility issues. Recalibration can reset the tracking baseline.
  • Jittery or Noisy Animation: Apply light smoothing in Faceware Studio’s Motion Effects to reduce jitter from video noise. Ensure clean, low-grain footage with proper lighting to minimize tremors. Adjust makeup to reduce skin reflections. Filter curves in Unreal or Maya if needed.
  • Inaccurate Mouth Shapes (Lip Sync Off): Encourage exaggerated actor articulation for clear tracking. Use a slightly lower camera angle and adequate mouth lighting to capture shapes. Manual tweaks may be needed for teeth or tongue. Verify setup to enhance viseme detection.
  • One Side of Face Not Capturing Well: Center the camera to ensure both face sides are equally visible, tucking away hair or props. Boost specific shapes in Faceware’s tuning panel if needed. Ensure clear visibility of eyebrows and mouth corners. Adjust setup for symmetry.
  • Calibration Issues: Recalibrate under consistent lighting with a truly neutral pose to align tracking points. Capture a neutral frame for manual adjustments in Analyzer if issues persist. Ensure no expression during calibration. Lighting consistency is critical for accuracy.
  • Head Movement vs Body Animation Conflicts: Decide whether body mocap or Faceware drives head rotation to avoid conflicts. Use Faceware for facial expressions and body mocap for neck orientation. Adjust blueprints to isolate controls. This prevents animation jitter or overlap.
  • Live Link Connection Issues: Check firewall settings and network alignment for Faceware Studio and UE5. Restart Live Link or Studio to reconnect. Ensure unique subject names and single Studio instance. Test connectivity with a fresh MetaHuman asset.
  • MetaHuman not Responding (Animation Blueprint issues): Verify the Faceware anim blueprint is assigned to the MetaHuman’s face skeletal mesh. Ensure no conflicting Control Rig overrides. Test with a fresh MetaHuman to isolate issues. Correct blueprint settings restore functionality.
  • Solver Confusion (Lip Reflection example): Use matte makeup or adjust lighting to eliminate glossy lip reflections that confuse the solver. Avoid heavy makeup or facepaint that alters tracking. Ensure a clean, natural face. Lighting adjustments prevent misinterpretation.
  • Tracking “Drift” Over Time: Maintain consistent actor positioning to prevent drift. Recalibrate mid-take or break long takes into parts if needed. Correct drift in Analyzer by realigning tracker frames. Consistent positioning enhances long-term tracking stability.
  • Use the Community & Resources: Leverage Faceware’s forums and Discord for solutions to common issues. Tips from users like Gabriella (Feeding_Wolves) on Unreal forums cover calibration and lighting fixes. Community insights resolve many problems. Engage with discussions for support.

Most tracking issues stem from input quality and calibration, and focusing on these areas ensures reliable MetaHuman animation with Faceware.


Can I combine Faceware facial capture with body mocap in UE5?

Combining Faceware facial capture with body motion capture in Unreal Engine 5 is feasible and common for full performance capture. Faceware handles facial animations, while systems like Vicon or Xsens manage body movements, requiring careful integration to avoid conflicts, particularly with head and neck controls.

Here’s how to effectively combine them:

  • Use Separate Mocap Systems for Face and Body: Faceware captures facial animations, while body mocap systems track skeletal movements. Actors wear body tracking suits and Faceware’s head-mounted camera. Data streams remain independent. This modular approach ensures specialized capture for each component.
  • Live Link Multiple Sources: Unreal’s Live Link supports simultaneous inputs from Faceware for the face and body mocap for the skeleton. Assign each source to the respective MetaHuman component in the Live Link panel. The face uses a separate anim blueprint. This setup enables cohesive character animation.
  • Faceware LiveLink Body Blueprint: Use Faceware’s body blueprint for head and neck if relying solely on Faceware. With body mocap, apply only Faceware’s facial animation, letting body mocap control the neck. This prevents redundant head control. Adjust blueprints for compatibility.
  • Disable Dual Control of Head/Neck: Choose either body mocap or Faceware to drive head rotation to avoid conflicts. Typically, body mocap handles neck and gross head motion, with Faceware focusing on facial expressions. Configure Faceware to ignore head bones or use its head rotation for subtle movements if equipped with IMU data.
  • Merging in Sequencer (Alternative): Record face and body separately and combine in Sequencer using time-synced performances. Apply facial animation as an additive track over body animation. MetaHuman’s modular rig simplifies this process. Ensure precise alignment with timecode or audio cues.
  • Alignment of Performances: Simultaneous capture aligns performances naturally. For separate recordings, use audio or consistent actor performance to match timings. Precise alignment ensures unified animation. Reference audio guides synchronization.
  • Performance Actor Setup: Equip actors with a mocap suit and Faceware camera for live capture. Body data feeds into systems like Xsens Live Link, while Faceware data drives the face. This setup is standard in production. Ensure equipment compatibility.
  • Testing the Combo: Test with simple motions like nodding to verify smooth integration without jitter. Adjust head control settings if needed. Early tests identify conflicts. Refine setup for seamless performance.
  • Example Setup: Assign “Body” role to mocap data and “Face” role to Faceware in Live Link Subject Roles. MetaHuman’s blueprint accepts separate inputs for face and body. This structure streamlines data application. Ensure correct source connections.

Combining Faceware and body mocap in UE5 enables full MetaHuman performance capture, with careful management of head/neck interactions ensuring unified, realistic character animation.

What’s the difference between Faceware and Live Link Face for Metahuman animation?

Both Faceware and Live Link Face (ARKit) can drive MetaHuman facial animations, but they differ in hardware requirements, tracking technology, and workflow flexibility, catering to varied production needs.

Below is a feature-by-feature summary of how the two approaches differ:

  • Hardware Required:
    • Faceware (Faceware Studio): A PC-compatible camera (webcam, DSLR, or Faceware headcam). No specialized sensor; markerless video tracking. Optionally a head-mounted camera for actor freedom.
    • Live Link Face (ARKit on iPhone): An Apple iPhone or iPad with a TrueDepth front camera (a Face ID-capable device), which provides infrared depth sensing. Requires the iOS device on the same network as UE5.
  • Cost & Licensing:
    • Faceware: Proprietary software (Faceware Studio). Indie licenses are available (Faceware Studio Indie is paid but affordable; a free trial is available). There is no per-use cost for the plugin, but a software license is needed.
    • Live Link Face: Free app, available on the App Store. Unreal integration is built-in and free. The main cost is the iPhone hardware (if you already own an iPhone, the additional cost is zero).
  • Tracking Technology:
    • Faceware: Machine learning on 2D video. Faceware analyzes the video feed to infer 3D facial motion; no markers needed. It can use any video source, even existing footage, and excels with good lighting and a good camera.
    • Live Link Face: IR depth + RGB (ARKit). The device projects infrared dots to build a depth map of the face alongside an RGB image, and ARKit outputs 52 blendshape coefficients. Very consistent tracking within the sensor’s range.
  • Setup & Convenience:
    • Faceware: Requires a camera setup and running Faceware Studio on a PC; a somewhat technical setup with the plugin. More flexible – the camera can be high-end for quality, or you can use pre-recorded video. Well suited to studio environments.
    • Live Link Face: Very quick setup: install the app on the phone and connect to Unreal over WiFi. Intended for ease of use, but requires holding the phone or mounting it in front of the actor; best for relatively close-up capture (arm’s length from the face).
  • Real-time Performance:
    • Faceware: Excellent real-time performance on a capable PC, with low-latency streaming via Live Link and tuning options to balance latency against smoothing. Used in live broadcasts and virtual production; quality depends on the camera and the PC (processing HD video).
    • Live Link Face: Also low latency (ARKit is optimized for iPhone hardware); the phone streams data at up to 60 fps to UE. Slightly simpler since all processing happens on the phone, but WiFi quality can affect latency – a stable network or USB tether is recommended for consistent performance.
  • Tracking Fidelity:
    • Faceware: High fidelity, especially with a quality camera – it captures nuanced movements such as subtle brow raises and eye darts. It can struggle if video quality is poor or the head turns too far toward profile. Offers more manual control over each tracking point’s contribution.
    • Live Link Face: High fidelity within ARKit’s scope. ARKit handles standard expressions very well and is extremely robust (thanks to IR depth, it rarely loses track, even in low light), but it has fixed blendshape outputs, so some very nuanced motions may be quantized into those 52 shapes. ARKit can track tongue out, cheek puff, etc., which Faceware will only capture if visible.
  • Flexibility & Usage:
    • Faceware: Flexible – it can use any video (including offline footage from years ago, or different angles if calibrated), can track non-human faces to a degree (though MetaHuman is human-focused), and integrates with custom pipelines (Maya, MotionBuilder) and multiple engines (Unity, etc.). Faceware can be used in post or live interchangeably.
    • Live Link Face: Specific – it only works with an Apple device’s camera in real time and is not designed for offline use (you can record on the phone and play back as a workaround, but it’s not a robust offline pipeline). ARKit is tied to iPhone hardware – great if you have it, but not usable if you need to process an arbitrary video file or a high-end camera feed.
  • Combining with Body Mocap:
    • Faceware: Often used in professional mocap alongside optical or inertial body systems; the Faceware headcam can be part of a helmet that also carries mocap reflectors, so it integrates well. Data merging is left to the user (as discussed, via Live Link).
    • Live Link Face: Can also be combined with body mocap (many do this) – the actor holds or helmet-mounts an iPhone, though the phone may get in the way of some suits or helmets. Helmet mounts for iPhones now exist, and data merging is similar via Live Link.
  • Post-Processing:
    • Faceware: Output can be refined either in Faceware Studio (with tuning, or by re-processing the video) or afterward in animation software. It’s not one-click – animators often adjust curves, as with any mocap: you get a strong base, then tweak.
    • Live Link Face: ARKit is more of a black box – it gives you the curves, and you refine them in Unreal or elsewhere if needed; there is little to adjust in ARKit itself beyond some app settings for smoothing. Both ultimately need an animator’s eye for polish, but Faceware allows intervention during solving if desired.
  • Notable Limitations:
    • Faceware: Dependent on camera quality and lighting – if those are bad, tracking suffers. It is a paid product for higher-end use, and HD tracking needs a powerful machine to avoid frame drops.
    • Live Link Face: Requires specific hardware (a modern iPhone/iPad). The TrueDepth camera’s field of view is limited (you can’t move too far from the device), and multi-person capture requires multiple iPhones (one per actor).
  • When to Use Which:
    • Use Faceware if you don’t have an iPhone (or prefer not to use one), want to use high-quality cameras, need an offline solution for existing videos, or require more custom control over the facial solve. It’s favored in VFX when repurposing video references or when an actor can’t wear an iPhone on their head.
    • Use Live Link Face if you have an iPhone ready and want a quick, budget-friendly setup for real-time face capture. It’s great for indies or quick prototyping, and the quality is impressive given the minimal setup. Also note that MetaHuman Animator (a newer Unreal feature) currently works with iPhone capture.

Both systems can yield excellent results on MetaHumans, with Faceware offering greater control for high-end productions and Live Link Face providing a convenient, cost-effective option for rapid workflows.

Are there plugins or tools to automate Faceware to Metahuman retargeting?

  • Faceware Live Link Plugin: Free on Unreal Marketplace, automates real-time Faceware Studio data integration with MetaHumans using pre-configured blueprints.
  • Glassbox Live Client: Commercial alternative for real-time streaming, similar to Faceware’s plugin, used in some pipelines.
  • Unreal’s ARKit Plugin: Non-Faceware, automates iPhone-based capture, showing Unreal’s automation capabilities.
  • Faceware Retargeter (Maya): Semi-automates offline retargeting by mapping data to MetaHuman rigs, saving profiles for reuse.
  • Community Tools: Rare blueprint assets (e.g., Reddit’s mapping package) supplement official plugins for custom automation.
  • MetaHuman Animator: Non-Faceware, automates facial animation in UE5.2+ from iPhone data, a contextual alternative.
  • MotionBuilder/Unity Plugins: Enable Faceware data streaming to Unreal, supporting broader pipeline integration.

The Faceware Live Link plugin and Retargeter in Maya are primary for automation.


How do I import and clean up Faceware data in Unreal Engine?

Importing:

  • FBX Import: Export FBX from Maya’s Retargeter, import to UE5 targeting the MetaHuman skeleton, enabling morph targets for an Animation Sequence (see the import sketch after this list).
  • Take Recorder Assets: Captures create Animation Sequences or Level Sequences in Content Browser, editable in Animation Editor or Sequencer.
  • Live Link Recorded Sequence: Less common, produces Sequencer tracks with facial curves, editable similarly.
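
As a sketch of the FBX route mentioned above, the following Unreal Editor Python uses an automated import task to bring a Retargeter FBX in as an Animation Sequence on the MetaHuman face skeleton. The file path, destination folder, and skeleton asset path are assumptions to adjust for your project.

```python
# Sketch: automated FBX animation import onto the MetaHuman face skeleton.
import unreal

FBX_FILE = "C:/mocap/exports/metahuman_face_take01.fbx"
DEST_PATH = "/Game/Animations/Faceware"
# Typical MetaHuman face skeleton path; verify against your project's content.
FACE_SKELETON = "/Game/MetaHumans/Common/Face/Face_Archetype_Skeleton"

options = unreal.FbxImportUI()
options.import_mesh = False
options.import_animations = True
options.import_as_skeletal = False
options.skeleton = unreal.load_asset(FACE_SKELETON)
options.set_editor_property("mesh_type_to_import", unreal.FBXImportType.FBXIT_ANIMATION)

task = unreal.AssetImportTask()
task.filename = FBX_FILE
task.destination_path = DEST_PATH
task.options = options
task.automated = True   # suppress the import dialog
task.save = True        # save the resulting Animation Sequence asset

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```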

Cleaning Up:

  • Animation Curve Editor: Adjust keyframes in Animation Editor for bones/morph targets, smoothing jitter or scaling expressions.
  • Face Control Board: Use Sequencer’s Face Control Board to key poses, fixing expressions like lip shapes via sliders.
  • Additive Layers: Overlay tweaks in Sequencer (e.g., blink timing) non-destructively in additive mode.
  • Smoothing Filters: Apply NoiseReduction filters in Animation Editor to reduce jitter, duplicating assets to preserve originals.
  • Trimming/Timing: Adjust clip timing in Sequencer, syncing dialogue or inserting pauses.
  • Reimport: Re-export updated FBX from Maya if extensive fixes are needed, using “Replace Existing” to maintain references.
  • Context Check: Review animations in the final scene with viewport or movie render previews to assess under lighting/camera.

These tools refine Faceware data into production-ready animations in UE5.

Is Faceware facial animation good enough for cinematic-quality Metahuman characters?

Faceware delivers cinematic-quality MetaHuman animations with skilled polishing, used by studios like Scanline VFX for films and games:

  • Industry Proven: Trusted for high-end projects, meeting cinematic standards.
  • Capture Quality: Tracks subtle nuances (e.g., micro-smirks) with high-res cameras, enhanced by deep learning for natural lip-sync.
  • Comparison: Matches hand animation or marker-based mocap detail with less complexity, as seen in games like It Takes Two.
  • Cleanup Needed: Requires minor curve adjustments for hero shots, standard for mocap.
  • MetaHuman Blendshapes: Realistic deformations enhance Faceware’s output for lifelike results.
  • Testimonials: VFX artists praise Faceware-MetaHuman integration for cinematic applications.
  • Limitations: Ultra-close 4K shots may need corrective shapes for wrinkle details, manageable with polish.
  • Acting Focus: Captures emotional nuances, supporting storytelling with strong performances.

Proper capture and cleanup ensure Faceware-driven MetaHumans rival expensive mocap systems.


Can I edit Faceware-driven facial motion inside Unreal Engine’s Sequencer?

Yes, Faceware-driven facial animations can be edited in Sequencer:

  • Recorded Animation: Take Recorder creates Level Sequences with editable facial tracks (Animation or Control Rig).
  • Convert to Control Rig: Bake sequences to the MetaHuman’s facial Control Rig for intuitive keyframing.
  • Face Control Board: Adjust expressions via sliders (e.g., smiles, blinks) for quick fixes.
  • Curve Editing: Modify blendshape curves (e.g., “mouthSmile_L”) in Animation Curve Editor for precise tweaks.
  • Layering Edits: Add additive tracks for non-destructive tweaks like blink timing.
  • Splitting Gestures: Slice tracks to adjust dialogue timing or insert pauses.
  • Blending Animations: Overlap takes or hand-keyed poses for smooth transitions.
  • Previewing: Real-time viewport updates show edits under final conditions.
  • Keyframe Management: Adjust, delete, or copy keyframes for pacing or consistency.
  • Blendshapes vs. Rig: Use Face Control Board for ease or curves for precision, with Control Rig simplifying edits.

Sequencer enables efficient polishing of Faceware animations in-engine.
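
For scripted inspection before editing, this small sketch opens a recorded Level Sequence in Sequencer and lists its bindings and tracks so you can locate the facial animation or Control Rig track. The sequence path is a placeholder.

```python
# Sketch: open a recorded Level Sequence and list its bindings/tracks.
import unreal

SEQUENCE_PATH = "/Game/Cinematics/Takes/Scene_1_01"  # hypothetical recorded take

sequence = unreal.load_asset(SEQUENCE_PATH)
unreal.LevelSequenceEditorBlueprintLibrary.open_level_sequence(sequence)

for binding in sequence.get_bindings():
    unreal.log(f"Binding: {binding.get_display_name()}")
    for track in binding.get_tracks():
        # Track names such as the face's animation or Control Rig track show up here.
        unreal.log(f"  Track: {track.get_display_name()} ({track.get_class().get_name()})")
```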

Where can I find tutorials or training on the Faceware and Metahuman workflow?

  • Unreal Online Learning: Free course, “MetaHuman Workflows with Faceware Studio,” covers setup, Live Link, and animation capture.
  • Faceware Analyzer/Retargeter Course: Epic’s course on offline pipeline, tracking video in Analyzer, retargeting in Maya, and UE5 import.
  • Faceware YouTube: Tutorials and webinars, including “MetaHuman Facial Animation” with pro tips on Studio setup and calibration.
  • Unreal Documentation: Guides on Live Link and Faceware plugin setup, plus Faceware’s Knowledge Base articles.
  • Community Tutorials: YouTube videos like “Using Faceware to animate MetaHumans” (15 mins) and Pixel Prof’s setup tips.
  • Faceware Knowledge Base: Articles and PDF manuals on Live Link setup and configuration.
  • Forums/Discord: Unreal forums and Faceware’s Discord MetaHuman channel for community Q&A.
  • Sample Projects: Epic’s sample projects with pre-configured blueprints for hands-on learning.
  • Blender Tutorials: Niche guides on exporting Faceware animations to MetaHumans via Blender.

Official and community resources provide comprehensive training for the Faceware-MetaHuman workflow.


FAQ Questions and Answers

  1. Do I need to wear markers or a special suit for Faceware facial capture?
    No – Faceware’s technology is markerless. You don’t need any dots on your face. You simply need a camera capturing your face. Faceware analyzes the video feed directly with computer vision and machine learning. (For body capture, a separate mocap suit would be needed, but for face, just a camera and Faceware software do the job.)
  2. What kind of camera should I use with Faceware for best results?
    Use the highest quality camera you can. Ideally a 1080p webcam or better (DSLR or mirrorless camera, or Faceware’s headcam if available) that can shoot at 60fps. The camera should have good low-light performance or you should ensure good lighting. Many people successfully use a Logitech Brio or a Sony A-series camera via capture card. The key is a clear, well-lit view of the actor’s face. If you don’t have a dedicated camera, a recent smartphone filming you (with the footage fed into Faceware Studio) can also work.
  3. Can I use an iPhone with Faceware (instead of using ARKit/Live Link Face)?
    Yes, but not via the ARKit app. You can use an iPhone as a video camera for Faceware by streaming its camera feed to your PC (or by mounting it and capturing the footage). For example, you could use apps like EpocCam or NDI Camera to send the iPhone’s camera to your computer for Faceware Studio to capture. Some DIY creators mount an iPhone as a headcam and use OBS or NDI to get 60fps video into Faceware Studio. However, using the iPhone’s ARKit via Live Link Face is a separate method (that bypasses Faceware entirely). So, an iPhone can serve as a camera for Faceware, but Faceware will treat it like a normal video source, ignoring the depth sensor.
  4. How much does Faceware cost? Is there an indie license or free version?
    Faceware Studio (the real-time software) is a paid product, but they do offer indie licensing options which are more affordable than their pro licenses. As of recent info, Faceware had an Indie annual license that is cheaper for small studios or individuals. There is also a free trial of Faceware Studio you can use to test it out. The Faceware Live Link Unreal plugin is free to download. Faceware Analyzer/Retargeter (for offline) are typically part of their professional suite – more expensive and usually used by studios. They occasionally have promotional offers (like free for students or limited-time trials). Always check Faceware’s official site for the latest pricing details.
  5. Does Faceware Studio track eye movement and blinking accurately?
    Yes, Faceware tracks eyes and blinks. It can capture eye blink speed, frequency, and even some nuance of eyelid movement (like squinting). It also tracks eye gaze to an extent – by analyzing the iris position in the video, it will output eye rotation values so your MetaHuman’s eyes move. However, extreme side glances might be less accurate if the camera angle doesn’t clearly show the whites of the eyes. Overall, blinks and basic eye movements are well-handled. For very precise eye darts (small quick movements), you might need to fine-tune or occasionally clean up, but the system does provide data for eye direction and blink states.
  6. Can Faceware capture tongue movements or do I need to animate those by hand?
    Faceware primarily tracks external facial features – so it captures jaw movement, lip shapes, etc. It does not explicitly track the tongue position inside the mouth. If the actor sticks out their tongue, Faceware might register it as an extreme mouth open or some anomaly, but there isn’t a dedicated “tongue” output. ARKit (iPhone) has a “tongue out” blendshape it detects, but MetaHumans currently don’t have a fully articulated tongue anyway (they have a tongue mesh but limited animation on it). So, if your character needs a specific tongue animation (like licking lips or sticking tongue out), you would likely animate that manually in Unreal or via blendshape after the Faceware capture.
  7. What’s the difference between using Faceware Studio’s live stream and using Faceware’s Analyzer/Retargeter?
    Faceware Studio live stream is real-time – you get immediate results in Unreal as the actor performs. Analyzer/Retargeter is offline – you first record video, then later process it. The difference lies in workflow and control. Live is fast and interactive; offline is slower but allows more tweaking. Analyzer/Retargeter might yield a bit more precise result since you can manually adjust tracking in Analyzer and perfect the mapping in Retargeter. However, the core tech that captures the face is similar. Many choose the live route for convenience and only use Analyzer/Retargeter if they need that extra level of hand-holding the data or are working with previously recorded footage.
  8. Do I still need animators to polish the MetaHuman facial animation from Faceware, or is it final out of the box?
    Faceware captures the core of a performance, including timing and major expressions, simplifying the animator’s task to minor tweaks rather than full creation. Raw Faceware data is often sufficient for blocking, NPC animations, or real-time events, requiring no further edits. For cinematic hero close-ups, animators polish keyframes to sharpen expressions, adjust eyelines, or perfect lip-sync, achieving top-tier quality. This 5-10% hand-tuning is standard practice, even with high-quality mocap, to reach Pixar/ILM-level perfection.
  9. Can I use Faceware data on non-MetaHuman characters (like a stylized character or an animal)?
    Faceware can drive any rig with mapped facial controls, including stylized human characters using custom blueprints or Maya’s Retargeter to map outputs to blendshapes. Cartoon characters work by linking expressions, and animals or creatures with humanoid face structures can be creatively mapped, though results vary. Faceware’s tracking model, trained on human faces, expects a typical facial arrangement, so it struggles with non-humanoid faces like a dog’s snout. For human-like characters, Faceware data applies with proper mapping, with MetaHumans having pre-made mappings.
  10. Does using Faceware require an internet connection or cloud processing?
    Faceware Studio operates locally on your PC or Mac, processing facial tracking in real-time without an internet connection or cloud data transfer. Network connectivity is only needed for streaming between machines, such as from Faceware Studio to Unreal, on a local network. Licensing may require an initial internet check, but operation remains local, ideal for secure or offline environments. Ensure your PC can handle real-time processing.

Conclusion

Faceware enables lifelike MetaHuman facial animations in Unreal Engine 5 by streaming live performances or using offline tools for refined results. The workflow integrates Faceware’s facial motion capture with Epic’s digital humans, requiring proper setup of camera, calibration, and UE5 blueprint configuration. Faceware Studio and Unreal’s plugin drive MetaHuman faces in real-time using a camera, while Analyzer/Retargeter tools allow post-production polishing.

This accessible pipeline suits live and offline needs, delivering film-quality results for projects from indie to cinematic. Best practices in camera settings, lighting, and animation cleanup ensure accurate, nuanced facial animations. MetaHumans achieve lifelike expressions through Faceware’s capture of subtle human performance details like eye movements and smiles. The workflow supports real-time direction of MetaHumans like real actors, translating human emotion into digital characters. The Faceware–MetaHuman pipeline is flexible, continues to evolve, and scales from indie projects to full cinematics; with practice, these industry-proven tools let creators produce results that are both technically and emotionally compelling.
