Mesh to Metahuman is a powerful feature in Unreal Engine 5’s Metahuman plugin that allows users to convert a custom 3D head mesh into a fully rigged, photorealistic digital character quickly and efficiently. The process begins by importing a scanned or modeled head mesh, where the system uses automated landmark tracking to align it with Epic Games’ Metahuman face template, pairing it with a pre-designed body.
This data is then processed in the cloud, matching the custom mesh to the closest face in the Metahuman database, resulting in a high-quality digital double equipped with advanced rigging and shaders. Introduced in 2022, this tool simplifies the creation of lifelike characters for games and films, making it accessible even to those without extensive rigging experience.
Can I create an AAA-quality digital double using Mesh to Metahuman?
Yes, Mesh to Metahuman can absolutely produce AAA-quality digital doubles when you start with a high-quality input mesh and apply proper finishing techniques to enhance the result. The tool leverages Epic’s cutting-edge character technology, including realistic skin shaders, detailed eye shaders, and a sophisticated facial rig, to deliver photorealistic outcomes suitable for top-tier productions. For instance, the Lightfall short film showcased its potential by turning a real actor’s scan into a lifelike digital double used in a high-end cinematic project. However, achieving true AAA quality often requires additional steps like custom texturing and detailed hair work to meet the highest industry standards.

What are the requirements for using Mesh to Metahuman with a scanned head?
To use Mesh to Metahuman successfully with a scanned head, the mesh must meet specific criteria set by Epic Games to ensure a smooth and accurate conversion. These requirements cover file format, topology, eye visibility, expression, texture, geometry detail, and the exclusion of unnecessary parts:
- File format: The mesh needs to be exported in either FBX or OBJ format, which are both supported by Unreal Engine 5 for importing static or skeletal meshes into the Metahuman pipeline. These formats ensure compatibility with the plugin’s import system, allowing the cloud solver to process the mesh without hitches or errors. FBX is versatile for simpler meshes, while OBJ handles denser geometry more efficiently, making them ideal choices for different workflows. Using the correct format is critical to avoid import failures and ensure the conversion starts off on the right foot.
- Topology: The mesh must consist of a single, continuous head surface without including separate eyeball or teeth geometry, as Metahuman provides those elements itself. This requirement keeps the focus solely on the facial structure, enabling the system to map the custom mesh onto the Metahuman template accurately and efficiently. A clean, unified topology prevents confusion during the landmark tracking phase, which could otherwise lead to distorted or incomplete results. Excluding extra elements like eyes simplifies the process and ensures a seamless integration with the preset components.
- Eyes open: The scan must feature open eyes with visible sclera, as this allows the automated landmark tracking to correctly identify and align key facial features during conversion. Closed eyes can confuse the solver, leading to misaligned eyes or facial distortions that ruin the final Metahuman’s appearance and functionality. Visible sclera provides clear reference points, ensuring the eyes are positioned naturally within the template’s structure for realistic animation later. This step is non-negotiable for achieving an accurate and usable digital double.
- Neutral expression: A neutral, relaxed facial expression with a closed mouth is essential for the mesh to align properly with the Metahuman template during the conversion process. Extreme expressions like smiles or frowns can skew the solver’s ability to fit the template, resulting in an inaccurate representation of the subject’s face in the final output. A neutral pose provides a stable baseline, allowing the system to map features like the jawline and cheeks without distortion or error. This ensures the digital double starts with a clean slate for further animation and customization.
- Texture or material: Including a textured mesh with an albedo map or basic materials that highlight facial areas greatly improves the accuracy of landmark detection in the conversion process. These visual cues help the system distinguish critical features like the eyes, nose, and lips, leading to a more precise fit with the Metahuman template. Without texture, the solver might struggle to identify boundaries, potentially causing alignment issues or a less detailed result. A simple material setup can make a significant difference in the quality of the final digital double.
- Geometry detail and size: The mesh should have fewer than 200,000 vertices to keep the conversion process efficient, with OBJ being the preferred format for high-polygon counts to avoid slow FBX imports. This polycount strikes a balance between retaining facial detail and ensuring the cloud solver can process the mesh quickly without overloading the system. Too many vertices can bog down the import and conversion stages, while too few might lose critical shapes needed for a realistic outcome. Keeping within this limit optimizes both performance and quality for a smooth workflow.
- No extra parts: The mesh must exclude extraneous geometry like shoulders, hair, or clothing, focusing only on the head and a small portion of the neck to match the Metahuman body seamlessly. Including these elements can disrupt the landmark tracking and confuse the solver, leading to errors or an incomplete conversion that doesn’t align with the preset body. Removing them ensures the system concentrates on the facial structure alone, producing a cleaner and more accurate digital double. This step is key to avoiding unnecessary complications during the process.
You’ll also need a Windows system and a Quixel Bridge login to submit the mesh for cloud processing, tying the workflow to Epic’s ecosystem.
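For a quick sanity check against these geometry requirements before export, a short Blender Python (bpy) snippet along the following lines can report the vertex count, non-manifold edges, and any stray mesh objects. It is a minimal sketch, assuming the head mesh is the active object in Blender, and is not part of Epic's official workflow.

```python
import bpy
import bmesh

obj = bpy.context.active_object
assert obj is not None and obj.type == 'MESH', "Select the head mesh first"

# 1. Vertex budget: the guideline cited above is to stay under ~200,000 vertices.
vert_count = len(obj.data.vertices)
print(f"Vertex count: {vert_count} (keep under ~200,000)")

# 2. Watertightness: non-manifold edges usually mean holes or stray scan geometry.
bm = bmesh.new()
bm.from_mesh(obj.data)
open_edges = sum(1 for e in bm.edges if not e.is_manifold)
print(f"Non-manifold edges: {open_edges} (0 is ideal for a watertight head)")
bm.free()

# 3. Single-mesh rule: other mesh objects (eyes, teeth, shoulders) should not be exported.
extras = [o.name for o in bpy.context.scene.objects if o.type == 'MESH' and o is not obj]
if extras:
    print("Other mesh objects in the scene - exclude them from export:", extras)
```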

Can I use photogrammetry or 3D scans for AAA digital doubles in UE5?
Absolutely, photogrammetry and 3D scans are excellent methods for creating AAA-quality digital doubles in Unreal Engine 5 using Mesh to Metahuman, thanks to their ability to capture real-world details with stunning accuracy. Software like Agisoft Metashape or mobile apps like Polycam can generate highly detailed head meshes from photographs or scans, providing a strong starting point for the conversion process. These techniques excel at preserving unique traits like skin texture and facial proportions, as seen in Epic’s Lightfall short film, where a scanned actor became a lifelike digital double. After some cleanup, these meshes integrate seamlessly with Metahuman’s rigging system, making them a go-to choice for top-tier character creation.
How do I prepare a high-resolution mesh for Metahuman conversion?
Preparing a high-resolution mesh for Metahuman conversion involves a series of steps to ensure it’s optimized for Unreal Engine 5’s cloud-based system:
- Start with a neutral, detailed head model: Begin with a head captured in a neutral expression with open eyes, using photogrammetry or sculpting to ensure all facial features are well-defined and clear. This detailed foundation is crucial because the solver relies on accurate data to align the Metahuman template properly with the custom mesh. A high-quality starting point ensures the digital double reflects the subject’s true appearance without losing key characteristics. Any lack of detail here could compromise the entire conversion process downstream.
- Clean up scanning artifacts: Remove stray geometry like floating bits, hair strands, or incomplete sections, and fill holes in areas like the chin or scalp to create a solid, watertight mesh. This cleanup prevents errors during conversion, as artifacts can mislead the landmark tracking and result in a flawed digital double. Tools like Blender or ZBrush make this process straightforward, smoothing out noise while preserving the face’s core structure. A clean mesh is essential for the solver to focus on the intended geometry without distractions.
- Ensure proper head geometry boundaries: Trim the mesh to include just the head and a short neck segment, cutting off shoulders or torso to align with Metahuman’s preset body attachment system. This boundary ensures a smooth transition between the custom head and the standard body, avoiding awkward seams or mismatches in the final character. Keeping the geometry focused on the head simplifies the conversion and improves accuracy during template fitting. Proper boundaries are a small but critical detail for a polished outcome.
- Neutralize textures and lighting: Apply an evenly lit albedo map or basic materials to the mesh to clearly differentiate facial features, aiding the solver in pinpointing landmarks accurately. Uneven lighting or shadows can obscure details, causing the system to misplace features like the eyes or mouth in the final Metahuman. A neutral texture setup enhances detection reliability, ensuring the digital double’s face aligns closely with the original scan. This step bridges the gap between raw geometry and a visually coherent input.
- Decimate or retopologize to a manageable polycount: Reduce the mesh’s polygon count to under 200,000 using tools like Blender’s Decimate modifier or ZBrush’s Decimation Master, while keeping the facial shape intact. This optimization speeds up the cloud processing and prevents import issues, as overly dense meshes can overwhelm the system or slow it down significantly. A manageable polycount maintains efficiency without sacrificing the details needed for a realistic conversion. It’s a balancing act that ensures both performance and quality are prioritized.
- Align the head orientation: Position the head facing forward and centered at the origin in your 3D software, making it easy to import and align within Unreal Engine’s Neutral Pose setup. Proper orientation avoids manual adjustments later, streamlining the workflow and ensuring the mesh fits the Metahuman template without unnecessary tweaking. Misalignment can lead to skewed results, so this step locks in accuracy from the start. It’s a simple adjustment with a big impact on the conversion’s success.
- Export in the correct format: Save the mesh as an OBJ or FBX file at a real-world scale (around 25 cm tall) to match the proportions of Metahuman bodies and ensure compatibility with Unreal Engine. OBJ is better for high-poly meshes due to faster imports, while FBX suits simpler setups, giving flexibility based on your mesh’s complexity. Correct scaling prevents size mismatches with the preset body, maintaining a natural look in the final character. This final export step ties all the preparation together for a seamless handoff to the plugin.
These steps create a mesh that’s primed for a high-quality Metahuman conversion, minimizing errors and maximizing fidelity.
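The decimation, alignment, and export steps can also be scripted. The rough Blender (bpy) sketch below assumes a single scan object named "HeadScan" (a placeholder) and Blender 3.3+ for the newer OBJ exporter; the 200,000-vertex target is the guideline cited above.

```python
import bpy

obj = bpy.data.objects["HeadScan"]  # placeholder name for the scanned head
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# Decimate toward the ~200k vertex guideline (ratio is approximate, since the
# Decimate modifier works on faces, but vertex count scales with it closely).
target_verts = 200_000
ratio = min(1.0, target_verts / max(1, len(obj.data.vertices)))
dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
dec.ratio = ratio
bpy.ops.object.modifier_apply(modifier=dec.name)

# Center the head at the origin and apply transforms so it imports cleanly.
obj.location = (0.0, 0.0, 0.0)
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

# Export only the selected head as OBJ at real-world scale (Blender 3.3+ exporter).
bpy.ops.wm.obj_export(filepath="//head_for_metahuman.obj", export_selected_objects=True)
```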

How do I clean and retopologize a mesh for Mesh to Metahuman import?
Cleaning and retopologizing a mesh for Metahuman import refines it to meet the tool’s standards while preserving the subject’s likeness. The main steps are:
- Remove unwanted details: Strip away non-facial elements like hair, shoulders, or stray polygons from the mesh, leaving only the head and a bit of neck for the conversion process. This focus on the face ensures the solver isn’t distracted by irrelevant geometry, which could throw off landmark tracking and ruin the result. Tools like Blender’s sculpting brushes or ZBrush’s clipping tools make it easy to isolate the essential parts quickly. A clean slate here sets the stage for an accurate digital double without extra baggage.
- Close holes and smooth errors: Fill in gaps under the chin or on the scalp and smooth out noisy, uneven areas using features like Blender’s Fill tool or ZBrush’s Dynamesh for a watertight surface. Holes or rough patches can disrupt the solver’s ability to map the Metahuman template, leading to incomplete or distorted facial features in the output. Smoothing ensures the mesh flows naturally, mimicking real skin without jagged artifacts that could affect animation later. This step turns a messy scan into a polished input ready for processing.
- Simplify the mesh (optional but recommended): Lower the polygon count to under 200,000 using decimation in Blender or remeshing in ZBrush, keeping the face’s silhouette and key features intact for efficiency. Simplifying the mesh speeds up the cloud conversion process and avoids import slowdowns, which can occur with overly detailed geometry that’s unnecessary for the solver. It’s a practical move that maintains quality while making the workflow smoother and less resource-intensive. Even with fewer polygons, the core likeness remains strong for a successful outcome.
- Check feature alignment and proportions: Inspect the simplified mesh to ensure eyes, nose, and lips are still correctly positioned, tweaking with sculpting tools if they’ve shifted during cleanup or decimation. Misaligned features can break the likeness, making the digital double unrecognizable or unnatural compared to the original subject. Adjusting proportions now guarantees the solver maps the template accurately, avoiding costly fixes after conversion. This quality check is a safeguard for preserving the subject’s identity in the final character.
- UVs and textures (optional): Keep or generate UVs to bake details like color or normals from the scan, allowing you to transfer them to the Metahuman later for enhanced realism. While not required for conversion, UVs enable custom texturing post-process, capturing fine skin details that the solver alone might miss. Tools like Blender’s UV unwrap or ZBrush’s UV Master make this step manageable, setting up future visual upgrades. It’s an optional boost that pays off for AAA-quality results down the line.
This process delivers a refined mesh that’s fully prepared for a smooth and precise Metahuman import.
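If you want to script part of this cleanup pass, the small bpy sketch below deletes loose scan debris, merges duplicate vertices, and fills remaining holes; the merge threshold is illustrative and the head mesh is assumed to be the active object.

```python
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Remove floating bits and stray edges left over from scanning.
bpy.ops.mesh.delete_loose(use_verts=True, use_edges=True, use_faces=True)

# Merge near-duplicate vertices created by scan noise, then fill remaining
# holes (e.g., under the chin or on the scalp) so the surface is watertight.
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0005)  # 0.5 mm at a 1 unit = 1 m scale
bpy.ops.mesh.fill_holes(sides=0)  # sides=0 fills holes of any size

bpy.ops.object.mode_set(mode='OBJECT')
```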
What tools help optimize mesh geometry for Metahuman compatibility?
Several tools can optimize mesh geometry for Metahuman compatibility, each offering unique strengths for preparing and refining your mesh:
- Blender: This free, open-source software excels at importing, cleaning, and decimating meshes, with Sculpt Mode and modifiers like Decimate simplifying geometry while keeping facial details sharp. It’s perfect for budget-conscious creators, letting you remove artifacts, fill holes, and assign basic materials without spending a dime. Blender’s versatility makes it a go-to for preparing meshes to meet Metahuman’s topology and polycount requirements efficiently. Its accessibility and robust toolset ensure anyone can optimize a mesh with professional results.
- ZBrush: A professional favorite, ZBrush handles high-res sculpting and cleanup with tools like ZRemesher and Decimation Master, reducing polycounts while preserving intricate facial shapes. It’s ideal for refining photogrammetry scans, smoothing noise, and filling gaps to create a clean, animation-ready mesh for Metahuman conversion. The software’s advanced sculpting capabilities allow for precise adjustments, ensuring the likeness stays intact through the process. Though it’s a paid tool, its power justifies the cost for high-end digital double workflows.
- RealityCapture / Metashape (for photogrammetry stage): These photogrammetry tools generate detailed head meshes and textures from photos, offering built-in simplification options to kickstart the optimization process. They’re the foundation for capturing real-world data, producing high-quality raw meshes that need only minor cleanup before Metahuman conversion. RealityCapture is fast and precise, while Metashape offers flexibility, both setting you up with a solid base for further refinement. Their output feeds directly into the next stages of the pipeline with minimal fuss.
- R3DS Wrap (Wrap3) or ZWrap plugin: Specialized wrapping tools like these conform clean topology onto raw scans, giving you precise control over the mesh’s structure for Metahuman compatibility. They’re perfect for high-end projects, transferring scan details to a pre-optimized base mesh that meets Epic’s requirements effortlessly. R3DS Wrap offers standalone power, while ZWrap integrates with ZBrush, both streamlining topology cleanup for flawless conversions. These tools shine in professional pipelines where accuracy and efficiency are paramount.
- MeshLab: A lightweight, free tool, MeshLab provides quick decimation, normal fixes, and basic cleanup, making it great for fast mesh optimization on a budget. It’s handy for reducing polycounts or repairing isolated vertices, ensuring the mesh meets Metahuman’s 200,000-vertex limit without much hassle. While not as feature-rich as Blender or ZBrush, its simplicity speeds up basic operations in the preparation workflow. MeshLab is a solid utility for users needing a no-frills solution to get the job done.
- TopoGun / Maya / 3ds Max: These tools offer manual retopology and cleanup with fine-tuned control, useful for custom adjustments beyond Metahuman’s immediate needs but versatile for broader projects. TopoGun specializes in topology, while Maya and 3ds Max provide all-purpose modeling power to refine meshes to exact specifications. They’re optional for Metahuman but invaluable if you’re perfecting a mesh for multiple uses or extreme detail. Their precision makes them a strong choice for artists comfortable with advanced 3D workflows.
- Substance 3D Painter or Photoshop (for textures): These texturing tools neutralize lighting or bake scan details into albedo maps, enhancing the mesh’s visual clarity for landmark tracking and post-conversion realism. Substance Painter excels at painting custom textures, while Photoshop handles quick edits, both preparing the mesh for a better solver fit. They’re key for optional texture work, letting you transfer skin details to the Metahuman for a lifelike finish. This visual boost elevates the final digital double beyond the default output.
Together, these tools cover every angle of mesh optimization, ensuring compatibility with Metahuman’s system.
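For a scriptable, no-GUI route in the spirit of the MeshLab entry above, pymeshlab (MeshLab's Python bindings) can decimate a raw scan in a few lines. This is a sketch only: filter and parameter names follow recent pymeshlab releases, and the file names and face budget are placeholders.

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("raw_head_scan.obj")  # placeholder input path

# Quadric edge-collapse decimation toward a face budget that keeps the vertex
# count well under the ~200k guideline while preserving boundaries and normals.
ms.meshing_decimation_quadric_edge_collapse(
    targetfacenum=150_000,
    preserveboundary=True,
    preservenormal=True,
)

ms.save_current_mesh("head_decimated.obj")
print(ms.current_mesh().vertex_number(), "vertices after decimation")
```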

How accurate is Mesh to Metahuman for creating realistic facial topology?
Mesh to Metahuman is highly accurate at creating realistic facial topology, adapting Epic’s optimized template to the custom mesh’s shape for animation-ready results that look natural and expressive. It maintains critical proportions like eye spacing or nose width, delivering a dense topology (7,000–15,000 vertices at LOD0) that supports detailed movements like smiles or blinks with smooth deformation. The automated process ensures edge loops around the eyes and mouth align perfectly, making it ideal for lifelike human faces in games or films. While minute details might need texture enhancements, its topological accuracy makes it a top choice for realistic digital doubles.
Can I preserve likeness and sculpted detail when using Mesh to Metahuman?
Yes, you can preserve likeness and sculpted detail with Mesh to Metahuman by using a high-quality input mesh and refining the output with custom textures and sculpting tweaks. The solver captures major features like cheekbones or jawlines, while projecting scan-based normal and diffuse maps adds fine details like wrinkles or pores for a closer match. Post-conversion adjustments in Metahuman Creator, like asymmetry tweaks, further enhance recognition, ensuring the digital double mirrors the subject closely. With careful input preparation and finishing work, the tool achieves a near-identical likeness suitable for AAA-quality characters.
What are the differences between a standard Metahuman and a custom AAA digital double?
Standard Metahumans and custom AAA digital doubles differ in creation, customization, and purpose, though they share the same rigging foundation. The key differences are:
- Creation Process: Standard Metahumans are built quickly in Metahuman Creator by mixing presets and tweaking sliders, offering an easy way to craft realistic but generic characters in minutes. Custom AAA digital doubles start with real-world scans or sculpts, requiring a detailed pipeline of cleanup, conversion, and texturing to replicate a specific person accurately. The standard method prioritizes speed and simplicity for broad use, while the custom approach demands more time and skill for precision. This fundamental difference shapes their roles in production workflows.
- Likeness to real individuals: Custom doubles aim to mirror specific individuals, capturing unique traits like scars or facial quirks, making them perfect for hero characters or actor replicas in high-stakes projects. Standard Metahumans, though photorealistic, are generic by design, lacking the personalized detail needed to represent a real person accurately. This makes customs ideal for recognizable leads, while standards fill out background roles or prototyping needs. The focus on individuality sets custom doubles apart for standout applications.
- Topology & Rig: Both types share the same topology and rigging system, with custom doubles adjusting vertex positions to fit scan data and standards using preset morphs for variation. This consistency ensures identical animation capabilities, letting animators use the same tools like Control Rig or mocap on either without relearning workflows. The shared structure simplifies production, as the underlying tech remains uniform despite their creation differences. It’s a practical design that keeps flexibility high across both options.
- Materials and Textures: Custom doubles often use bespoke textures from scans, reflecting precise skin details like freckles or blemishes, tailored to the subject for maximum realism. Standard Metahumans rely on pre-made library textures that look great but aren’t specific to any one person, limiting their uniqueness. This texture disparity means customs shine in close-ups, while standards offer a solid but less personalized visual base. It’s a trade-off between customization depth and out-of-the-box usability.
- Body and proportions: Both use preset Metahuman bodies, with custom doubles limited to head personalization unless you model a custom body separately, which isn’t standard practice. This means the body might not fully match the subject’s build, though the head will be spot-on for customs, while standards are fully preset from head to toe. The limitation keeps workflows streamlined but can restrict full-body accuracy for custom characters. It’s a practical compromise given the tool’s focus on facial fidelity.
- Level of detail focus: Custom doubles target hero characters, often requiring extra work like high-res textures or custom hair to meet cinematic or lead-role standards. Standard Metahumans serve broader purposes like crowds or quick prototypes, with less emphasis on individual detail and more on general realism. This makes customs resource-intensive but visually superior, while standards save time for less critical roles. The detail focus aligns with their intended use in production hierarchies.
- Limitations of uniqueness: Custom doubles can push beyond preset limits, capturing extreme features like aged skin or unusual bone structures that standard Metahumans can’t replicate fully. Standards are bound by the library’s archetypes, restricting them to a narrower range of realistic but typical designs for efficiency. This flexibility gives customs an edge for standout or niche characters, while standards stay practical for mass use. It’s a key distinction for projects needing truly unique digital humans.
Custom doubles offer precision and personalization, while standard Metahumans prioritize speed and generality.

How do I rig and animate a Metahuman digital double after mesh conversion?
Once converted, a Metahuman digital double comes fully rigged with Epic’s facial and body systems, ready for animation without any manual setup required. You can animate it using Unreal’s Control Rig to keyframe expressions and poses, retarget existing animations to the skeleton for body motion, or use facial mocap tools like Live Link Face for real-time performance capture. Downloaded via Quixel Bridge, the asset works seamlessly in both real-time and cinematic workflows, supporting everything from games to virtual production. This plug-and-play rigging saves time, letting you jump straight into bringing the character to life.
What is the best workflow for cinematic-quality Metahuman characters?
For cinematic-quality Metahuman characters, a detailed workflow combines Metahuman’s automation with custom refinements for top-notch realism:
- High-Quality Capture or Modeling: Start with a high-res photogrammetry scan or sculpted head in a neutral pose, capturing fine details like skin texture and proportions with pinpoint accuracy. This initial quality sets the tone, ensuring the mesh carries the subject’s likeness into the conversion process without losing critical features. Tools like RealityCapture or ZBrush help create a robust base that stands up to cinematic scrutiny. A strong foundation here is non-negotiable for a believable final character.
- Mesh to MetaHuman Conversion: Import the mesh into Unreal Engine and align it carefully, placing landmarks with precision to guide the solver into an accurate Metahuman fit. This step transforms the raw geometry into a rigged digital double, leveraging cloud processing to match it to the closest template face efficiently. Attention to detail during alignment prevents errors like skewed features that could derail the realism later. It’s the bridge from raw data to a functional character ready for enhancement.
- MetaHuman Creator Tweaks: Adjust skin tones, eye details, and body options in Metahuman Creator, using sculpting tools to fine-tune the face for a closer match to the subject’s appearance. These tweaks refine the automated output, correcting minor solver inaccuracies and boosting the likeness for cinematic use. The interface makes it easy to iterate quickly, ensuring the character feels authentic before moving to deeper customization. This step polishes the base into something more personalized and film-ready.
- Custom Texturing and Shading: Project scan textures onto the Metahuman UVs to add unique skin details like pores or blemishes, replacing default materials with bespoke ones for a lifelike look. This process captures the subject’s visual identity, enhancing realism where the solver alone falls short, especially in close-up shots. Tools like Substance Painter make it seamless to bake and refine these textures for a perfect fit. Custom shading is what elevates the character from good to breathtaking on screen.
- Hair and Grooming: Create realistic hair using tools like PixelHair or custom grooms, adding physics simulation to mimic natural movement under cinematic lighting and motion. Hair is a make-or-break detail for realism, requiring careful placement and styling to match the subject’s look convincingly. This step often takes extra effort but pays off in making the character feel alive and dynamic. A well-groomed head ties the visual package together for a polished result.
- Eyes and Dental Details: Customize eye textures for realistic wetness and color, and tweak teeth to reflect the subject’s dental traits, adding subtle occlusion for depth. These small features are critical in close-ups, where viewers notice every nuance, making or breaking the illusion of life in the character. Adjustments here use Metahuman’s built-in systems, keeping the process manageable yet impactful. Lifelike eyes and teeth sell the performance as much as the broader face does.
- Clothing and Body: Model custom clothing and adjust body proportions to fit the subject, moving beyond the preset options for a fully cohesive character design. This ensures the digital double isn’t just a head on a generic body, aligning the full figure with the intended likeness for consistency. Tools like Marvelous Designer or Maya can craft these assets, integrating them into Unreal seamlessly. A tailored body completes the cinematic presentation from top to bottom.
- Animation and Facial Performance Capture: Use Metahuman Animator or mocap tools to record lifelike expressions and body movements, capturing the subject’s performance nuances for authenticity. This brings the character to life, syncing subtle facial ticks or gestures with the high-quality visuals for a believable on-screen presence. The pre-rigged system makes this step intuitive, supporting both real-time and keyframed animation workflows. Great animation turns a static double into a compelling actor.
- Lighting and Rendering: Apply Unreal’s Lumen or Path Tracing with high LODs, tuning subsurface scattering and ray tracing to render skin, hair, and eyes with cinematic realism. Proper lighting highlights the textures and details, integrating the character into the scene with film-quality depth and reflection. This step leverages Unreal’s real-time power to achieve results rivaling traditional offline renders. It’s where the technical meets the artistic for a stunning visual payoff.
- Polish and Grading: Finish with post-processing like color grading, depth of field, or lens effects to give the render a filmic sheen, enhancing the final look for the big screen. These touches refine the raw output, adding atmosphere and professionalism that elevate the character beyond a technical achievement. Unreal’s post-process stack makes this accessible within the engine, wrapping up the workflow efficiently. Polishing ensures the digital double feels like it belongs in a Hollywood production.
This workflow blends automation with artistry, delivering a Metahuman character that meets the demands of cinematic excellence.

How do I handle skin textures and maps after converting a custom mesh?
Handling skin textures is crucial post-conversion as Mesh to MetaHuman doesn’t transfer your scan’s texture automatically, leaving the MetaHuman with default or placeholder textures that need re-authoring. So, here’s how you can manage the textures and maps to get your digital double looking right:
- Extract or recreate the skin texture (albedo)
To apply your scan’s albedo texture, bake it onto the MetaHuman UV layout using Substance Painter or Blender by importing both the MetaHuman head and your textured scan. Project or bake the scan’s texture onto the MetaHuman mesh to fit its UV islands, preserving the actor’s skin details like freckles. Clean up the baked texture in Photoshop or GIMP to fix seams if needed, ensuring a smooth appearance. Apply this texture to the MetaHuman’s face material in Unreal Engine via a Material Instance override for accurate coloration.
- Blend with MetaHuman base textures
MetaHuman materials include epidermal and detail normal maps that can enhance your scan’s texture with subtle details like vasculature. Blending these base maps with your scan-based diffuse can improve realism, depending on your scan’s quality. Often, the scan’s real color is the best diffuse source, but blending can add depth. Assess your input to determine if blending enhances the final look.
- Generate missing maps (specular, roughness, normal)
Since scans provide color but not roughness or normals, start with MetaHuman’s default maps and customize them for your subject. Bake normal maps from high-res geometry or paint details like pores in Substance Painter to match reference photos. Create a roughness map by painting oilier areas like the T-zone darker and glossier areas like lips nearly black, guided by the MetaHuman default. Adjust subsurface maps for accurate skin tone and translucency, tweaking parameters for realism.
- Material instance settings
In Unreal, create a Material Instance override for the MetaHuman face to apply your custom albedo, normal, and roughness textures. Replace the Skin_Body base color and normal in the material slots to reflect your textures accurately. Adjust scalar parameters like “Skin Offset” or “SSS Amount” to match real-life skin response under lighting. Test in a calibrated lighting setup to ensure the face looks natural and not waxy.
- Addressing seams
Ensure the neck seam between the head and body blends seamlessly by matching skin tones across both textures. Adjust the body texture or material parameters if the face texture differs significantly from the default body tone. Check the neck blend in Unreal to avoid visible lines, tweaking as needed. This step maintains a cohesive appearance across the digital double.
- Eyes, teeth, hair textures
Modify eye materials by tweaking parameters or editing iris textures to match the actor’s unique eye color. Adjust teeth materials if the actor has distinct features like a gold tooth, though defaults often suffice. For hair, edit existing MetaHuman hair textures or use custom grooms if needed for color accuracy. These adjustments enhance the overall likeness beyond just skin.
- Validate in Engine
Test the digital double in Unreal under various lighting conditions like daylight or harsh side light to verify skin properties. Ensure pores catch highlights and roughness reflects oily areas appropriately without baked-in lights. Tweak textures and materials based on these tests for realistic rendering. This validation confirms the skin looks lifelike in-engine.
- Possible use of character-specific shaders
For top-tier realism, consider custom shaders to simulate effects like peach fuzz or subdermal veins not covered by default MetaHuman shaders. While MetaHuman’s skin shaders are advanced, bespoke shaders can meet specialized high-end needs. Evaluate if your project requires this extra fidelity for close-up shots. Implement only if necessary to enhance specific skin characteristics.
Mesh to MetaHuman provides geometry but requires manual texture refinement for true likeness. This reauthoring process is essential for a high-quality digital double capturing the actor’s exact features.
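Parts of this re-authoring can be scripted with Unreal's editor Python API once the textures are imported. The sketch below loads a face Material Instance and overrides texture and scalar parameters; the asset paths and parameter names ("Color_MAIN", "Normal_MAIN", "SSS Amount") are placeholders, so check the parameters your MetaHuman's face material actually exposes before relying on them.

```python
import unreal

# Placeholder asset paths - substitute your own MetaHuman's assets.
mi = unreal.load_asset("/Game/MetaHumans/MyDouble/Face/MI_MyDouble_Face")
albedo = unreal.load_asset("/Game/MetaHumans/MyDouble/Textures/T_Face_Albedo")
normal = unreal.load_asset("/Game/MetaHumans/MyDouble/Textures/T_Face_Normal")

mel = unreal.MaterialEditingLibrary

# Override the baked scan textures (parameter names vary between MetaHuman versions).
mel.set_material_instance_texture_parameter_value(mi, "Color_MAIN", albedo)
mel.set_material_instance_texture_parameter_value(mi, "Normal_MAIN", normal)

# Nudge a scalar parameter such as the subsurface amount mentioned above (illustrative value).
mel.set_material_instance_scalar_parameter_value(mi, "SSS Amount", 0.8)

unreal.EditorAssetLibrary.save_loaded_asset(mi)
```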
Can I use Mesh to Metahuman for actors in virtual production or film?
Mesh to MetaHuman excels in virtual production and film by quickly creating rigged digital doubles within Unreal Engine, ideal for stunts, de-aging, or fantastical scenes. Its real-time rendering speeds up visualization, enabling on-set decisions for VFX like stunt coordination or LED wall content within hours or days. The MetaHuman Animator enhances this with near real-time facial animation from captured performances, offering directors instant feedback during shoots. Quality-wise, custom-textured MetaHumans meet film standards with robust rigs, as seen in Epic’s demos and shorts like “Lightfall.” For smaller productions, it provides final assets or placeholders exportable to tools like Maya for refinement.
In virtual production, MetaHumans ensure consistency with live actors via performance capture, perfect for background doubles or LED wall shoots. Real-time puppeteering lets directors tweak scenes live, boosting creativity for previs or principal VFX shots. In film, it supports traditional pipelines by bridging live and digital elements seamlessly. This tool revolutionizes workflows, making high-quality, animatable digital humans accessible for both indie and AAA projects, enhancing storytelling with lifelike doubles.

How do I sync mocap or facial tracking data with a Metahuman digital double?
Syncing mocap or facial tracking to a MetaHuman involves driving its skeletal and facial rigs with external data. Here’s how to do it:
- For Body Mocap
Obtain body mocap data from suits like Xsens or FBX files and retarget it to the MetaHuman skeleton using UE5’s IK Retargeter, mapping bones like “hip” to “hip.” For real-time, use Live Link with plugins like Xsens to stream motion into Unreal, driving the MetaHuman via a Live Link Pose node. Adjust proportions in the IK Retargeter if the MetaHuman differs from the mocap actor, ensuring accurate motion. Test and refine with IK to keep feet grounded and movements natural.
- For Facial Mocap/Tracking
Use Live Link Face with an iPhone to capture ARKit blendshapes, mapping them to the MetaHuman facial rig in the Animation Blueprint for real-time facial movement. Alternatively, MetaHuman Animator processes iPhone or helmet cam footage with calibration poses to generate high-quality facial animation assets quickly. Other solutions like Faceware can drive the rig if mapped to MetaHuman controls, though ARKit options are simpler. Ensure body and face data sync via timecode or simultaneous capture, aligning them in Sequencer if needed.
Syncing involves retargeting body motion and feeding facial data, enabling real-time mirroring of an actor’s performance for virtual production or cinematic animation.
Can I use Mesh to Metahuman with full body scans or only heads?
Mesh to MetaHuman focuses solely on heads, not full body scans, as it processes only facial geometry and rigs it to preset MetaHuman bodies. Even with a full body scan, only the head is used, and you must select a body template separately, limiting exact body replication. This design prioritizes facial likeness and simplifies rigging with modular bodies of fixed proportions. Auto-rigging custom bodies isn’t supported, though advanced users can manually rig a scanned body, a complex task. For closer body matches, pick a preset resembling the actor’s build or adjust it minimally.
Workarounds include projecting scan textures onto the preset body for details like tattoos or using scan measurements to tweak proportions, enhancing accuracy within limits. Epic hasn’t expanded the tool for full body integration yet, so head-only conversion remains standard. This approach suits projects prioritizing facial fidelity over body precision, common in virtual production or film.
How do I blend high-resolution scans with Metahuman body templates?
Blending a high-resolution head scan with a MetaHuman body ensures a seamless neck join and cohesive look. Here’s how to blend them effectively:

- Neck Seam Alignment
MetaHuman merges the head and body, scaling the head to fit the body’s neck during conversion, but slight mismatches may occur. Choose a body preset in MetaHuman Creator matching the subject’s build to align neck thickness and shoulder width. Adjust neck girth indirectly via body type options if the seam isn’t perfect. This ensures the head fits proportionally without visible disconnects.
- Skin tone and texture continuity
Match the face and body skin tones by adjusting the body texture’s hue in Photoshop to align with the custom face texture. Ensure the neck areas of both textures blend in color and lighting to avoid lines. Edit the body diffuse for visible features like blemishes if needed, maintaining uniformity. MetaHuman’s material blending helps, but manual tweaks perfect the continuity.
- Use of body masks
If a seam persists, paint a blurred mask or adjust the “Neck Depth” parameter in Unreal’s material instance to smooth the transition. This aligns face and body normals at the boundary for a seamless look. Check the material settings to ensure no drastic detail differences. This step refines the blend where automated merging falls short.
- Matching proportions
Adjust the Height slider and body type in MetaHuman Creator to match the scan’s head-to-body ratio, as automatic scaling may not fully align. If proportions still seem off, minor skeletal tweaks can refine the fit, though usually unnecessary with the right template. Compare the scan’s proportions to the MetaHuman for accuracy. This ensures the head doesn’t appear oversized or undersized.
- Custom body features
Incorporate specific features like tattoos by editing the MetaHuman body diffuse map with details extracted from the scan. Place these accurately on the texture to reflect the subject’s unique marks. This enhances the body’s likeness beyond preset limits. Ensure edits align with visible areas for realism.
- Clothing considerations
If clothing hides most of the body, focus skin blending on exposed areas like neck and hands, matching tones across all parts. This minimizes effort where coverage reduces visibility. Ensure consistency in visible skin for cinematic quality. Clothing can mask minor mismatches effectively.
- Testing the blend
Test in a 3-point light or HDRI setup to check the neck seam and overall unity, adjusting textures if lines appear. Increase subsurface scattering blur as a last resort to hide minor issues, though texture matching is preferred. Verify the head and body integrate seamlessly under scrutiny. This confirms a cohesive digital double.
Blending involves making the neck invisible and unifying skin and shape, achievable with texture edits and parameter tweaks for a convincing result.
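For the skin-tone matching step, a quick offline pass can bring the body albedo's average color close to the face texture before hand-finishing in Photoshop. The Pillow/NumPy sketch below applies a simple per-channel gain; file names are placeholders and the approach is deliberately crude.

```python
import numpy as np
from PIL import Image

face = np.asarray(Image.open("T_Face_Albedo.png").convert("RGB"), dtype=np.float32)
body = np.asarray(Image.open("T_Body_Albedo.png").convert("RGB"), dtype=np.float32)

# Per-channel gain that maps the body's average skin color onto the face's average,
# which is usually enough to hide an obvious tonal jump at the neck seam.
gain = face.reshape(-1, 3).mean(axis=0) / (body.reshape(-1, 3).mean(axis=0) + 1e-6)
matched = np.clip(body * gain, 0.0, 255.0).astype(np.uint8)

Image.fromarray(matched).save("T_Body_Albedo_matched.png")
```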
How can PixelHair be used to add high-fidelity hair to AAA Metahuman digital doubles created with Mesh to Metahuman?
PixelHair enhances MetaHuman digital doubles with realistic, pre-made 3D hairstyles exportable to Unreal Engine. Here’s how you can use PixelHair in your workflow:
- Choose or create a hairstyle
Select a PixelHair style matching your character’s hair, like braids or curls, from marketplaces like BlenderMarket or ArtStation. If no exact match exists, choose a customizable option to adapt to the scan’s hairstyle. These assets offer high-fidelity options beyond MetaHuman defaults. Ensure the style aligns with your actor’s look for likeness.
- Blender grooming and PixelHair setup
PixelHair assets in Blender use a particle hair system with a hair cap mesh as a scalp proxy for strand placement. Open the file to see the groomed hair on a default head, ready for adjustment. The cap’s 18k polygons allow detailed fitting to various heads. This setup simplifies high-quality hair creation.
- Fitting the hair to your MetaHuman head
Import your MetaHuman head into Blender and use a shrinkwrap modifier to conform the PixelHair cap to its scalp shape. Position the cap accurately to match the hairline and head contours with a few clicks. This ensures the hairstyle fits your specific character perfectly. The dense cap adapts well to different sizes.
- Export the hair groom to Unreal
Export the fitted hair as an Alembic groom from Blender, preserving strand details for Unreal import. PixelHair guides typically recommend Alembic for accuracy over FBX options. Use Unreal’s Groom Plugin if converting curves is needed, following export steps. This prepares the hair for engine integration.
- Import as Groom Asset
In Unreal, import the Alembic file to create a Groom Asset with strands and a skeletal binding asset. Apply a provided PixelHair material or use MetaHuman’s default hair material with adjusted colors. Assign the material to the groom for realistic rendering. This step finalizes the hair’s appearance in-engine.
- Attach to MetaHuman
In the MetaHuman Blueprint, add a Groom Component with the PixelHair asset and attach it to the head bone, disabling default hair if present. Adjust the transform if alignment is off, though shrinkwrap usually ensures a good fit. This integrates the custom hair seamlessly. Remove any preset hair from Creator settings.
- Adjust and optimize
Tweak the hair material’s roughness or color to match facial hair or eyebrows for consistency across the character. Set up LODs in Unreal to optimize performance, reducing strand count at distance while keeping quality for close-ups. Check alignment and adjust as needed in-engine. This balances realism and efficiency for AAA use.
PixelHair offers detailed, customizable hair that elevates MetaHuman realism, making it a key tool for matching an actor’s hairstyle accurately.
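The shrinkwrap fit and Alembic export described above condense into a few lines of Blender Python. The object names below are hypothetical stand-ins for the PixelHair cap and the imported MetaHuman head; actual PixelHair files may organize the scene differently.

```python
import bpy

cap = bpy.data.objects["PixelHair_Cap"]    # placeholder: hair-cap scalp proxy
head = bpy.data.objects["MetaHuman_Head"]  # placeholder: imported MetaHuman head

# Conform the cap to the scalp so the groomed strands follow the new head shape.
shrink = cap.modifiers.new(name="FitToScalp", type='SHRINKWRAP')
shrink.target = head
shrink.wrap_method = 'NEAREST_SURFACEPOINT'
shrink.offset = 0.001  # keep the cap just above the scalp surface

bpy.context.view_layer.objects.active = cap
bpy.ops.object.modifier_apply(modifier=shrink.name)

# Export the fitted groom as Alembic for Unreal's Groom plugin.
bpy.ops.object.select_all(action='DESELECT')
cap.select_set(True)
bpy.ops.wm.alembic_export(filepath="//pixelhair_groom.abc", selected=True)
```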

What are common issues when using Mesh to Metahuman for high-end production?
Here are common issues and solutions when using Mesh to MetaHuman for AAA results:
- Tracking/artifact issues around eyes or features
Improper tracking can occur if the scan has closed eyes or overlapping hair, causing artifacts like protruding eyes. Ensure eyes are open with eyeball texture and clean the mesh of extraneous data before conversion. Manually guide trackers in the Identity asset if artifacts persist. This prevents odd distortions in the fitted mesh.
- Identity Solve errors or failures
Conversion may fail due to high polycount or non-manifold geometry, requiring a mesh below 200k vertices and neck-down removal. Log into Quixel Bridge to avoid server issues during the solve process. Decimate and prepare the mesh properly to ensure success. This resolves most error messages encountered.
- Hair not appearing or MetaHuman appears bald
Hair may vanish due to missing LODs or naming bugs, fixable by setting Forced LOD to 0 in the Blueprint or using ASCII names. Choose hair with full LOD support if issues persist with newer grooms. Check rendering distance to ensure visibility. This ensures hair displays consistently in-engine.
- Facial rigging limitations (likeness nuances)
The standardized rig may not capture subtle traits like eyelid folds perfectly, affecting exact likeness. Add custom normal maps or blendshape correctives in ZBrush to refine expressions and profiles. Re-import tweaks via “From Template Mesh” if needed. This enhances precision for high-end needs.
- Asymmetry loss
Micro-asymmetry might reduce in the solve; retain it by avoiding pre-symmetrization of the mesh. Toggle off “Symmetry” in Creator to adjust one side manually if needed. Check results to ensure unique features carry over. This preserves the actor’s distinct facial traits.
- Neck seam or color mismatch
A visible neck seam can appear if head and body tones differ, requiring texture adjustments to blend them. Equalize normal map details at the seam to avoid abrupt changes. Paint the neck area for continuity if necessary. This eliminates noticeable lines for a unified look.
- Uncanny valley or expression issues
Expressions may feel robotic without tuning, as blendshapes are standardized; use MetaHuman Animator for authentic motion. Add animated wrinkles via normal maps to enhance specific expressions like smiles. Polish animation to reduce uncanny effects. This ensures lifelike performance in motion.
Anticipating these challenges with proper preparation and fixes optimizes Mesh to MetaHuman for high-end production.
What studios or AAA projects are using Mesh to Metahuman for digital doubles?
ICVR’s “Lightfall” short film exemplifies Mesh to MetaHuman’s use, crafting a realistic character for cinematic quality, proving its AAA potential. Epic’s tech demos, like scanning Melina Juergens at GDC 2023, suggest adoption by studios like Ninja Theory for projects such as “Hellblade II,” leveraging rapid digital double creation. “The Matrix Awakens” used MetaHumans for crowd NPCs, showing Mesh to MetaHuman’s role in large-scale AAA environments. Big VFX houses like Lucasfilm might employ it for previs or background doubles in virtual production, though principal characters often use custom pipelines.
Smaller studios and indies embrace it for cost-effective character creation, evident in community tutorials and YouTube showcases. Game developers utilize it for prototyping or trailers, benefiting from Unreal’s free licensing. Virtual influencers and avatar projects may scan celebrities for real-time ads. As adoption grows, Epic will likely highlight more high-profile uses, expanding its footprint in AAA and beyond.

Where can I find tutorials or case studies on creating AAA Metahuman digital doubles?
Epic’s MetaHuman Documentation offers a Quick Start guide for Mesh to MetaHuman, covering mesh import and Identity asset setup for beginners and pros alike. Their Community Learning includes the “Lightfall” tutorial by ICVR, detailing a real photogrammetry-to-MetaHuman pipeline. YouTube hosts Epic’s “Using Mesh to MetaHuman in UE5” and creator content from JSFilmz and Pixel Profi, breaking down workflows step-by-step. 80.lv articles, like “Creating a Historically Inspired Young Girl,” dive into advanced techniques for detail retention and artistry.
Unreal Engine forums and Reddit (r/unrealengine, r/meta_humans) provide troubleshooting and user tips on Mesh to MetaHuman challenges. Platforms like GameDev.tv or Gumroad offer courses embedding MetaHuman in character creation. YelzKizi’s PixelHair tutorials enhance hair integration skills. Epic’s blog and Unreal Fest talks, such as “Pushing the MetaHuman Likeness Limits,” round out resources for mastering AAA digital doubles.
FAQ
- What is Mesh to MetaHuman and why is it useful?
Mesh to MetaHuman is a feature in Unreal Engine 5’s MetaHuman plugin that transforms your own 3D head mesh, whether scanned or sculpted, into a fully rigged, photorealistic character automatically. By leveraging Epic’s cloud-based landmark solver, it aligns your mesh to a MetaHuman face template and attaches it to a pre-designed body, saving you hours of manual rigging and shader setup. This makes high-quality digital doubles accessible even if you’re not a rigging expert.
- What quality of input mesh do I need for AAA-level results?
Aim for a clean, neutral-expression head scan or sculpt with open eyes and good mesh detail (up to ~200,000 vertices). Remove artifacts like stray hair strands and unwanted geometry (shoulders, clothing), and fill holes so the solver can accurately detect facial landmarks. Higher-resolution scans capture fine wrinkles and pores, which you can bake into textures later for the best cinematic fidelity.
- Which file formats and topology guidelines should I follow?
Export your head mesh as FBX or OBJ; OBJ is preferred for very dense scans. Ensure it’s a single, watertight mesh of only the head and a bit of neck (no eyeballs, teeth, or shoulders). Keep it under 200k vertices for speed, use a basic albedo texture to aid landmark tracking, and position it at the origin facing forward in your 3D app before import.
- How does the cloud-based conversion process work?
After importing your mesh in Unreal, the plugin uploads it to Epic’s cloud servers, where an automated solver identifies key facial landmarks and matches your scan to the closest face shape in the MetaHuman database. Within minutes, it returns a rigged character asset complete with body, facial bones, blendshapes, and high-quality skin shaders ready for download and use in your project.
- Can I keep my scan’s unique facial details?
Yes. While the solver gives you a perfectly rigged base, you should bake your scan’s high-resolution normal and diffuse maps onto the MetaHuman UVs using Substance Painter or Blender. Then, override the default MetaHuman albedo, normal, and roughness maps in a Material Instance to restore pores, freckles, scars, and other defining traits.
- How do I handle hair and eyelashes after conversion?
MetaHuman provides generic groomed hair and lashes, but for a bespoke look you can use tools like PixelHair or custom Groom assets. Export your groom from your DCC (e.g., Alembic from Blender) and import it with Unreal’s Groom plugin. Attach it to the MetaHuman scalp bone, disable the default hair, and adjust LOD and physics settings for both performance and realism.
- What lighting and rendering settings best showcase a digital double?
Use UE5’s Lumen or Path Tracer with high-resolution textures and maxed-out skin subsurface scattering. Set up a three-point light rig or HDRI environment for realistic shadows and reflections. Tweak the Material Instance’s “Skin Offset” and “SSS Amount” parameters until the skin reads naturally under varied lighting conditions.
- Can I drive my MetaHuman with mocap or facial capture?
Absolutely. For the body, retarget standard mocap FBX clips or stream Live Link data from suits like Xsens into the MetaHuman skeleton. For the face, use Live Link Face on an iPhone or MetaHuman Animator to record ARKit blendshapes, then plug them into the character’s Animation Blueprint for real-time, performance-driven expressions.
- What do I do if my conversion fails or has artifacts?
First, confirm your mesh meets requirements: under 200k verts, a single head-only mesh, open eyes, neutral expression, correct file format. If you still see distortions around the eyes or jaw, clean up non-manifold edges, re-export, and re-submit. You can also manually adjust landmarks in the Identity asset or retry with a slightly simplified mesh.
- Where can I learn more or find step-by-step tutorials?
- Epic’s Official Docs: “Mesh to MetaHuman” workflow guide on docs.unrealengine.com
- Community Learning: ICVR’s “Lightfall” case study on Unreal forums
- YouTube: “Using Mesh to MetaHuman in UE5” by Epic Games and creator channels like JSFilmz
- 80.lv & ArtStation Articles: In-depth breakdowns of custom digital-double pipelines

Sources:
- Epic Games – MetaHuman Plugin and Mesh to MetaHuman announcement (unrealengine.com)
- Epic Developer Community Documentation – MetaHuman from Mesh Workflows and Requirements (dev.epicgames.com)
- Epic FAQ – Platform support and known issues (hair LOD, naming) (dev.epicgames.com)
- ICVR Lightfall Tutorial (Epic Community) – Photogrammetry to MetaHuman case study (forums.unrealengine.com)
- 80.lv Interview – Integrating Mesh to MetaHuman in a character pipeline (80.lv)
- BlenderMarket – PixelHair product description (using with MetaHumans) (blendermarket.com)
- CGChannel – MetaHuman Animator (facial capture quality for AAA) (cgchannel.com)
- JuegosStudio Blog – Overview of Mesh to MetaHuman features (juegostudio.com)
- Reddit and Forum Community Tips – Polycount limits and error fixes (reddit.com)