Converting a real human into a MetaHuman character using photogrammetry is an exciting new workflow made possible by Unreal Engine 5. This guide will walk you through the entire process of real human to MetaHuman photogrammetry, from capturing a 3D scan of a person's face to generating a fully rigged MetaHuman ready for animation. We'll cover the best methods for creating 3D characters, how photogrammetry fits in, the optimal workflow and tools (including Mesh to MetaHuman and PixelHair), as well as tips, common mistakes, and answers to frequently asked questions. Whether you're a beginner or an expert, this comprehensive guide will help you achieve professional results in bringing real people into the digital world.
What are the best ways to make 3D characters for games and animation?
Various methods exist for crafting 3D characters for games, film, and animation, each suited to different needs and resources, balancing control, speed, and realism. Hand modeling and sculpting in tools like Maya, Blender, or ZBrush offer full creative freedom but demand time and skill, requiring manual retopology, texturing, and rigging. Procedural tools like MetaHuman Creator or Reallusion Character Creator enable rapid character generation via sliders and presets, producing realistic results quickly, though limited by the available options. 3D scanning, using photogrammetry or LiDAR, captures a real person's likeness with high accuracy; it needs equipment and post-processing but is ideal for digital doubles. Hybrid approaches combine these, such as scanning a head and adjusting it manually or enhancing a procedural base with sculpting.
Photogrammetry paired with MetaHuman exemplifies a hybrid method, merging scan realism with MetaHuman's rigging and materials for efficient, high-quality results in Unreal Engine 5. This guide focuses on that workflow, turning a real human into a MetaHuman character. Each approach has strengths: hand modeling for unique designs, procedural tools for speed, and scanning for precision. The choice depends on project goals, with scanning and MetaHuman offering a powerful blend for lifelike characters. This versatility makes 3D character creation accessible across industries.

Can I turn a real human into a MetaHuman using photogrammetry?
Yes. With modern tools, you can convert a 3D scan of a real person into a MetaHuman character. Epic Games introduced the Mesh to MetaHuman feature in Unreal Engine 5 specifically to enable this workflow. By scanning a person's face (via photogrammetry or other methods) and importing the mesh into Unreal, you can use the MetaHuman Plugin's Mesh to MetaHuman component to transform that scan into a fully rigged MetaHuman. This MetaHuman can then be animated just like the ones created directly in MetaHuman Creator.
This process became publicly available in 2022 and has opened up new possibilities for creating digital doubles of real individuals. For example, Epic's MetaHuman team demonstrated this by scanning people and converting them to MetaHumans with impressively similar appearance and facial structure.
In short, photogrammetry provides the 3D model, and MetaHuman provides the rig, materials, and animation capabilities to bring that model to life. It's important to note that Mesh to MetaHuman currently focuses on the head and face. The scan of a person's head is fitted to MetaHuman's facial rig, while you choose a preset body type for the character. Hair and other details are handled separately (more on that later). Still, the core answer is yes: you can take a real human's likeness captured via photogrammetry and turn it into a MetaHuman in Unreal Engine 5.
What is the best workflow for converting a human scan into a MetaHuman?
Converting a human scan into a MetaHuman requires a structured process to ensure quality and compatibility. Here's the streamlined workflow:
- Photograph the Person for Photogrammetry
Capture 50–100 high-quality photos of the person's head and shoulders from all angles, using consistent, shadow-free lighting. Instruct them to hold a neutral expression with mouth closed throughout the 360-degree shoot. Use a DSLR or smartphone, taking images at multiple heights for full coverage. This ensures a detailed, accurate base for the 3D reconstruction.
- Create a 3D Mesh via Photogrammetry
Import the photos into software like RealityCapture or Metashape to generate a textured, high-poly mesh of the head. The software aligns the images and constructs a detailed model reflecting facial shape and skin texture. Smartphone photos can work, but higher resolution yields better results. Export the mesh with its albedo texture for further processing.
- Clean and Prepare the Scan Data
Remove background geometry, fill holes (e.g., under the chin), and smooth distortions in tools like Blender or ZBrush. Simplify or erase hair, ensuring a smooth head shape, as MetaHuman adds hair separately. Keep the eyes open and defined in the mesh and texture to avoid tracking issues. Reduce the polycount to a few hundred thousand triangles and export as OBJ with the neutral skin texture (a Blender sketch follows this workflow).
- Import the Mesh into Unreal Engine
Enable the MetaHuman Plugin in Unreal Engine 5 and import the cleaned OBJ mesh and texture. Static meshes are fine, as MetaHuman handles rigging. Ensure the import preserves UVs for texture application. This sets up the mesh for MetaHuman conversion.
- Create a MetaHuman Identity Asset
In Unreal, create a MetaHuman Identity asset and add the imported mesh as a component under "Components from Mesh." This links your scan to the Identity system. Set a neutral pose within the asset for processing. It prepares the scan for facial fitting.
- Align and Track Facial Landmarks
Open the Identity editor and adjust the auto-placed landmark curves (eyes, nose, lips) on the mesh for precision, using unlit mode if the mesh is textured. Spend extra time aligning the eyes and mouth for likeness accuracy. Rotate the lighting to verify feature placement. Accurate landmarks ensure a faithful fit to the MetaHuman template.
- Run Mesh to MetaHuman (Identity Solve)
Click "MetaHuman Identity Solve" to fit the MetaHuman topology to your scan, then submit via "Mesh to MetaHuman." Choose a matching body type during the cloud process, which takes minutes. The service warps a template mesh to your scan's shape. This generates a rigged MetaHuman based on your input.
- Refine in MetaHuman Creator
Open the processed MetaHuman in MetaHuman Creator via Quixel Bridge to customize hair, eyes, and skin settings. Replace the scanned hair with a matching style and tweak facial shapes subtly if needed. Adjust skin tone and roughness to align with the original scan. Save the refined character for export.
- Download the MetaHuman to Unreal Engine
Use Quixel Bridge to import the rigged MetaHuman, complete with meshes, materials, and control rig, into Unreal. The character is now game-ready with your custom face. Verify the import includes all assets like hair and clothing. This finalizes the digital double for use.
- Apply Custom Textures or Details (if needed)
To retain scan details like moles, project the photogrammetry texture onto the MetaHuman face mesh in an external tool. Create a material instance override in Unreal to apply it, or bake normal maps from the scan for added realism. Export both meshes using a console command to align them. This enhances likeness beyond the default settings.
- Animate and Use the MetaHuman
Animate the MetaHuman with MetaHuman Animator or existing animations, leveraging its full rig for face, body, and fingers. Test facial movements with ARKit or the control rig to ensure likeness in motion. The character integrates seamlessly with Unreal's animation tools. This brings the digital double to life.
This workflow transforms a scan into a rigged MetaHuman efficiently. It combines photogrammetry's realism with MetaHuman's animation-ready framework for a lifelike result.
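Much of the cleanup-and-export step above can be scripted. Below is a minimal Blender Python sketch (assuming the scan is the active, selected object and Blender 3.2+ for the OBJ exporter; the output path is a placeholder) that strips loose artifacts, fills holes, and exports an OBJ with its materials:

```python
# Blender Python: quick scan cleanup + OBJ export (run in Blender's scripting tab).
# Assumes the photogrammetry scan is the active, selected object; paths are placeholders.
import bpy

obj = bpy.context.active_object
obj.select_set(True)

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.delete_loose()        # remove floating photogrammetry artifacts
bpy.ops.mesh.fill_holes(sides=0)   # close gaps; sides=0 fills holes of any size
bpy.ops.object.mode_set(mode='OBJECT')

# Export OBJ with UVs/materials so the albedo texture travels with the mesh
# (bpy.ops.wm.obj_export is the Blender 3.2+ exporter).
bpy.ops.wm.obj_export(filepath="/tmp/head_scan.obj",
                      export_selected_objects=True,
                      export_materials=True)
```

Holes in concave areas like nostrils may still need manual sculpting afterward; the script only handles the mechanical part of the cleanup.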

Which photogrammetry software works best for capturing real human faces?
Several photogrammetry tools excel at reconstructing human faces, each with strengths for MetaHuman workflows:
- RealityCapture (Capturing Reality)
Owned by Epic Games, RealityCapture offers fast, accurate processing of numerous images into detailed 3D face models. Its speed and integration with Unreal make it ideal for MetaHuman scans, as shown in Epic's tutorials. Free for hobbyists, it handles smartphone photos effectively for high-fidelity results. This accessibility and quality make it a top choice.
- Agisoft Metashape
Metashape, a commercial tool, captures fine facial details reliably and is often used in professional actor scans. Slightly slower than RealityCapture, it still delivers high-quality meshes with robust alignment tools. It excels with multi-camera rigs for precise reconstructions. Its proven track record suits detailed projects.
- 3DF Zephyr
Available in free and paid versions, Zephyr provides a user-friendly interface for face scans with competitive quality. The free edition limits the photo count but suffices for head scans with careful planning. Community tutorials demonstrate its use with Unreal workflows. It's a practical option for beginners.
- Meshroom (AliceVision)
This open-source tool is free and effective with enough images, though it requires settings tweaks for optimal results. It struggles with hair but produces usable face scans for budget-conscious users. Refinement in Blender may be needed after processing. Meshroom offers a cost-effective entry point.
- Mobile Photogrammetry Apps
Apps like Polycam or RealityScan use phone cameras for convenient scans, uploading to the cloud for model creation. Resolution is lower than with desktop tools, requiring extra cleanup for MetaHuman use. They're handy for quick tests but lack fine detail. These suit casual or equipment-limited users.
RealityCapture stands out for its speed, Epic integration, and free access, making it the preferred choice for MetaHuman face scans. High-quality input photos are key across all tools for best results.
How do I prepare photogrammetry data for use with Mesh to MetaHuman?
Preparing a photogrammetry scan for Mesh to MetaHuman ensures smooth conversion. Here are the steps:
- Export in a Compatible Format
Save the scan as an OBJ file, preferred for high-poly meshes over 200k vertices, to avoid slow FBX imports. Ensure UVs are included for texture mapping. This format optimizes Unreal Engine import speed. It sets the foundation for MetaHuman processing.
- Include a Skin Texture (Albedo)
Export the scan with a neutral skin texture from the photogrammetry software to aid feature tracking. The albedo should show the face under even lighting, with eyes and skin distinct. This helps MetaHuman recognize facial landmarks accurately. A proper texture improves the initial result.
- Eyes Open and Clearly Defined
Ensure the mesh has open eye areas with visible separation, avoiding closed or empty sockets that cause tracking errors. Sculpt eye shapes if needed and texture them with sclera details. Placeholder spheres can guide the trackers if included. Clear eyes are critical for fitting.
- Neutral Expression and Pose
Confirm the scan shows a neutral expression with a closed mouth and a forward-facing pose. Adjust in modeling software if the expression varies, aligning the face to the +X axis. This matches MetaHuman's template for accurate rigging. A neutral base prevents alignment issues.
- Remove Unnecessary Parts
Trim background artifacts, floating bits, and excess body parts, keeping the head and some neck. Smooth or remove hair geometry, as MetaHuman adds it separately. This focuses the mesh on facial data. A clean scan improves processing efficiency.
- Flat Lighting for Untextured Meshes
For untextured scans, ensure the capture used flat, diffuse lighting to help feature detection. Use Unreal's Lit mode to adjust lighting during setup if needed. Textured scans are preferred for better results. This fallback aids tracking without a texture.
- Scale (Optional)
Adjust the mesh so the head is roughly 25 cm tall in Unreal, matching real-world proportions (see the sketch after this list). Correct the import scale if the head appears too small or large. Proper scale aids body integration, though MetaHuman adapts. This ensures proportional consistency.
- Materials for Eyes and Skin (Alternative)
If the scan lacks a texture, assign a white material to the eye areas and a skin-toned one to the face. This differentiates regions for tracking when albedo is unavailable. It's a backup to a real texture. Clear material separation supports the process.
A well-prepared scan with these steps enhances MetaHuman conversion accuracy. It minimizes manual fixes by optimizing the mesh and texture for Unreal's automated fitting.
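Scale and orientation can also be normalized with a short script. A minimal Blender Python sketch, assuming the head scan is the active object and using the ~25 cm head-height guideline from above:

```python
# Blender Python: normalize a head scan to real-world size before export.
# Assumes the scan is the active object; Blender scenes default to meters.
import bpy

obj = bpy.context.active_object
TARGET_HEIGHT_M = 0.25  # ~25 cm head height, per the guideline above

current_height = obj.dimensions.z
if current_height > 0:
    factor = TARGET_HEIGHT_M / current_height
    obj.scale *= factor  # uniform scale on all axes
    # Apply rotation/scale so exporters and Unreal see clean transforms.
    bpy.ops.object.transform_apply(location=False, rotation=True, scale=True)

# Mesh to MetaHuman expects the face looking down +X; rotate first if needed, e.g.:
# obj.rotation_euler[2] = 1.5708  # 90 degrees around Z
print(f"Head height is now {obj.dimensions.z:.3f} m")
```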

What camera setup do I need to scan a real human for MetaHuman conversion?
Capturing a human for MetaHuman via photogrammetry requires a camera setup tailored to detail and consistency. Options range from professional to accessible:
- Multi-Camera Photogrammetry Rig
Use 20–30 synchronized DSLR cameras in a dome for head scans, firing simultaneously for instant capture. This prevents movement errors and is ideal for pros, with 50+ cameras for full-body scans. It ensures sharp, consistent images from all angles. High data volume yields top results.
- Single-Camera Method (Sequential)
With one DSLR or phone, take dozens of photos around the head, keeping the subject still to avoid blur. Use fast shutter speeds and a logical sequence (side to side, high to low). Diffuse lighting prevents shadows, which is critical for reconstruction. It's accessible but demands patience.
- Tripod and Turntable (not common for humans)
While turntables suit objects, for humans, move the camera around a still subject, optionally with a tripod. Hand-holding works if focus stays sharp on the face. This method is slower but viable with care. Stability ensures usable shots.
- Lighting Setup
Employ soft, even lighting with softboxes or reflectors indoors, or overcast conditions outdoors. Avoid harsh shadows that confuse the algorithms. Consistent illumination across angles is essential. Good lighting enhances scan quality.
- Camera Type and Lenses
Use a DSLR with a 35mm–85mm lens (e.g., a 50mm prime) for minimal distortion and sharp detail. Smartphones work with the primary lens; avoid wide-angle warping. High resolution boosts fidelity. Quality optics improve reconstruction.
- Stability and Focus
Set a small aperture (around f/8) for depth of field, keeping ISO low to reduce noise. Use a fast shutter or diffused flash to prevent blur from movement. Focusing on the eyes and face ensures clarity. Stability minimizes errors.
- Markers or Dots (optional)
Add randomly patterned dots on uniform skin to aid software alignment, removing them from the texture later. Natural features like pores usually suffice for faces. This optional step enhances tracking if needed. It's rarely necessary with good photos.
- Subject Comfort
Use a chair or headrest to keep the subject still during multi-minute shoots, allowing blinks between shots. Comfort reduces movement, maintaining pose consistency. Clear communication helps them hold steady. This supports a clean capture.
Multi-camera rigs offer the best results, but a single camera with careful execution works for MetaHuman scans. Image quality and subject stillness are key across setups.
Can I use a smartphone to create a 3D scan of a human for MetaHuman?
Yes, smartphones can create 3D scans for MetaHuman, leveraging their cameras and apps, though with limitations. High-end phones (e.g., iPhone, Samsung) offer viable options for photogrammetry:
- Photogrammetry with Phone Photos
Take dozens of overlapping photos around the head with the primary camera, using manual settings for low ISO and steady shots. Good lighting and a still subject are crucial to avoid noise or blur. Process the images in RealityCapture or Meshroom on a PC for a detailed mesh. This budget method yields decent results with effort.
- Smartphone 3D Scanning Apps
Apps like RealityScan or Polycam generate models from photos or LiDAR (on iPhone Pro), uploading to the cloud. Photogrammetry modes offer better detail than low-resolution LiDAR scans of head shapes. Cleanup is needed due to the lower fidelity. They're convenient for quick scans.
- Dedicated Depth Sensors on Phones
iPhone's TrueDepth (front) or LiDAR (rear) sensors capture depth, with apps like Bellus3D using TrueDepth for faces. LiDAR provides rough shapes but misses fine details like wrinkles. These scans need refinement for MetaHuman use. Depth data aids basic modeling.
- Quality Considerations
Phone scans lack the fine detail (e.g., pores) of DSLR captures, requiring manual touch-ups for lumps or texture. They excel at overall shape but may need enhanced normal maps. Results are usable with extra work. Quality trades off against accessibility.
- Workflow for Using a Phone Scan
Export the mesh as OBJ/FBX, clean it up (e.g., sculpt open eyes), and import it into Unreal for MetaHuman conversion. Use the scan texture as a reference or rely on MetaHuman's defaults. Refinement ensures compatibility. This integrates phone data into the pipeline.
- Stability
A smartphone's light weight makes it easy to move around the subject; shooting in bright light with burst mode or video helps prevent blur. Steady hands or a tripod enhance precision. Rapid capture reduces errors from subject movement. Stability boosts scan reliability.
Smartphones make scanning accessible, producing convincing MetaHuman results with cleanup. They're less precise than pro setups but effective for hobbyists.

How do I clean and retopologize a human scan for Unreal Engine?
Post-photogrammetry scans need cleaning and optimization for Unreal, though MetaHuman simplifies retopology:
- Cleaning the Scan
Remove floating artifacts and fill holes (e.g., nostrils, ears) in Blender or ZBrush for a watertight mesh. Smooth bumpy surfaces and sculpt distorted features like ears if they are incomplete. Trim to the head and neck, cleaning the texture for neutral lighting without shadows. This ensures a clean, accurate base.
- Retopologizing
MetaHuman auto-fits its own topology, so manual retopology isn't needed; simply decimate dense scans (e.g., 10M down to 0.5M triangles) for easier handling, as in the sketch below. For non-MetaHuman use, tools like ZRemesher create animation-ready quads. R3DS Wrap can fit a basemesh if custom rigging is desired. Focus on a reasonable polycount for MetaHuman.
Cleaning removes errors, while MetaHuman handles topology, streamlining the process. A prepared scan ensures high-fidelity conversion with minimal manual effort.
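The decimation pass mentioned above is a one-modifier job. A minimal Blender Python sketch, assuming the dense scan is the active object; the 500k-triangle target mirrors the example figures and is not a hard requirement:

```python
# Blender Python: decimate a dense photogrammetry scan for Mesh to MetaHuman.
# Assumes the scan is the active object; the triangle target is illustrative.
import bpy

obj = bpy.context.active_object
TARGET_TRIS = 500_000  # e.g., 10M -> 0.5M triangles

# Count triangles: an n-gon contributes (n - 2) triangles.
tri_count = sum(len(poly.vertices) - 2 for poly in obj.data.polygons)

if tri_count > TARGET_TRIS:
    mod = obj.modifiers.new(name="ScanDecimate", type='DECIMATE')
    mod.ratio = TARGET_TRIS / tri_count   # collapse ratio preserves overall shape
    bpy.ops.object.modifier_apply(modifier=mod.name)

print(f"Triangles after decimation: "
      f"{sum(len(p.vertices) - 2 for p in obj.data.polygons)}")
```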
How does the Mesh to MetaHuman feature convert 3D scans into rigged characters?
Mesh to MetaHuman transforms a static scan into a rigged character efficiently. Here's the process:
- Landmark Detection and Fitting
The plugin auto-detects facial landmarks (eyes, nose, mouth) on the scan, which you can adjust manually for precision. These align the scan to a standard template. Accurate placement ensures shape fidelity. This maps the scan's features for fitting.
- MetaHuman Topology Template
A standardized MetaHuman face mesh warps to match the scan's shape, guided by the landmarks. Its animation-ready topology integrates with the rig. This creates a conformed mesh with your scan's form. It automates topology creation.
- Combining with a Body
Select a preset body type, and the system attaches the fitted head, smoothing the neck seam. This forms a full character mesh. The body comes from MetaHuman's library options. It completes the digital human.
- Upload to Cloud and Database Matching
The data uploads to the cloud, where the scan is matched against a database of MetaHumans for rig generation. It blends your mesh with similar presets. Processing takes minutes. This finalizes the character asset.
- Rigging and Skin Weighting
The output includes a fully rigged face (150+ controls) and body, weighted automatically. No manual binding is needed. It inherits MetaHuman's skeletal setup. This enables immediate animation.
- Transfer of Surface Details (Deltas)
Unique scan details (e.g., asymmetry) transfer to the rig as shape offsets. These preserve individuality in motion. It adds personal traits atop the base. This enhances likeness accuracy.
- Textures and Materials
The scan's texture applies initially, but hair and skin use MetaHuman shaders, which may require reauthoring. Default materials enhance realism. Customization occurs later. This sets the visual foundation.
- MetaHuman Creator Integration
After cloud processing, refine the character in MetaHuman Creator with its editing tools. It joins the MetaHuman database for tweaking. Adjustments perfect the look. This bridges scan to final form.
- Downloadable Asset
Download via Quixel Bridge as a complete Unreal asset with rig, textures, and grooms. It's ready for use like any MetaHuman. All components are included. This delivers the final character.
Mesh to MetaHuman automates rigging and topology fitting swiftly. It turns a scan into an animatable digital human in minutes, leveraging cloud processing.

What are the limitations of using photogrammetry for MetaHuman generation?
Photogrammetry with MetaHuman has notable constraints despite its strengths. Here are the key limitations:
- Hair is Not Captured Well
Photogrammetry produces noisy hair meshes, unusable for MetaHuman, so hair requires separate groom assets. Use a cap during scanning and recreate the hair later. Matching exact hairstyles depends on artistic skill, which limits direct hair replication. However, PixelHair can serve as a great fix for this.
- Facial Expression Must Be Neutral
Scans need a neutral face; non-neutral expressions misalign the rig and lose dynamic wrinkles. Unique expression lines require manual sculpting. Only the static shape transfers. This restricts expression capture.
- Texture Lighting and Color Differences
Scan textures with baked-in lighting clash with MetaHuman's flat-lit shaders and often need rework. MetaHuman's style may override scan details. Texture calibration is necessary. This affects seamless texture use.
- Resolution and Detail Limits
Medium-frequency shape transfers, but fine details like pores need custom normal maps. MetaHuman's generic details replace scan specifics. Micro-detail individuality may fade. This caps automatic detail retention.
- Must Fit MetaHuman Topology
Unusual features may be adjusted to fit MetaHuman's rig, slightly altering the likeness. It approximates within human norms. Exact replication isn't guaranteed. This can smooth out unique traits.
- No Full-Body Scanning (Yet)
Only heads convert; body scans are replaced by preset options, limiting exact shape matches. Extreme builds need approximation. Full-body data goes unused. This restricts body customization.
- Clothing and Accessories Not Carried Over
Scanned clothing or accessories are ignored and must be recreated separately. Only the face mesh applies. Costumes don't transfer. This demands extra modeling.
- Requires Good Input Quality
Poor scans (blurry, misaligned) yield flawed MetaHumans, so high-quality photos are essential. Errors like doubled features can occur. Success hinges on scan precision. This raises the input bar.
- Turnaround Time and Cost
Photogrammetry setup and processing take time and may need costly tools for many scans. It's effort-intensive for large casts. Efficiency suits key characters. This limits scalability.
Photogrammetry offers a strong likeness base, but manual work refines hair, textures, and details. It's a starting point, not a complete solution.
How do I match scanned facial details with MetaHuman Creator customization tools?
After Mesh to MetaHuman, refine the character in MetaHuman Creator to match the scan closely. Here's how:
- Using Sculpt, Move, and Blend Modes
Enable editing and use Sculpt or Move to subtly tweak features like jaw sharpness or eye corners. Blend mode borrows traits from presets if needed. Adjustments refine the scan's shape. This corrects minor solve errors.
- Reference the Original Scan or Photos
Compare the MetaHuman to the scan photos side by side to spot differences like nose width. Adjust sliders or sculpt to match the reference accurately. This ensures visual fidelity. Reference keeps tweaks on target.
- Adjusting Skin Details
Add freckles or wrinkles with presets and sliders if the scan texture is lost. Age controls enhance depth for older faces. Unique moles may need texture edits later. This mimics the scan's skin traits.
- Symmetry vs. Asymmetry
Preserve the scan's asymmetry despite the symmetric tools, using Blend for slight offsets. Adjust in Unreal if critical asymmetry fades. Most asymmetry carries over. This retains natural quirks.
- Matching Eyes
Set the iris color with the picker, matching the person's eyes from photos. Adjust occlusion settings in Unreal for realism. Choose similar eyebrow shapes. This captures the impression the eyes make.
- Mouth and Teeth
Adjust lip color; teeth use defaults with minor tweaks, not scan specifics. Animation refines smile likeness later. Customization is limited. This approximates oral details.
- Ears and Other Details
Match ear shape with presets or sculpting, and set the skin tone precisely. Small ear tweaks affect profile accuracy. The scan guides these choices. This finalizes feature alignment.
- Clothing and Context
Dress the MetaHuman similarly to the person for recognition, though this isn't facial likeness. Clothing options enhance the overall resemblance. It's a contextual boost. This aids visual identity.
- Don't Overdo It
Make small tweaks to avoid the uncanny valley, checking against the reference often. Over-adjustment risks losing the likeness. Subtlety preserves accuracy. This balances refinement.
Creator tools nudge the MetaHuman to match the scan's details precisely. Careful adjustments ensure a specific, realistic resemblance.

Can I scan a full body and convert it to a complete MetaHuman character?
Currently, you cannot directly scan a full body and convert it into a complete MetaHuman character. The Mesh to MetaHuman feature and MetaHuman Creator focus primarily on the head and face, using a head scan to produce a rigged MetaHuman head attached to a preset body. Full-body scans are not utilized in this process, meaning the body shape and proportions are not derived from your scan. Instead, you must select a pre-made body type that approximates the person’s physique.
While MetaHuman Creator allows some customization of body build (e.g., torso length, arm bulk), it operates within a limited range, making it challenging to capture unique or extreme body shapes. For clothing and body texture, MetaHuman provides default options, but scanned outfits or skin details like tattoos must be manually recreated or applied post-conversion. Advanced users can use full-body scan data as a reference to adjust the MetaHuman body or create custom morph targets, but this requires additional rigging and modeling outside the standard workflow.
Epic may expand MetaHuman to support body scanning in the future, but as of early 2025, it remains head-focused. For now, the process involves scanning the head for the face, selecting a preset body, and manually tweaking it to resemble the person's physique. Full-body scans can still be useful for reference or custom modeling, but they do not integrate directly into MetaHuman. This limitation means that while the face can achieve high likeness, the body may not perfectly match the real person, especially for distinctive builds. Nonetheless, for many applications, capturing the face accurately is sufficient, as body differences are often less noticeable once the character is clothed.
How do I fix mesh errors or distortion in photogrammetry scans before conversion?
Photogrammetry scans can have various errors or distortions that need fixing to ensure a high-quality MetaHuman result. Here's how to address common issues:
- Geometry Distortions
Use sculpting tools (e.g., Blender, ZBrush) to reshape misshapen areas like a lumpy nose or uneven face sides. Smooth or adjust vertices to match the real face, using symmetry tools carefully to avoid losing natural asymmetry. Fix only clear scan errors, preserving unique features. This ensures the mesh accurately represents the person.
- Layered or Duplicated Surfaces
Delete "ghost" surfaces or overlapping geometry caused by subject movement during scanning. Fill any resulting gaps or smooth the area to maintain a clean mesh. This prevents confusion in the MetaHuman fitting process. It ensures only accurate geometry is used.
- Spike Artifacts
Remove spiky protrusions or elongated triangles, common at edges or background merges, by deleting or flattening vertices. These artifacts can mislead the MetaHuman solver. Sculpting tools help smooth these areas effectively. This avoids fitting errors.
- Non-manifold Elements
Ensure the mesh is manifold by removing non-manifold edges or vertices with cleanup tools (see the sketch after this list). Non-manifold geometry can cause processing issues. Most 3D software offers automatic fixes. This maintains mesh integrity.
- Scale and Alignment
Check proportions against reference photos; adjust the scale if the head appears too tall or wide. Ensure the mesh faces forward and is roughly 25 cm tall in Unreal. Correct scaling prevents tracking failures. This aligns the mesh for fitting.
- Ear and Eye Regions
Sculpt messy ear geometry or replace it with a pre-made ear mesh if necessary. For eyes, ensure a clear concave cavity and sculpt eyelids if needed. Insert placeholder eyeballs to define the shapes. This aids accurate landmark detection.
- Mouth Interior
Close any gaps in the lips to create a solid, closed mouth shape. Bridge the gap or fill holes to match MetaHuman's neutral template. This prevents rigging errors. A closed mouth is essential.
- Neck Seams
Smooth or cap the neck edge for a clean cutoff, aiding alignment in the identity pose. A neat neck ensures seamless body attachment. This improves the final mesh integration. It's a minor but helpful step.
Fixing these errors ensures the scan is clean and neutral, enhancing MetaHuman conversion quality. A well-prepared mesh reduces the need for post-conversion fixes.
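For the non-manifold cleanup above, Blender can locate and repair the problem geometry in a few operator calls. A minimal sketch, assuming the scan is the active object:

```python
# Blender Python: find and repair non-manifold geometry on a scan mesh.
# Assumes the scan is the active object.
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='VERT')
bpy.ops.mesh.select_all(action='DESELECT')

# Select non-manifold edges/vertices (open borders, wire edges, etc.).
bpy.ops.mesh.select_non_manifold()

# Merge near-duplicate vertices, then try to close any remaining holes.
bpy.ops.mesh.remove_doubles(threshold=0.0001)
bpy.ops.mesh.fill_holes(sides=0)

bpy.ops.object.mode_set(mode='OBJECT')
```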

What is the best way to handle textures and materials from photogrammetry scans?
Integrating photogrammetry textures into MetaHuman requires careful handling to maintain realism. Here's how to manage textures and materials effectively:
- Diffuse/Albedo Map Integration
Initially, the MetaHuman uses the scan's diffuse texture, but editing in Creator replaces it with default skin materials. To retain the scan's detail, apply it manually in Unreal by overriding the face material (e.g., M_Face). Adjust the texture to fit MetaHuman's UV layout via projection or baking. This preserves unique skin features.
- Normal Maps and Bump Detail
Bake a normal map from the high-poly scan to capture fine details like wrinkles or scars (see the sketch after this list). Export the conformed MetaHuman mesh and the original scan, then bake in Blender or similar tools. Apply this normal map to the MetaHuman face material, blending it with the existing maps. This enhances surface realism.
- Roughness/Specular
MetaHuman's default roughness works for most skin, but you can paint custom maps for unique areas (e.g., an oily T-zone). Use the scan's visual cues to guide adjustments. This step is optional but refines skin variation. It adds subtle realism.
- Translucency/Subsurface
MetaHuman's subsurface scattering handles skin translucency well, especially for ears. Ensure the diffuse texture is neutral to avoid clashing with the shader. No major adjustments are needed here. This maintains natural lighting effects.
- Using the Scan Texture Partially
Blend the scan's texture with MetaHuman's base for unique details like freckles while keeping overall consistency. Use masks or Photoshop to overlay specific features. This hybrid approach balances accuracy and polish. It's effective for key traits.
- Materials for Eyes, Teeth, Hair
Rely on MetaHuman's realistic eye and teeth materials, adjusting the eye color to match the person. Clean the diffuse texture of hair artifacts around the hairline. Use MetaHuman's hair grooms for realism. This ensures cohesive visuals.
- Resolution and Quality
Downscale high-res scan textures (e.g., 8K) to 4K for performance, matching MetaHuman's texture sizes. Use mipmaps and check UV utilization. This optimizes for real-time use. It balances detail and efficiency.
- Cavity/AO Maps
Use the scan's cavity or AO map to add depth in creases, blending it with MetaHuman's AO. This enhances shadows in nostrils or eye sockets. It's an optional refinement. This boosts realism subtly.
- Testing and Tweaking
Test the textures under various lighting in Unreal, adjusting brightness or saturation if needed. Scan textures may appear too dark or red; brighten the diffuse to match MetaHuman's shader. This ensures a consistent appearance. It finalizes the look.
Handling textures involves blending scan details with MetaHuman's materials for optimal realism. This process preserves the person's unique skin traits while leveraging MetaHuman's advanced shaders.
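The normal-map bake described in the list can be scripted with Blender's selected-to-active bake. A sketch under stated assumptions: the object names HighPolyScan and MetaHumanFace are hypothetical, the meshes are already aligned, the low-poly mesh has a material and UVs, and the 4K resolution is illustrative:

```python
# Blender Python: bake scan detail into a normal map for the MetaHuman face mesh.
# Assumes "HighPolyScan" and "MetaHumanFace" exist, are aligned, and the
# low-poly mesh already has a node-based material and UVs.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'  # baking requires Cycles

high = bpy.data.objects["HighPolyScan"]
low = bpy.data.objects["MetaHumanFace"]

# Create the bake target image and an image node in the low-poly material.
img = bpy.data.images.new("FaceNormalBake", width=4096, height=4096)
mat = low.active_material
mat.use_nodes = True
node = mat.node_tree.nodes.new("ShaderNodeTexImage")
node.image = img
mat.node_tree.nodes.active = node  # the bake writes to the active image node

# Select high poly, make the low poly active, and bake selected-to-active.
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True,
                    cage_extrusion=0.01)  # small offset to catch surface detail

img.filepath_raw = "/tmp/face_normal.png"
img.file_format = 'PNG'
img.save()
```

The resulting PNG can then be blended into the MetaHuman face material in Unreal, as described above.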
How accurate is facial animation after converting a real human to MetaHuman?
Facial animation on a MetaHuman converted from a real human scan is highly accurate, leveraging MetaHuman's production-quality rig. The rig supports a wide range of expressions, from broad movements to subtle nuances, ensuring lifelike animation. Since the scan's face is fitted to this rig, it inherits all of its capabilities, with around 150 control parameters simulating muscle movements.
Animation accuracy depends on the method used: hand-keyframed animation relies on animator skill, while performance capture via MetaHuman Animator or ARKit achieves near-realistic fidelity by mapping the real person's expressions directly. The rig preserves the person's likeness in motion, though extreme or unique expressions may not be perfectly replicated. Testing with known expressions helps identify and correct any discrepancies, ensuring the MetaHuman's animation closely matches the real person.

Can I combine photogrammetry scans with MetaHuman Animator for realistic performance?
Yes, combining a photogrammetry-based MetaHuman with MetaHuman Animator creates a highly realistic digital double, capturing both appearance and performance. MetaHuman Animator uses machine learning to apply facial performances from video (e.g., an iPhone or stereo camera) to any MetaHuman, including those from scans. Record the real person's performance with a calibration pose, then import the footage into Unreal to solve and apply the animation. This captures subtle expressions, eye movements, and lip sync, ensuring the MetaHuman moves authentically. The process is largely automated, with calibration personalizing the rig to the person's face. For full performances, combine the facial animation with body motion capture. This synergy of photogrammetry and facial capture produces a lifelike digital human efficiently.
What's the difference between using photogrammetry and MetaHuman Creator directly?
Using photogrammetry with Mesh to MetaHuman captures a real person's exact geometry and skin details, producing a precise likeness, especially for unique features. It requires technical effort in scanning and processing but offers high accuracy. MetaHuman Creator, by contrast, relies on blending preset faces, which may not perfectly match specific traits, though it's faster and requires no special equipment. Photogrammetry excels for digital doubles needing an exact likeness, while Creator suits quick, approximate characters. Both methods yield animation-ready MetaHumans, but photogrammetry demands more upfront work for superior fidelity. Creator's artistic approach can approximate many faces but may struggle with extremes or asymmetry.
How do I simulate realistic clothing and hair after converting a human scan to MetaHuman?
Once you have your MetaHuman face ready, you'll want the character to have realistic hair and clothing to complete the look. Here's how to handle each:
- Clothing
Use MetaHuman's default outfits if they match the person's style, or create custom clothing in Marvelous Designer or Blender for accuracy. Rig custom clothes to the MetaHuman skeleton and apply Chaos Cloth physics for realistic movement. Add textures and materials (e.g., PBR) to match the person's outfit. Test with animations to fix clipping or deformation issues.
- Hair
Select a MetaHuman hair groom that matches the person's style or import custom grooms from tools like Blender. For unique hairstyles, use PixelHair or create particle hair in Blender, exporting it as Alembic for Unreal (see the sketch after this list). Attach the groom to the head and set up physics for natural motion. Adjust materials and colors to match the person's hair.
Simulating cloth and hair physics adds lifelike movement, enhancing realism. Custom assets ensure the MetaHuman reflects the person's appearance accurately.
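The Alembic groom export mentioned in the hair item uses Blender's standard exporter. A minimal sketch, assuming the object carrying the hair particle system is selected and the output path is a placeholder:

```python
# Blender Python: export particle hair as an Alembic (.abc) groom for Unreal.
# Assumes the object carrying the hair particle system is selected.
import bpy

bpy.ops.wm.alembic_export(
    filepath="/tmp/character_hair.abc",
    selected=True,              # export only the selected hair object
    export_hair=True,           # include particle-hair curves
    visible_objects_only=True,
)
# In Unreal, enable the Groom plugins, import the .abc as a Groom Asset,
# then attach it to the MetaHuman's head in the character blueprint.
```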

How can PixelHair be used to add custom hair to MetaHumans created from real human photogrammetry scans?
PixelHair provides pre-made, realistic 3D hairstyles that can be used with MetaHumans. Here's how to use it:
- Choose a Hairstyle
Select a PixelHair style that closely resembles the real person's hair, such as braids or curls. These assets offer diverse, high-quality options beyond MetaHuman's library. They include a hair cap and textures for easy integration. This saves time on custom grooming.
- Fit to Your MetaHuman Head
Import the MetaHuman head into Blender and shrinkwrap PixelHair's scalp cap to it for a perfect fit (see the sketch below). Adjust strands or cards to align with the hairline and partings. This ensures the hair sits naturally on the character. It's a quick, three-click process.
- Export to Unreal
Export the hair as an Alembic groom for strand-based hair or as FBX for mesh hair cards. Import it into Unreal as a Groom Asset or skeletal mesh. Attach it to the MetaHuman's head bone in the blueprint. This integrates the hair seamlessly.
- Apply Materials and Physics
Use PixelHair's provided materials and adjust the colors to match the person's hair. Set up physics for strand or cloth simulation if needed. This adds realism to hair movement. It completes the custom look.
PixelHair simplifies adding unique, realistic hair to scanned MetaHumans, enhancing likeness efficiently.
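The shrinkwrap fit is a single modifier in Blender. A minimal sketch, with PixelHairCap and MetaHumanHead as hypothetical object names standing in for your imported assets:

```python
# Blender Python: conform a hair cap to a MetaHuman head with Shrinkwrap.
# Object names "PixelHairCap" and "MetaHumanHead" are placeholders for your assets.
import bpy

cap = bpy.data.objects["PixelHairCap"]
head = bpy.data.objects["MetaHumanHead"]

mod = cap.modifiers.new(name="FitToHead", type='SHRINKWRAP')
mod.target = head
mod.wrap_method = 'NEAREST_SURFACEPOINT'
mod.offset = 0.001  # keep the cap a hair's breadth above the scalp

# Apply the modifier so the fitted shape is baked into the mesh before export.
bpy.context.view_layer.objects.active = cap
bpy.ops.object.modifier_apply(modifier=mod.name)
```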
What real-time performance options are available for photogrammetry-based MetaHumans?
Photogrammetry-based MetaHumans can be optimized for real-time performance using several Unreal Engine features:
- LOD (Level of Detail) System
MetaHumans ship with multiple LODs, reducing detail with distance to save resources. Adjust LOD distances via the LODSync component to balance quality and performance. This ensures high detail up close and efficiency from afar. It's essential for real-time applications.
- Optimized vs. Cinematic MetaHumans
Use Optimized MetaHumans for lower memory use and faster rendering, with compressed textures and simplified materials. They maintain visual fidelity while reducing load times and resource use. This is ideal for games or VR. Cinematic versions suit close-ups or high-end PCs.
- Scalability Settings
Lower texture quality, shadow settings, or hair complexity on lower-end hardware (see the sketch after this list). Disable strand hair shadows or use simplified materials without subsurface scattering. This improves frame rates on weaker devices. It's crucial for broad compatibility.
- Hair and Material Simplifications
Use hair-card LODs or static hair for performance, along with simplified skin shaders. Reduce animation complexity at lower LODs to save CPU. These tweaks maintain acceptable visuals. They're key for real-time constraints.
These options ensure MetaHumans run efficiently in real-time, from high-end PCs to mobile devices.
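Scalability tweaks like those above map to standard console variables. A sketch using the Unreal Editor's Python scripting plugin; the chosen cvars and values are illustrative starting points to profile against, not fixed recommendations:

```python
# Unreal Editor Python: apply illustrative scalability tweaks for MetaHumans.
# Requires the Python Editor Script Plugin (and Editor Scripting Utilities).
import unreal

world = unreal.EditorLevelLibrary.get_editor_world()

for cmd in [
    "sg.TextureQuality 2",     # step texture quality down from Epic
    "sg.ShadowQuality 2",      # cheaper shadows
    "r.HairStrands.Enable 0",  # fall back from strand hair to cards/LODs
]:
    unreal.SystemLibrary.execute_console_command(world, cmd)
```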

Where can I find tutorials or courses on photogrammetry-to-MetaHuman workflows?
If you're looking to learn more or see step-by-step examples, there are several excellent tutorials and resources available, both official and community-made. Here are some places to find guides and courses on the photogrammetry-to-MetaHuman pipeline:
- Official Unreal Engine Documentation & Tutorials
Epic Developer Community tutorials offer write-ups like "Photogrammetry to Unreal Engine 5 With Epic's Mesh to MetaHuman Tool." The MetaHuman documentation has a Quick Start guide for Mesh to MetaHuman. Epic's YouTube channel features demos and how-tos. These provide foundational knowledge.
- Community Tutorials & Articles
80.lv articles, such as "Tutorial: Scanning Yourself for Mesh to MetaHuman," offer practical tips. 3D Scan Store's blog details advanced techniques like texture replacement. YouTube creators like Small Robot Studio provide step-by-step videos. Forums and Reddit threads share user experiences.
- Courses and Training Platforms
Look for courses on Udemy or ArtStation Learning focusing on MetaHuman and scanning. CGMA or CGWorkshops may offer character-creation classes that cover MetaHuman. Epic's MetaHuman Hub aggregates resources and user stories. These expand your learning options.
These resources cover the entire pipeline, from scanning to refining the MetaHuman, ensuring comprehensive understanding.
What are the most common mistakes when turning a real human into a MetaHuman using photogrammetry?
When undertaking this workflow, people often run into similar pitfalls. Here are common mistakes and how to avoid them:
- Scanning with a Non-Neutral Expression
Capturing a smile or open mouth misaligns the rig; ensure a neutral, closed-mouth pose. This prevents animation errors. It's a critical capture rule. Always check the expression.
- Trying to Scan Hair
Hair scans poorly, causing mesh issues; tie it back or use a cap. Add hair digitally later. This avoids cleanup headaches. Focus on the face.
- Inadequate Photo Coverage
Insufficient angles lead to holes or inaccuracies; take 50–100 photos from all views. Cover the ears and under-chin areas thoroughly. This ensures a complete mesh. It's essential for quality.
- Poor Image Quality
Blurry or inconsistently lit photos degrade the scan; use sharp, evenly lit images. Review the photos for issues before processing (see the sketch after this list). This prevents reconstruction errors. Quality input is key.
- Scale or Alignment Errors
Incorrect scale or orientation confuses tracking; set a real-world size and forward-facing alignment. Apply transforms before import. This aids accurate fitting. It's a simple but vital step.
- Skipping Mesh Cleanup
Uncleaned scans with artifacts or bumps mislead the solver; remove floating bits and smooth out noise. Fill holes and sculpt missing parts. This ensures a reliable mesh. Cleanup is non-negotiable.
- Ignoring Eye Requirements
Closed or poorly defined eyes cause fitting failures; sculpt open eyes with clear sockets. Insert placeholders if needed. This is crucial for tracking. Eyes must be visible.
- Over-Reliance on Auto Tracking
Auto-placed landmarks often need manual adjustment; inspect and tweak them for accuracy. This improves the solve quality. It's a hands-on step. Precision here pays off.
- Not Reauthoring Textures
Using raw scan textures without adjustment can look off; edit them for neutral lighting or use MetaHuman's materials. Apply custom textures post-conversion. This maintains skin realism. It's an artistic touch.
- Scanned Mesh Too High-Poly
Importing overly dense meshes slows or crashes Unreal; decimate to 100–200k triangles and use OBJ. This optimizes handling. It's a technical necessity. Plan for the import.
- No Backup of the Original Scan Detail
Over-cleaning removes real details; keep the high-res scan for reference or baking. This preserves subtle features. It's a safety net. A backup ensures flexibility.
- Ignoring Post-Conversion Tweaks
Skipping Creator adjustments leaves the likeness imperfect; refine features like the jaw or nose. Small tweaks enhance accuracy. This finalizes the character. It's worth the time.
- Expecting a Perfect One-Click Result
Assuming automatic perfection leads to disappointment; expect to refine hair, textures, and details. The process requires iteration. This manages expectations. Polish is essential.
- Not Planning for Hair/Accessories
Scanning with glasses or thick beards causes mesh issues; remove items that don't scan well. Add them digitally later. This prevents cleanup challenges. Plan the capture carefully.
- Lighting/Environment Mistakes
Inconsistent backgrounds or moving lights distort the mesh; keep the environment stable. Use fill lights to avoid shadows. This ensures accurate reconstruction. Consistency is crucial.
Avoiding these mistakes (ensure a neutral expression, avoid scanning hair, capture enough quality images, clean the mesh, and refine post-conversion) leads to a successful MetaHuman creation. Each stage requires attention to detail for optimal results.
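The photo review advised under "Poor Image Quality" can be partly automated with a sharpness check. A sketch using OpenCV's variance-of-Laplacian blur measure; the folder path and threshold are assumptions to tune against known-sharp shots from your camera:

```python
# Flag likely-blurry capture photos before photogrammetry (OpenCV).
# The threshold is illustrative; tune it against known-sharp shots from your camera.
import glob
import cv2

BLUR_THRESHOLD = 100.0  # variance of Laplacian below this suggests blur

for path in sorted(glob.glob("capture_session/*.jpg")):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue  # skip unreadable files
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"REVIEW {path}: sharpness {sharpness:.1f}")
```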

FAQ
- Do I need expensive equipment to create a MetaHuman from photogrammetry?
You don't need top-end gear: a mid-range DSLR or even a modern smartphone plus plenty of sharp, well-lit shots can suffice. Free or affordable photogrammetry tools (e.g., RealityCapture, Meshroom) can process them. High-end cameras or rigs boost fidelity but aren't mandatory. Following best practices (lighting, overlap) is key to good results.
- How many photos are needed to scan a human head accurately?
Aim for 50–150 photos, capturing a full circle at eye level, above, and below. Ensure each facial area appears in multiple overlapping shots. Avoid blurry or redundant angles; quality and coverage matter more than sheer number. It's better to have extra varied images than too few.
- Can I use a video instead of photos for photogrammetry?
You can grab stills from a 4K video, but beware of motion blur and compression. Record slowly in good light to maximize frame sharpness. Use tools like FFmpeg to extract clear frames for processing (see the sketch after this FAQ). Expect slightly less detail than dedicated still-photo captures.
- My photogrammetry scan has holes (e.g., top of the head or under the chin). Will Mesh to MetaHuman still work?
Small gaps can sometimes be tolerated but may cause distortion in the solver. It's best to patch holes (e.g., in ZBrush or Blender) before conversion. If you must proceed, inspect the conformed mesh for warped features. Re-fill and re-run the solve if artifacts appear.
- The generated MetaHuman head doesn't perfectly match the scan (some features look slightly different). Why, and what can I do?
The tool snaps your scan to its nearest predefined head shapes, so tiny mismatches can occur. Use Creator's sculpt sliders to refine features (nose width, jaw shape, etc.). Double-check the landmark placements to improve solve accuracy. For extreme precision, export the conformed mesh, adjust it externally, then re-import.
- Will the MetaHuman have the same eye color and other traits as the real person?
Eye color must be chosen manually in Creator since scans omit iris data. The skin texture derives from your scan but uses MetaHuman's material system; tweak tone and roughness. Facial hair won't auto-transfer; you'll need to select a beard asset or add stubble textures. Adjust colors and texture parameters post-import to match the real person.
- Is it possible to create a MetaHuman of a famous person from just a few photos (no full photogrammetry set)?
Full photogrammetry needs roughly 50+ images; a few photos won't suffice. You can project front/side shots onto a base mesh or use AI-assisted head generators. FaceBuilder or manual sculpting in Creator may approximate a celebrity. For best fidelity, 360° captures are essential; otherwise, rely on manual tweaking.
- The teeth of my MetaHuman don't look like the person's (e.g., gap teeth or a unique dental shape). Can photogrammetry capture teeth?
Photogrammetry in a closed-mouth pose won't capture teeth geometry. Open-mouth scans are possible but challenged by gloss and alignment issues. MetaHumans default to a standard dental mesh; unique gaps or shapes won't carry over. To mimic distinct teeth, you must remodel the mesh or edit the textures manually.
- After converting, my MetaHuman's facial animation seems a bit off or less expressive. Is this a mistake I made?
A non-neutral scan or misaligned landmarks can offset the rig's zero pose. Double-check your neutral expression and reposition the identity guides if needed. Re-run the solve and inspect the blendshape ranges for proper movement. Ensure you're using the correct animation blueprint and blendshape setup in-engine.
- Can I scan a person with their body pose and use that pose in MetaHuman?
Mesh to MetaHuman only processes head geometry in a neutral, forward-facing pose. Scans of bespoke body poses won't transfer into the Creator's rig. Instead, animate or pose your MetaHuman afterward in Sequencer or external tools. For a custom static pose (e.g., crossed arms), recreate it via animation or an autorig workflow.
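For the video-stills route in the FAQ above, FFmpeg can extract evenly spaced, high-quality frames. A minimal sketch calling the standard CLI from Python; the paths and sampling rate are illustrative, and ffmpeg must be on your PATH:

```python
# Extract stills from a 4K capture video with FFmpeg for photogrammetry.
# Paths and the 2-frames-per-second rate are illustrative.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "head_capture.mp4",   # source video, shot slowly in good light
    "-vf", "fps=2",             # sample two frames per second of footage
    "-q:v", "2",                # high JPEG quality (2 is near-lossless)
    "frames/head_%04d.jpg",
], check=True)
```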

Conclusion
Converting a real human into a MetaHuman via photogrammetry uses detailed 3D scans and UE5's Mesh to MetaHuman feature to generate a fully rigged, lifelike character.
Begin with high-quality, well-lit, neutral-expression scans covering all angles, then clean and optimize the mesh before running the Mesh to MetaHuman solver for an accurate facial structure match.
Since hair and fine textures often require extra work, add custom hairstyles with PixelHair and adjust skin materials or apply scanned textures to preserve unique facial details.
Use MetaHuman Creator to refine features like eye color, asymmetries, and expressions, while Unreal's LOD and optimization tools ensure smooth real-time performance.
Drive animations with MetaHuman Animator to capture real performances, and avoid pitfalls such as poor hair scans or expression misalignment through meticulous scanning and cleanup.
This seamless integration of photogrammetry and MetaHuman technology empowers artists to create production-ready digital doubles for games, films, VR, and beyond, dramatically streamlining the character-creation pipeline.
Sources
- Unreal Engine Blog – "New release brings Mesh to MetaHuman to Unreal Engine, and much more!" (June 9, 2022), unrealengine.com – Official announcement outlining the Mesh to MetaHuman feature and its capabilities/limitations.
- Epic Games MetaHuman Documentation – "MetaHuman from Mesh Quick Start" and "Importing and Preparing a Mesh," dev.epicgames.com – Official docs detailing how to prepare scans (neutral pose, format, eyes open) and use the plugin effectively.
- 80.lv – "Tutorial: Scanning Yourself for Mesh to MetaHuman in RealityCapture" (Oct 13, 2022), 80.lv – Tutorial with tips from Capturing Reality, emphasizing neutral expression and avoiding scanning hair for best results.
- Unreal Engine Forums – "MetaHuman – Tips & Tricks" thread, forums.unrealengine.com – Community-sourced tips for improving Mesh to MetaHuman results (cleaning the mesh, using console commands to export meshes for normal-map baking, etc.).
- Blender Market – PixelHair product page, blendermarket.com – Description of PixelHair features and usage, confirming it is compatible with MetaHumans and supports fitting via shrinkwrap.
- Unreal Engine Documentation – "Optimized MetaHuman in Unreal Engine," dev.epicgames.com – Explanation of the differences between Cinematic and Optimized MetaHumans in terms of performance (texture compression, LOD, memory footprint).
- Unreal Engine Blog – "Delivering high-quality facial animation in minutes, MetaHuman Animator is now available!" (June 15, 2023), unrealengine.com – Official source on MetaHuman Animator, confirming it captures subtle expressions and applies them to MetaHumans accurately.
- Small Robot Studio – "Scan Yourself into Unreal 5 | Photogrammetry & Mesh to MetaHuman Tutorial," YouTube (2022) – Video demonstrating a full workflow using a smartphone and Zephyr, referenced for practical steps.
- 3D Scan Store Blog – "Level up your MetaHumans" (Sept 7, 2022), 3dscanstore.com – Guide on using high-res scan data to enhance MetaHuman characters, including replacing default textures with scan-derived textures for improved detail.
- Epic Developer Community Tutorial – "Photogrammetry to Unreal Engine 5 with Mesh to MetaHuman," dev.epicgames.com – Community-written tutorial offering a condensed step-by-step workflow from capturing images to MetaHuman, useful for cross-reference.
These sources provide further reading and validation for the techniques and recommendations discussed, offering both official guidelines and community-driven insights into the photogrammetry-to-MetaHuman process.