Live Link Face is a powerful tool for bringing MetaHuman characters to life. Using an iPhone’s TrueDepth camera and Apple ARKit, this app streams your facial expressions in real time into Unreal Engine, driving the face of your MetaHuman character.
In this article, we’ll explore Live Link Face with MetaHuman – what it is, how to set it up in UE5, the requirements (yes, you’ll need an iPhone with Face ID), troubleshooting tips, and best practices for both real-time performance and high-quality animation. Whether you’re a solo creator or a studio team, this guide will help you capture facial performances for MetaHumans – from virtual production and livestreaming to polished cinematic animation.
What is Live Link Face and how does it work with MetaHuman?
Live Link Face, a free iOS app from Epic Games, uses an iPhone or iPad’s TrueDepth camera and ARKit to capture your facial movements and stream them as Live Link data to Unreal Engine, animating a MetaHuman in real time. Because every MetaHuman ships with a facial rig already mapped to ARKit’s 52 blendshapes, expressions are mirrored instantly with no retargeting. The data travels over local Wi-Fi to Unreal’s Live Link plugin, and the MetaHuman’s animation blueprint applies it to the face, including head and neck rotation. The workflow supports one or several performers with minimal setup, delivers low-latency, high-fidelity animation well suited to virtual production and streaming, and its plug-and-play integration removes manual rigging work for real-time character performances in games or cinematics.

How do I set up Live Link Face with MetaHuman in Unreal Engine 5?
Setting up Live Link Face with a MetaHuman in UE5 is straightforward. Here’s a step-by-step overview:
- Install and Enable Plugins: In Unreal Engine 5, go to Edit > Plugins and enable the Live Link, Live Link Control Rig, Apple ARKit, and Apple ARKit Face Support plugins, which process the incoming facial data stream. These are often enabled automatically when a MetaHuman is imported into your project; without them, the connection between the iPhone app and the MetaHuman will not work.
- Add a MetaHuman to the Scene: Use Quixel Bridge or MetaHuman Creator to import a MetaHuman character and place its blueprint in your Unreal Engine level for real-time visualization. This setup allows you to see the MetaHuman respond instantly to your facial movements during testing. The MetaHuman’s pre-configured ARKit-compatible rig ensures compatibility with Live Link Face data. Positioning the character in the viewport provides immediate feedback on animation accuracy.
- Launch Live Link Face on iPhone: Download the Live Link Face app from the App Store onto your iOS device and open it to begin tracking facial movements. Ensure that your iPhone or iPad is connected to the same Wi-Fi network as your computer to enable seamless data transmission. The app leverages ARKit technology to capture detailed facial expressions in real time. Launching the app prepares it to stream data to Unreal Engine for animation.
- Find Computer’s IP Address: On your PC, locate your local IPv4 address by running ipconfig in Command Prompt (Windows) or checking Network settings (macOS), noting the address of the active network interface. You’ll enter this address in the Live Link Face app so the phone knows where to send its data (a small lookup script is sketched below).
- Add Live Link Target in the App: In the Live Link Face app, access Settings > Live Link, tap Add Target, and input your computer’s IP address to establish a connection. Optionally, assign a custom name to the target for easier identification in projects with multiple devices. This step configures the app to send facial data to the specified computer running Unreal Engine. Once added, the app is ready to initiate streaming to the target IP.
- Start Streaming: On the app’s main screen, toggle the Live Link switch to green, indicating that facial data is actively being transmitted to Unreal Engine. A mesh overlay on your face within the app confirms that ARKit is successfully tracking your expressions. This visual feedback ensures the app is capturing and sending data as intended. The MetaHuman in Unreal should begin animating shortly after this step.
- Confirm in Unreal: In Unreal Engine 5, open Window > Virtual Production > Live Link to view the Live Link panel, where your iPhone should appear as a source (e.g., “iPhoneJane”). A green indicator next to the source name confirms that the connection is active and data is being received. This step verifies that the network link between the iPhone and Unreal Engine is functioning correctly. It ensures the pipeline is ready for real-time animation.
- Assign Live Link to MetaHuman: Select your MetaHuman in the Unreal viewport, navigate to the Details panel, and set the ARKit Face Subject to your iPhone’s name as listed in the Live Link window. Enable Use ARKit Face to activate live facial input and ensure Disable Face Tracking is unchecked to allow facial movements. This configuration binds the incoming Live Link data to the MetaHuman’s facial rig. It enables the character to mirror your expressions in real time.
- Test the Setup: With the app running and the MetaHuman configured, make facial expressions like smiling or raising your eyebrows to see the MetaHuman replicate them in the viewport. No Play mode is required, as the animation works directly in the editor, allowing immediate testing. This step confirms that the entire setup is functioning as expected. If issues arise, check network settings or plugin configurations for resolution.
If everything is set correctly, the process is essentially instant: your iPhone streams data to UE5, and the MetaHuman responds live. Once configured, the setup can be saved, requiring only app reconnection for future sessions. Take Recorder captures animations for later use, streamlining performance workflows.
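If you prefer a script to hunting through ipconfig output for the IP step above, here is a minimal Python sketch (a convenience of my own, not part of Epic’s tooling). It reports the interface your OS would use for outbound traffic, which may not be the right one if you are on a VPN or have several adapters – treat the result as a starting point.

```python
# Minimal sketch: print the local IPv4 address other devices on your network
# would use to reach this machine (the address to type into Live Link Face).
# Assumes a single active network interface; no packets are actually sent.
import socket

def local_ipv4() -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Connecting a UDP socket just makes the OS pick the outgoing interface.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

if __name__ == "__main__":
    print("Enter this IP in Live Link Face > Settings > Live Link:", local_ipv4())
```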

Do I need an iPhone to use Live Link Face with MetaHuman?
Yes, Live Link Face currently requires a compatible iPhone or iPad. The app is iOS-only and specifically needs a device with Apple’s TrueDepth front camera (the same sensor used for Face ID) to track your face. In practice, that means iPhone X or newer models (any iPhone with Face ID support, e.g. XR, XS, 11, 12, 13, 14, etc.) and certain iPads (iPad Pro 3rd generation or newer). Older iPhones (before the X) or devices without the depth-sensing front camera will not work for facial capture.
There is no Android version of Live Link Face, because Android devices lack Apple’s ARKit – the tracking technology is proprietary to iOS. Some creators have asked about using a webcam or an Android phone, but out of the box Live Link Face is an iOS-only solution. The TrueDepth sensor provides a level of fidelity and depth data that standard RGB cameras can’t easily replicate in real time, so an iPhone is the gold standard here. If you don’t have an iPhone or iPad that meets the requirements, you’d need to look at alternative face capture solutions (we’ll discuss a few later).
In summary: you do need an ARKit-compatible iPhone/iPad to drive MetaHumans with Live Link Face. The good news is even an older used iPhone X will do the trick – you can often find one relatively inexpensively, making this one of the most affordable facial mocap setups available. Once you have the device, the app itself is free to download and use.
What are the system requirements for using Live Link Face with MetaHuman?
To use Live Link Face with MetaHumans, you’ll need to meet a few system requirements on both the device side and the Unreal Engine side:
- iOS Device: You need an Apple device equipped with a TrueDepth front camera, such as an iPhone X or later models, or an iPad Pro from 2018 onward, to enable ARKit facial tracking. The device should run a recent iOS version, ideally iOS 13 or higher, as the app was released in 2020 and relies on modern ARKit capabilities. Any iPhone or iPad supporting Face ID meets the ARKit requirement, ensuring accurate facial data capture. The Live Link Face app must be installed from the App Store to facilitate streaming to Unreal Engine.
- Unreal Engine 5 (or 4.26+): Your computer must be capable of running Unreal Engine 5 or version 4.26 and above smoothly, supporting both Windows 10 and macOS operating systems for MetaHuman projects. While no specialized hardware beyond UE5’s baseline is required, MetaHumans are resource-intensive, so a quad-core CPU, a modern GPU (e.g., NVIDIA GTX/RTX or AMD equivalent), and 32GB of RAM are recommended by Epic for optimal real-time performance. Lower specs may work with optimizations, but higher configurations ensure smoother handling of MetaHuman assets. This setup supports the computational demands of real-time facial animation processing.
- Network: A local network connection – typically a Wi-Fi network shared by the iOS device and the computer – is essential for streaming facial data with minimal latency. For best results, use a 5 GHz Wi-Fi router or connect the PC to the router via Ethernet, and make sure both devices are on the same subnet. Internet access is not required; a closed local network or a direct PC hotspot works fine, since the data travels over standard UDP/TCP on the local network rather than Bluetooth. A stable, low-latency link is critical to keep the animation responsive.
- Unreal Project Setup: Your Unreal Engine project must have key plugins enabled, including Live Link, Apple ARKit, and ARKit Face Support, which are bundled with UE5 and handle facial data streaming and control rig functionality. Enabling these plugins is a simple toggle in the Plugins menu, requiring no additional downloads. These plugins ensure that the MetaHuman’s facial rig can interpret incoming ARKit data correctly. Proper configuration of these plugins is essential for a functional animation pipeline.
- MetaHuman content: Import at least one MetaHuman into your project using Quixel Bridge or the MetaHuman Creator download, which includes pre-configured ARKit face mappings and blueprint logic for Live Link Face compatibility. No additional purchases or plugins are needed, as MetaHumans are integrated into Unreal Engine’s ecosystem. The imported MetaHuman asset is ready to receive facial data with minimal setup. This ensures immediate usability for real-time animation tasks.
In summary, the requirements are: an ARKit-compatible iPhone/iPad, a PC or Mac that can run UE5 and is networked to the device, and the proper software setup (the Live Link Face app plus UE5 with the plugins and a MetaHuman). If these are in place, you have a complete pipeline for real-time facial capture.

How do I connect my iOS device to Unreal Engine for Live Link?
Here’s how to establish the Live Link connection:
- Network Check: Ensure that both your iOS device and computer are connected to the same local network, typically through the same Wi-Fi router, or the PC via Ethernet to that router. This alignment allows the devices to communicate using their IP addresses on the same subnet, which is critical for data transmission. If the PC is on a different network interface, such as a VPN, it may prevent the iPhone from detecting it. Verifying network consistency avoids common connectivity issues and ensures a smooth setup process.
- Find the PC’s IP: Determine your computer’s local IP address, such as 192.168.1.100, by checking Wi-Fi/Ethernet Status > Details on Windows or Network preferences on macOS for the active connection. On Windows, running ipconfig in Command Prompt provides a quick way to locate the IPv4 address of the correct network adapter. This IP is essential for configuring the Live Link Face app to send data to the right destination. Accurate IP identification ensures the app can establish a direct connection to Unreal Engine.
- Add Target in App: Open the Live Link Face app on your iPhone, navigate to Settings (gear icon) > Live Link, and tap Add Target to input your computer’s IP address manually. Keep the default UDP port 11111 unless you’ve modified it in Unreal Engine’s network settings for specific configurations. After entering the IP, tap Add to save the target, optionally assigning a custom name for clarity in multi-device setups. This step configures the app to stream facial data to the specified computer, readying it for transmission.
- Choose Subject (Optional): The app assigns a default subject name, often based on the device’s name (e.g., “iPhoneJohn”), which you can keep or customize for easier identification in Unreal Engine. This subject name appears in Unreal’s Live Link panel, linking the data stream to the MetaHuman’s settings. Noting or setting this name ensures you can select the correct source when configuring the MetaHuman blueprint. Consistent naming prevents confusion during multi-source or complex project setups.
- Enable Stream: On the Live Link Face app’s main screen, locate the Wi-Fi icon toggle and ensure it’s green, indicating that the app is actively broadcasting facial data to the target IP address. When enabled, the app continuously sends ARKit-tracked facial movements to Unreal Engine, requiring no further manual activation. A green status confirms the app is operational and communicating with the network. This step initiates the real-time data flow necessary for MetaHuman animation.
- Verify in Unreal: In Unreal Engine, navigate to Window > Virtual Production > Live Link to open the Live Link panel, where your iPhone should be listed as a source (e.g., Source: Live Link Face, Subject: “iPhoneJohn”). A green indicator next to the source name confirms that Unreal Engine is successfully receiving the streamed facial data. If the device appears with an active status, the connection is fully established. This verification ensures the data pipeline is ready for real-time MetaHuman animation.
The Live Link Face app automatically sends data to Unreal whenever streaming is active and a face is detected – no extra trigger is needed on the Unreal side. If nothing shows up, check the Windows firewall and the app’s Local Network permission in iOS; the sketch below is a quick way to confirm that packets are actually reaching the PC.
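As a transport-level check, this small Python sketch (not an Epic tool) listens on the app’s default UDP port 11111 and reports whether anything arrives from the phone. Run it with the Unreal Editor closed so the port is free; if packets show up here but the source never appears in Unreal, the problem is on the engine/plugin side rather than the network.

```python
# Minimal sketch: confirm that Live Link Face UDP packets reach this machine.
# Only checks transport, not the Live Link protocol itself.
import socket

PORT = 11111  # change if you reconfigured the port in the app or engine

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))
sock.settimeout(10.0)
print(f"Waiting up to 10 s for UDP packets on port {PORT}...")
try:
    data, addr = sock.recvfrom(65535)
    print(f"Received {len(data)} bytes from {addr[0]} - the network path is OK.")
except socket.timeout:
    print("No packets received - check Wi-Fi network, target IP, and firewall.")
finally:
    sock.close()
```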
Can I use Live Link Face with custom MetaHuman characters?
Yes. Any MetaHuman character, including custom ones you create or modify, can be driven by Live Link Face. MetaHumans share a standardized facial rig and blendshape system, which is directly compatible with ARKit (Apple’s facial tracking shapes). This means whether you’re using one of the premade MetaHuman presets or a completely unique MetaHuman (even one derived from scanning your own face), the underlying face controls are the same. As long as you have the MetaHuman set up in Unreal Engine with the Live Link Face settings enabled, it will respond to the app.
“Custom MetaHuman” could refer to a MetaHuman with custom sculpting or one generated via Mesh to MetaHuman. In all these cases, the MetaHuman DNA ensures it has the needed facial rig. You do not need to do any retargeting or create new morph targets – Epic has already set up the MetaHuman to listen for the ARKit facial blendshape names that Live Link Face provides. For example, the app might send a value for “eyeBlinkLeft” or “jawOpen”; your MetaHuman has those corresponding blend shapes, so it will blink or open its jaw accordingly.
The setup process for a custom MetaHuman is the same: import the MetaHuman, place it in the level, and assign the Live Link Face subject in the Details or Blueprint. Once connected, your character (no matter how unique it looks) should mimic your facial movements.
One thing to note: even if you drastically customize a MetaHuman’s facial proportions or rig (beyond what MetaHuman Creator allows), it will still work as long as you haven’t broken the ARKit mapping – the animation may just look a bit different due to the character’s features. Out of the box, any MetaHuman you create will work with Live Link Face with minimal effort. This is a big advantage over a completely custom rig, where you would have to set up all the blendshape correspondences manually.
(If by “custom” you mean a non-MetaHuman character, the answer is more complex – you would need to give that character ARKit-compatible morph targets and a Live Link anim blueprint; a conceptual mapping sketch follows below. But for MetaHumans, custom or not, Live Link Face support is built in.)
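For that non-MetaHuman case, the core of the work is mapping ARKit curve names onto your own morph targets, which in Unreal you would typically do with a Live Link Remap Asset or a custom anim blueprint. The sketch below is plain Python and purely conceptual: the left-hand names are a sample of the ARKit blendshapes streamed by the app (exact casing can differ), and the right-hand morph names are hypothetical placeholders for a custom rig.

```python
# Conceptual sketch of the remapping a Live Link Remap Asset / custom AnimBP
# performs for a non-MetaHuman rig. Right-hand names are placeholders.
ARKIT_TO_CUSTOM_MORPH = {
    "eyeBlinkLeft":    "Blink_L",
    "eyeBlinkRight":   "Blink_R",
    "jawOpen":         "Jaw_Open",
    "mouthSmileLeft":  "Smile_L",
    "mouthSmileRight": "Smile_R",
    "browInnerUp":     "Brow_Raise_Inner",
    "cheekPuff":       "Cheek_Puff",
}

def remap(frame: dict[str, float]) -> dict[str, float]:
    """Translate one frame of ARKit curve values into custom morph weights."""
    return {ARKIT_TO_CUSTOM_MORPH[name]: value
            for name, value in frame.items()
            if name in ARKIT_TO_CUSTOM_MORPH}

# A MetaHuman needs none of this - its face rig already listens for the
# ARKit names directly.
print(remap({"jawOpen": 0.8, "eyeBlinkLeft": 1.0}))
```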

How do I enable facial animation streaming in UE5 with Live Link Face?
To enable facial animation streaming in UE5, you need to make sure both Unreal and your MetaHuman are ready to receive the data from the Live Link Face app. Here’s what to check:
- Enable the Live Link plugins: In Unreal Engine 5, go to Edit > Plugins and confirm that Live Link and Apple ARKit Face Support plugins are enabled, as they are critical for processing the facial data stream from the iPhone. These plugins are typically activated automatically when a MetaHuman is added to your project, simplifying the setup process. Without them, Unreal Engine cannot interpret the ARKit data sent by Live Link Face, halting animation functionality. Verifying their status ensures the engine is prepared to handle incoming facial movements effectively.
- Add the Live Link Source: Open the Live Link panel in Unreal Engine (Window > Virtual Production > Live Link) to ensure your iOS device is listed as an active source, indicating that streaming is enabled on the engine side. If the device appears with a green indicator, Unreal is successfully receiving the facial data stream from the Live Link Face app. Manual addition of the source is rarely needed, as the app’s automatic connection usually suffices. This step confirms that the data pipeline is operational and ready for MetaHuman animation.
- Activate in the MetaHuman blueprint: Select your MetaHuman in the Unreal viewport, access its Blueprint or Details panel, and set the Use ARKit Face option to true, then choose your iPhone’s subject name from the dropdown list. This configuration binds the incoming Live Link data stream to the MetaHuman’s facial rig, enabling real-time animation of expressions. Failing to set this correctly will prevent the MetaHuman from responding to the facial data, even if the connection is active. This step is the core of enabling live facial input for the character.
- Enable head rotation (if desired): In the MetaHuman’s settings, locate the Use ARKit Face Head option and enable it to allow the iPhone’s orientation to drive the character’s head and neck movements. If you’re using separate body tracking (e.g., a motion capture suit), disable this to avoid conflicting head animations from multiple sources. This setting enhances the naturalness of the performance by integrating head motion with facial expressions. It can be tailored to your project’s specific animation requirements for optimal results.
Once those settings are configured, facial animation streaming is live: the plugins are active, the device is connected, and the MetaHuman’s ARKit face input is bound to the stream. The app’s status indicators and Unreal’s Live Link panel both confirm that data is flowing.
What settings do I need in MetaHuman to receive Live Link Face data?
Inside the MetaHuman’s setup, there are a couple of key settings to ensure it receives and applies the Live Link Face data:
- ARKit Face Subject Name: In the MetaHuman’s Blueprint or the Details panel when selected in the Unreal viewport, locate the ARKit Face Subject setting and choose your iPhone’s name as it appears in the Live Link window (e.g., “JohnsIPhone”). This dropdown selection ensures the MetaHuman listens to the correct Live Link data stream from the app, linking it to the facial rig. Accurate naming is essential, as any mismatch will prevent the character from animating properly. Setting this correctly establishes the connection between the iPhone’s output and the MetaHuman’s facial movements.
- Use ARKit Face (Enable Facial Live Link): Find the Use ARKit Face checkbox in the MetaHuman’s Details panel or Blueprint and enable it to activate the mapping of incoming ARKit facial data to the character’s facial rig. This toggle is the primary switch that allows the MetaHuman to respond to the Live Link Face stream, driving expressions like smiles or blinks. If left unchecked, the MetaHuman will ignore all incoming data, rendering the setup ineffective. Enabling this ensures real-time facial animation is fully operational.
- Use ARKit Face Head (optional): Check the Use ARKit Face Head option if you want the MetaHuman’s head bone to follow the iPhone’s orientation, adding natural head movements to the performance. Disable this setting if you’re using separate body animations that include head motion, such as from a motion capture suit, to prevent conflicting animations. This optional setting allows you to customize whether head rotation is driven by Live Link Face or another source. It provides flexibility to match your project’s animation needs.
- Live Link Body (if using body mocap): If combining facial and body capture, set the Live Link Body Subject and enable Use Live Link Body in the MetaHuman’s settings to integrate data from a body motion capture source, like an Xsens or Rokoko suit. For face-only setups, these settings can be ignored, as they are irrelevant to Live Link Face functionality. This configuration supports full-body performance capture when needed, enhancing the character’s overall animation. It ensures compatibility with comprehensive mocap workflows for advanced projects.

By configuring the subject name and these toggles, you essentially “wire up” the character to the incoming data so the MetaHuman mirrors the performer’s facial movements accurately. A quick test confirms everything works, and community forums are a good source of troubleshooting help if it doesn’t.
How accurate is facial tracking with Live Link Face on MetaHuman?
Tracking with Live Link Face is accurate enough for most real-time and indie work, though it doesn’t match high-end mocap rigs. Here’s what affects accuracy:
- Facial Features and Expressions: Live Link Face tracks 52 blendshape coefficients, covering brows, eyes, cheeks, and mouth, effectively capturing broad expressions like smiles, frowns, and subtle squints for realistic MetaHuman animation. Its lip sync capabilities ensure accurate speech movements, and blinks are detected with high fidelity, enhancing character believability. The system also picks up nuanced movements, such as eyebrow raises or nostril flares, with reasonable precision. This makes it suitable for a wide range of real-time animation applications, from games to virtual production.
- TrueDepth Accuracy: The iPhone’s TrueDepth sensor measures depth changes, such as cheek bulges or eyebrow protrusions, providing a level of fidelity that outperforms standard 2D RGB cameras in facial tracking. This depth data enables the MetaHuman’s face to deform in ways that closely mimic the performer’s actual facial structure, ensuring lifelike results. The sensor’s ability to capture three-dimensional facial movements adds realism to the animation output. It significantly enhances the visual quality of MetaHuman performances in Unreal Engine.
- Calibration: The Live Link Face app includes a calibration feature that records your neutral expression to establish a baseline, improving the accuracy of tracking for your specific facial features. This process minimizes biases, such as misinterpreting a resting face as an expression, and refines jaw movement and eye blink detection. Calibration tailors the tracking to your unique face, ensuring more precise and personalized animation results. It’s a critical step for optimizing performance and reducing animation errors in MetaHuman setups.
- Limitations: Compared to Hollywood-grade multi-camera mocap, Live Link Face misses subtle micro-expressions, skin wrinkles, and intricate tongue movements, as ARKit only tracks tongue presence in a binary fashion. Fast facial motions may blur due to the 60 FPS cap, reducing precision during rapid expressions. The blendshape-based system approximates rather than simulates detailed skin dynamics, unlike professional setups. For cinematic applications requiring ultimate realism, manual cleanup or additional processing may be necessary to achieve polished results.
- Influencing Factors: Optimal tracking requires good, even lighting on your face to ensure the TrueDepth and RGB cameras can clearly detect facial features without interference. Occlusions, such as large glasses, heavy beards, or hair covering the face, can degrade tracking quality, causing jitter or loss of fidelity, particularly in the lower face. Keeping the iPhone at arm’s length or mounted steadily enhances the sensor’s ability to capture subtle depth changes. Environmental factors like distance and lighting significantly impact the overall tracking performance and animation quality.
Live Link Face provides sufficient accuracy for most projects, with calibration and optimal conditions maximizing results. For ultra-realistic cinematics, professional systems or post-processing may be needed.
Can I record facial animation from Live Link Face for later use?
Absolutely. There are two primary ways to record facial animation from Live Link Face for later playback or editing:
- Record in Unreal Engine (Take Recorder/Sequencer): Utilize Unreal Engine’s Take Recorder to capture the Live Link facial data as an animation sequence or clip, storing the ARKit blendshape curves frame by frame for precise reproduction. This method allows you to record your performance directly in the Unreal Engine environment, saving it as a reusable asset that can be played back on the MetaHuman without requiring the iPhone to be connected. The recorded animation can be edited in Sequencer, enabling trimming, layering, or combining with other animations for cinematic production. This approach is ideal for creating polished cinematics or reusable animation libraries for various MetaHuman projects.
- Record on the iPhone (within the Live Link Face app): Press the red record button in the Live Link Face app to save your facial performance as a .CSV file of blendshape data plus a synchronized .MOV video of your face, both timestamped for easy reference. These files live in the app’s library and can be pulled off the device via iTunes, Finder, or the Files app, so you can capture performances anywhere without immediate access to Unreal Engine. The .CSV can later be applied to a MetaHuman through retargeting tools such as FaceIT-style workflows (a small parsing sketch follows below), while takes recorded in the app’s MetaHuman Animator mode can be processed by MetaHuman Animator for higher fidelity. Recording while connected to Unreal can also trigger Take Recorder simultaneously, giving you both an in-engine take and a phone-based backup.

In both cases, you can definitely reuse and edit the recorded animation. Recording facial animations allows creators to build animation libraries for MetaHumans, with timecode support ensuring synchronization for post-production workflows.
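If you want to inspect or post-process the phone-side recording outside Unreal, the .CSV is easy to read with a few lines of Python. This sketch assumes the header row contains a Timecode column plus one column per ARKit blendshape (which matches typical exports, though names and casing can vary by app version); the file name is just a placeholder.

```python
# Minimal sketch: read a Live Link Face .CSV export and report the peak of one
# curve. Check your file's header row - column names are an assumption here.
import csv

def peak_value(csv_path: str, curve: str) -> tuple[str, float]:
    best_tc, best_val = "", 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            value = float(row[curve])
            if value > best_val:
                best_tc, best_val = row["Timecode"], value
    return best_tc, best_val

# Hypothetical take name following the app's "<slate>_<take>_<device>" pattern.
tc, val = peak_value("MySlate_3_iPhoneJane.csv", "JawOpen")
print(f"JawOpen peaks at {val:.2f} around timecode {tc}")
```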
How do I fix connection issues between Live Link Face and Unreal Engine?
Connection issues can be frustrating – here are common problems and their solutions to get Live Link Face talking to Unreal:
- Phone doesn’t appear as source in Live Link: If your iPhone doesn’t show up in Unreal’s Live Link panel, ensure the app’s streaming toggle is green and verify that Local Network access is enabled in iOS Settings > Live Link Face. Double-check that the correct PC IP address is entered in the app’s Live Link > Add Target settings, as an incorrect IP will prevent detection. If the issue persists, try deleting and re-adding the target IP in the app to refresh the connection attempt. This step resolves most cases where the device fails to register as a source, restoring the data pipeline quickly.
- Firewall blocking communication: Windows Defender Firewall may block UDP packets if Unreal Editor isn’t allowed on private networks, often due to a missed permission prompt during initial setup. Navigate to Windows Firewall & Network Protection > Allowed Apps and confirm that Unreal Editor is permitted for private networks, or temporarily disable the firewall to test connectivity. If disabling resolves the issue, re-enable the firewall and properly configure the app’s permissions to ensure secure, ongoing communication. This ensures that the Live Link data stream reaches Unreal Engine without interruption.
- Wrong network or IP: Verify that your iPhone and PC are on the same network, sharing the same IP range (e.g., 192.168.0.x), as multiple network interfaces such as VPNs or combined Ethernet/Wi-Fi adapters can misroute data (a quick subnet check is sketched below). For simplicity, connect both devices to the same Wi-Fi network, avoiding guest networks or cellular data on the iPhone, which break local communication. If necessary, adjust Unreal’s UDP Messaging unicast endpoint to the correct IP, though this is rarely needed for standard setups.
- No face tracking (source visible but character not moving): If the iPhone appears in the Live Link panel but the MetaHuman doesn’t animate, confirm that Use ARKit Face is enabled and the ARKit Face Subject name matches exactly in the MetaHuman’s Details or Blueprint settings. Check the Live Link window to ensure the subject name aligns with the app’s configuration, as mismatches prevent motion application. This issue often stems from incorrect blueprint settings rather than network problems. Correcting the subject name and enabling the toggle restores real-time facial animation functionality.
- Lag or stutter in data: Network congestion can cause lag or dropped packets, so use a dedicated router or connect the PC via Ethernet to the router while the iPhone uses Wi-Fi for a more stable link. Alternatively, enable USB tethering on the iPhone to create a direct, low-latency network connection to the PC, bypassing Wi-Fi interference. Keep the iPhone close to the router if using Wi-Fi to minimize signal degradation. Reducing network load, such as pausing other streaming activities, ensures smoother, real-time facial data transmission to Unreal Engine.
- App bugs: Software glitches in the Live Link Face app can occasionally disrupt connectivity, so try quitting and relaunching the app to reset its state and clear minor issues. Restarting Unreal Engine can also resolve temporary networking hitches within the editor. If the app never prompted for Local Network permission, reinstalling it may trigger the necessary iOS prompt, fixing rare connection failures. These steps address software-related problems, restoring reliable communication between the app and Unreal Engine for consistent performance.
In summary, start with the basics: same network, correct IP, permissions allowed. Basic checks like network alignment and permissions resolve most issues, ensuring a stable Live Link connection for real-time MetaHuman animation.
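For the “wrong network or IP” case above, a quick way to sanity-check that both devices really share a subnet is to run their addresses through Python’s ipaddress module. The IPs and the /24 netmask below are placeholders – substitute the values from ipconfig and the iPhone’s Wi-Fi details.

```python
# Minimal sketch: do the PC and the phone sit on the same subnet?
import ipaddress

pc_ip = "192.168.1.100"    # from ipconfig / network settings (placeholder)
phone_ip = "192.168.1.57"  # from the iPhone's Wi-Fi details (placeholder)
netmask = "255.255.255.0"  # typical home-router default; use yours if different

pc_net = ipaddress.ip_network(f"{pc_ip}/{netmask}", strict=False)
if ipaddress.ip_address(phone_ip) in pc_net:
    print("Same subnet - the devices should be able to reach each other.")
else:
    print("Different subnets - put both devices on the same router/network.")
```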
What’s the best way to sync dialogue with Live Link Face in MetaHuman?
Here are some best practices for keeping your facial animation and dialogue in sync:
- Use Timecode for multi-device sync: Configure the Live Link Face app to use an external timecode source, such as a Tentacle Sync device via Bluetooth, to timestamp facial data accurately for synchronization. Ensure that audio recordings and other devices, like body mocap or cameras, share the same timecode to maintain alignment across all elements. This method guarantees frame-accurate integration when importing data into Unreal Engine or editing suites, simplifying post-production workflows. It’s particularly effective for professional setups requiring precise coordination of multiple data streams.
- Record audio together with face data: Capture your voice simultaneously with facial mocap using Unreal’s Take Recorder to record an audio track alongside the animation, ensuring inherent alignment. Alternatively, the Live Link Face app can record a video with audio from the iPhone’s microphone, serving as a reference for later alignment with high-quality audio. This approach is ideal for solo creators, as it minimizes the need for manual synchronization in post-production. The recorded .MOV file provides a reliable sync point for matching dialogue to facial movements.
- Slate or clap for manual sync: At the start of your recording, perform a distinct clap or cue in view of the iPhone’s camera to create a sync point that is visible in the animation and audible in the audio. The spike in the audio waveform can then be lined up with the corresponding animation frame during editing (a small script for locating the clap is sketched at the end of this answer). This low-tech method is highly effective for short recordings or setups without timecode, and gives you a straightforward, reliable way to match dialogue and facial movement manually.
- Playback method for pre-recorded dialogue: Have a performer listen to pre-recorded dialogue through headphones and mimic it in real time using Live Link Face to capture natural facial movements that align with the audio. Record multiple takes to refine timing, then use Unreal’s Sequencer to fine-tune the animation by adjusting frames for precise lip sync. This method ensures that expressions and mouth movements feel organic and match the dialogue’s emotional tone. It’s ideal for scenarios where the audio is fixed, allowing flexible performance capture.

Two additional options for working with pre-recorded dialogue:
- Use Unreal’s lip sync or speech recognition plugins: There are lip-sync and speech-recognition plugins for Unreal Engine (several still maturing) that can automate mouth movement for pre-recorded dialogue, though the results are usually less expressive than a live performance. They analyze the audio to generate basic mouth shapes as a starting point, with manual tweaks in Sequencer needed to add emotional nuance and accuracy. This approach suits quick drafts but requires refinement for polished output.
- Perform to audio: As described above, have a performer listen to the dialogue through headphones and act it out in real time with Live Link Face, capturing authentic expressions and lip movements. Record multiple takes, pick the best (or combine elements), and nudge the animation in Sequencer so the lip movements land precisely on the phonemes.
The best sync method depends on your setup, but leveraging timecode and simultaneous recording will save you headaches. Post-recording tweaks in Unreal refine lip movements for polished, professional results.
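For the clap/slate method, you can locate the clap in the reference audio programmatically instead of eyeballing the waveform. The sketch below assumes a 16-bit PCM WAV and a hypothetical file name; it finds the loudest sample and converts it to a frame number at your sequence’s frame rate so you know how far to slide the animation in Sequencer.

```python
# Sketch: find the clap in a reference WAV and convert it to a frame number.
import wave
import numpy as np

FPS = 30.0  # your sequence frame rate

with wave.open("dialogue_reference.wav", "rb") as w:  # placeholder file name
    rate = w.getframerate()
    channels = w.getnchannels()
    samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

# The loudest sample should be the clap. Samples are interleaved for stereo,
# so divide the index by the channel count before converting to seconds.
loudest = int(np.argmax(np.abs(samples.astype(np.int32))))
clap_time = (loudest // channels) / rate
clap_frame = round(clap_time * FPS)
print(f"Clap at ~{clap_time:.3f} s -> frame {clap_frame} at {FPS:.0f} fps")
```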
Can I use Live Link Face with multiple MetaHuman characters at once?
Yes, you can animate multiple MetaHumans simultaneously with Live Link Face – this is commonly done in multi-actor virtual production setups. There are a few ways to handle multiple characters:
- Multiple iOS devices (One per actor/character): Assign a separate iPhone or iPad to each performer, with each device running the Live Link Face app to capture individual facial performances for distinct MetaHumans. In Unreal’s Live Link panel, each device appears as a unique source (e.g., “iPhoneJohn” and “iPadJane”), and you assign these to different MetaHuman blueprints using their respective subject names. This setup ensures that each character animates independently, reflecting its performer’s expressions in real time. It’s ideal for multi-actor scenes, such as dialogues, where multiple characters need distinct facial animations simultaneously.
- Multi-cast & Multi-user considerations: The Live Link Face app supports multicast mode, which is designed to send a single feed to multiple machines, useful for collaborative setups, but for multiple characters, you need separate iOS devices. Since one phone can only track one face at a time, multiple devices are required to drive different MetaHumans concurrently. This configuration leverages Unreal’s ability to handle multiple input streams, ensuring smooth multi-character animation. It’s tailored for professional capture environments with distinct data feeds for each actor.
- Simultaneous recording and control: Use OSC (Open Sound Control) to remotely trigger the Record function on all Live Link Face apps at once, so every actor’s take starts with a synchronized timecode (see the OSC sketch below). This aligns the takes from the different devices, making it much easier to combine the facial animations in post-production, and it’s particularly useful for complex scenes with several performers on a virtual production stage.
- One device, multiple characters (mirroring): Configure multiple MetaHumans to use the same Live Link subject (e.g., “John’s iPhone”) in their blueprints, causing all characters to mirror the facial movements captured by a single iPhone. This approach is uncommon for narrative-driven characters but can be effective for crowd scenes or artistic effects, like synchronized clones or background characters. It leverages the same data stream for multiple outputs, reducing the need for additional devices. This method serves niche creative purposes where identical animations are desired across characters.
In summary, multiple MetaHumans at once is supported – each just needs its own iOS feed. The Live Link architecture was designed to handle professional capture sessions with multiple actors simultaneously, enabling live multi-character scenes.
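Here is a rough sketch of the OSC-triggered recording mentioned above, using the python-osc package. Treat the details as assumptions to verify: enable OSC in each app, and confirm the listen port and the /RecordStart and /RecordStop addresses (and their slate/take arguments) against Epic’s Live Link Face OSC documentation for your app version. The phone IPs are placeholders.

```python
# Sketch: start/stop recording on several phones at once over OSC.
# Requires: pip install python-osc. Port and OSC addresses are assumptions -
# check them against the app's settings and Epic's documentation.
from pythonosc.udp_client import SimpleUDPClient

PHONES = ["192.168.1.57", "192.168.1.58"]  # one entry per performer's device
OSC_PORT = 8000                            # the port shown in the app's OSC settings

def record_all(slate: str, take: int) -> None:
    for ip in PHONES:
        SimpleUDPClient(ip, OSC_PORT).send_message("/RecordStart", [slate, take])

def stop_all() -> None:
    for ip in PHONES:
        SimpleUDPClient(ip, OSC_PORT).send_message("/RecordStop", [])

record_all("Scene01", 3)   # both phones start a synchronized take
```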

How do I improve performance when using Live Link Face in UE5?
When using Live Link Face, “performance” can refer to both the tracking quality and the Unreal Engine frame rate. Here are tips to improve both:
- Optimize Tracking Quality: Use the Live Link Face app’s calibration feature to record your neutral expression, fine-tuning the ARKit tracking to your specific facial structure for more accurate results. Ensure your face is well-lit with even lighting, and avoid obstructions like reflective glasses or hair covering your forehead to maintain clear visibility for the TrueDepth camera. These steps significantly reduce tracking jitter and errors, leading to smoother, more reliable MetaHuman animations. Calibration and proper setup minimize the need for post-processing or animation cleanup, enhancing overall performance quality.
- Use a strong network connection: Connect your PC to the router via Ethernet and keep the iPhone on a 5 GHz Wi-Fi band to minimize latency and ensure a stable data stream between devices. Alternatively, use USB tethering to create a direct, low-latency connection, bypassing potential Wi-Fi interference from crowded networks. Avoid running bandwidth-heavy tasks, like video streaming, on the same network during capture to prevent packet loss. A robust network setup ensures that facial data reaches Unreal Engine in real time, maintaining responsive animation performance.
- Reduce Unreal Engine load: Simplify your Unreal Engine scene by disabling resource-intensive features like ray tracing or using a blank level during live capture to reduce GPU and CPU demands. Lower the MetaHuman’s Level of Detail (LOD) temporarily to decrease the complexity of hair and skin shaders, boosting frame rates without sacrificing essential animation quality. These optimizations allow your system to focus on processing Live Link data efficiently. You can apply the captured animation to a more detailed scene later for final rendering, ensuring smooth real-time performance.
- Use Blueprint optimizations: In the MetaHuman’s animation blueprint, set the Component Tick to update only when the character is rendered, reducing unnecessary processing for off-screen elements. Apply minimal smoothing filters in the Live Link Pose node to eliminate minor tracking jitter without introducing noticeable lag, preserving real-time responsiveness. Overusing filters can delay animations, so balance is key to maintaining performance. These blueprint tweaks optimize how Unreal Engine handles incoming facial data, ensuring efficient animation processing.
- Hardware considerations: Keep your iPhone plugged in during long sessions to prevent battery-related throttling, which could affect tracking stability, especially on older models. Newer iPhones with faster chips may offer slightly more consistent 60 FPS tracking, though most modern devices suffice. On your PC, close resource-heavy applications to free up CPU and GPU capacity, as MetaHuman rendering is demanding. Allocating maximum system resources to Unreal Engine supports smooth, real-time performance capture for MetaHuman animations.
- Animation filtering: After recording, apply subtle filters to the animation curves in Unreal Engine to smooth out minor tracking jitter, such as small jaw or eye twitches, without hurting responsiveness (a simple smoothing sketch follows below). A Live Link Remap Asset can selectively dampen specific curves, like blinks, for cleaner results, and community tools or scripts can help with further cleanup. Avoid heavy smoothing, which dulls the animation’s expressiveness and makes it feel less lifelike.
By following these tips you can achieve smooth, performant facial capture: good tracking conditions, a stable network, and a simplified scene are the keys to high frame rates and clean real-time MetaHuman animation.
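As a concrete illustration of “light filtering”, here is a minimal exponential-moving-average smoother in plain Python (not a UE node); it shows the responsiveness-versus-smoothness trade-off that the Live Link smoothing options and Remap Assets expose. The curve name is just an example.

```python
# Minimal EMA smoother for blendshape curves. alpha near 1.0 = responsive but
# jittery; lower alpha = smoother but laggier.
class CurveSmoother:
    def __init__(self, alpha: float = 0.6):
        self.alpha = alpha
        self._prev: dict[str, float] = {}

    def filter(self, curves: dict[str, float]) -> dict[str, float]:
        out = {}
        for name, value in curves.items():
            prev = self._prev.get(name, value)   # first frame passes through
            out[name] = self.alpha * value + (1.0 - self.alpha) * prev
            self._prev[name] = out[name]
        return out

smoother = CurveSmoother(alpha=0.5)
print(smoother.filter({"jawOpen": 0.80}))   # first frame: unchanged
print(smoother.filter({"jawOpen": 0.20}))   # pulled toward the previous value
```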
Is Live Link Face suitable for real-time virtual production with MetaHuman?
For virtual production scenarios, Live Link Face provides a fast, unobtrusive solution, enabling directors to see performances live. Here’s why it’s suitable:
- Real-time streaming: Live Link Face streams facial data with minimal latency, typically a few frames, allowing MetaHumans to animate in near real-time, which is crucial for live virtual production workflows. This immediate feedback enables directors and camera operators to visualize an actor’s facial performance on the digital character during shooting, facilitating precise blocking and creative decisions. The low-latency streaming ensures that the animation aligns closely with the performer’s actions, enhancing on-set efficiency. It’s a cornerstone for dynamic, performance-driven virtual production environments where timing is critical.
- Multi-actor support: The system supports multiple iPhones, each driving a separate MetaHuman, allowing concurrent facial animations for multi-actor scenes, such as dialogues or group interactions. Live Link can multicast these data streams to multiple Unreal Engine instances, ensuring that all team members, from animators to VFX supervisors, see the performances in real time. This scalability is essential for complex virtual production stages with several performers working simultaneously. It enables seamless integration of multiple characters into a cohesive virtual scene.
- Timecode and Sync: Live Link Face supports timecode synchronization, integrating with tools like Tentacle Sync to align facial data with body motion capture, audio, and virtual camera tracking against a master clock. This keeps every element of a recorded take frame-accurate, which greatly simplifies lining up complex datasets in post (see the timecode sketch below). Timecode support is essential for professional virtual productions where multiple systems must stay in lockstep.
- Quality vs. immediacy: While not as detailed as high-end facial mocap rigs, Live Link Face delivers high-quality animation sufficient for live renders and many final outputs, balancing speed and fidelity effectively. Its real-time performance is ideal for on-set visualization, allowing teams to capture 90% of the needed quality instantly, with the option to refine animations later using tools like MetaHuman Animator. This trade-off prioritizes immediacy, which is often more valuable in fast-paced virtual production environments. It supports both live and post-processed workflows, offering flexibility for various project needs.
- Simplicity and cost: Using an iPhone for facial capture eliminates the need for complex, costly rigs, requiring only a simple setup that can be mounted on an actor or held, reducing setup time and actor discomfort. This accessibility makes Live Link Face practical for smaller virtual production teams or indie projects with limited budgets, democratizing high-quality facial animation. The minimal tech investment lowers barriers, enabling more creators to incorporate facial capture into their workflows. It streamlines production without sacrificing essential performance quality, making it highly practical.

Live Link Face is highly suitable for real-time virtual production with MetaHumans, offering immediacy and sufficient fidelity. It excels in live XR productions or streaming, with options for post-production polish.
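The payoff of shared timecode is that alignment becomes simple arithmetic. This sketch converts non-drop-frame HH:MM:SS:FF stamps to absolute frame numbers at an assumed 60 fps, so the offset between the facial take and any other recording falls out by subtraction; the timecode values are placeholders.

```python
# Sketch: non-drop-frame timecode to absolute frame number at a fixed rate.
def timecode_to_frames(tc: str, fps: int = 60) -> int:
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

face_start = timecode_to_frames("14:03:21:42")   # placeholder stamps
audio_start = timecode_to_frames("14:03:20:00")
print(f"Slide the facial take by {face_start - audio_start} frames to line up with the audio")
```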
Can I use Live Link Face for livestreaming with a MetaHuman avatar?
Yes – Live Link Face works well for driving a MetaHuman avatar live on stream. Here’s how it typically works for a livestream setup:
- Stream Setup: Run Unreal Engine with a MetaHuman in a minimal scene, such as a simple background or green-screen setup, and use OBS or similar software to capture the viewport as your streaming feed. Live Link Face drives the MetaHuman’s facial expressions in real time, creating a dynamic, animated avatar that responds instantly to your performance. This setup transforms your live stream into a visually engaging experience, with the MetaHuman acting as your digital persona. Audiences see a high-fidelity character that enhances the professionalism and appeal of your broadcast.
- Body Animation: For the MetaHuman’s body, use pre-animated idle poses or loops to keep the character lively without additional input, or integrate Live Link body mocap for more dynamic movement if available. Many streamers opt to focus solely on facial animation, simplifying the setup by using a static or minimally animated body pose, which reduces system demands. This approach emphasizes the expressiveness of the MetaHuman’s face, where most viewer attention is directed. It allows you to prioritize facial performance while maintaining an engaging, animated presence.
- Performance Tips: Use Unreal Engine in windowed fullscreen mode or a spout plugin to seamlessly feed the viewport output into your streaming software, ensuring a clean, high-quality broadcast. Adjust the MetaHuman’s eye contact settings, such as the Look-at feature, to simulate direct engagement with your audience, enhancing viewer connection. Position the iPhone on a tripod or monitor mount to keep your face clearly visible to the TrueDepth camera, even while speaking into an off-camera microphone. Incorporate Unreal Blueprint triggers to activate gestures or animations, like waves or laughs, via keypresses, adding interactivity to your stream.
- Hardware Considerations: Optimize your Unreal Engine scene to reduce GPU load, as rendering a high-quality MetaHuman while streaming is resource-intensive, requiring a powerful PC to maintain 60fps for smooth performance. Ensure your computer can handle both the real-time rendering of the MetaHuman and the encoding demands of streaming software like OBS. A robust GPU and sufficient RAM are critical to avoid frame drops or lag during live broadcasts. These optimizations ensure a professional, fluid streaming experience that showcases the MetaHuman’s realism effectively.
Live Link Face is well-suited for livestreaming, delivering a top-tier live avatar with MetaHumans. Just hit the stream button and perform – your MetaHuman will do the rest!
How do I blend facial animation from Live Link Face with body animation?
Blending facial animation from Live Link Face with body animation is seamless with MetaHumans, as their rigs separate facial blendshapes and body bones for independent control. This allows live or recorded face and body animations to combine effortlessly, using Unreal’s Sequencer or blueprints to layer animations and prioritize head rotation sources. The setup supports full performance capture or post-production blending, ensuring cohesive character performances.
Here’s how to blend animations:
- Independent Face and Body rigs: MetaHumans use distinct facial blendshapes driven by a control rig for expressions and skeletal bones for body movements, allowing independent animation of each component. Enabling Use ARKit Face in the MetaHuman’s settings applies Live Link Face data to the face without affecting body animations, such as running or gesturing, played through the skeletal rig. This separation ensures that facial expressions, like smiles or frowns, can overlay any body motion seamlessly. The MetaHuman’s design facilitates layered animation, making it easy to combine sources for a unified performance.
- Turning off head rotation if needed: If your body animation includes head movements, such as nodding, disable Use ARKit Face Head in the MetaHuman’s settings to prevent conflicts with Live Link Face’s head rotation data. Choose either the body mocap system or Live Link Face as the primary source for head motion, depending on which provides the desired accuracy for your scene. This configuration avoids unnatural overlaps, ensuring smooth head animation that complements the overall performance. It allows you to tailor the animation blend to your project’s specific requirements for realism.
- Blending in Sequencer: In Unreal’s Sequencer, combine separate tracks for facial animation (blendshape curves from Live Link Face) and body animation (skeletal movement); the facial track overrides any facial data carried by the body clip. Use additive blending or the MetaHuman’s Face AnimBP to layer the facial animation over the body so expressions integrate smoothly with movements like walking (a conceptual layering sketch follows below). Sequencer gives you precise control over timing and layering, which is ideal for cinematic sequences.
- Live scenario with two Live Links: For real-time full-body capture, stream body motion from a system like a Rokoko suit via Live Link Body and facial data from Live Link Face, setting both subjects in the MetaHuman’s blueprint. The body data drives the skeletal rig, including limbs and torso, while the facial data controls blendshapes and optionally the head, combining seamlessly in Unreal Engine. This setup supports live performance capture, where the MetaHuman reflects both body and face movements instantly. It’s ideal for virtual production or live performances requiring comprehensive animation.
- Manual adjustments: If neck movements from body and facial sources conflict, refine the MetaHuman’s animation blueprint by implementing a masked blend, selecting a bone like spine03 or neck01 to divide body and face influences. This ensures the torso follows body animation while the head and neck align with facial data, creating a natural transition. The MetaHuman’s default blueprint often handles this logic, but manual tweaks can enhance precision. These adjustments resolve any animation clashes, delivering a smooth, integrated character performance.
In practice, blending is usually smooth: the MetaHuman rig was designed for this. Sequencer and blueprint settings ensure cohesive, realistic character performances with minimal effort.
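To make the layering idea concrete, here is a purely conceptual sketch in plain Python (it is not the MetaHuman AnimBP, just the logic it implements): the body animation provides the baseline, the incoming facial curves override the face, and the head channels stay with whichever source you chose. The curve names are illustrative.

```python
# Conceptual sketch of face-over-body layering with a head-source switch.
def blend_frame(body_curves: dict[str, float],
                face_curves: dict[str, float],
                head_from_face: bool) -> dict[str, float]:
    result = dict(body_curves)   # start from the body/baked clip
    result.update(face_curves)   # facial blendshapes always win
    if not head_from_face:
        # keep the body clip's head values when body mocap owns the head
        for key in ("headYaw", "headPitch", "headRoll"):
            if key in body_curves:
                result[key] = body_curves[key]
    return result

print(blend_frame({"jawOpen": 0.0, "headYaw": 0.1},
                  {"jawOpen": 0.7, "headYaw": 0.4},
                  head_from_face=False))
```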

Are there alternatives to Live Link Face for animating MetaHumans?
Depending on your needs or available hardware, here are some alternatives:
- MetaHuman Animator (Epic): Epic’s MetaHuman Animator processes recorded video from an iPhone or stereo headcam to generate high-fidelity facial animations for MetaHumans, capturing subtle nuances that Live Link Face may miss. Unlike Live Link Face, it’s not real-time, requiring post-processing, but delivers near film-quality results, making it ideal for cinematic projects needing polished outputs. This tool leverages advanced computation to enhance animation detail, suitable for high-end productions. It’s a powerful option for creators prioritizing visual fidelity over immediate performance capture.
- Other ARKit face capture apps: Third-party iOS apps like Face Cap, iFacialMocap, or Motion Live by Reallusion use ARKit to track the same 52 blendshapes as Live Link Face, offering alternative interfaces or export options. These apps can stream live to Unreal or export animations as FBX files for MetaHuman use, providing flexibility in workflows. Some, like iFacialMocap, include Unreal plugins for direct integration, similar to Live Link Face. They cater to users needing specific features or cross-software compatibility, such as Blender or Maya pipelines.
- Professional Facial Mocap Systems: Systems like Faceware or Dynamixyz use single or multi-camera setups, often with markers or machine learning, to capture facial motion with high precision, streaming to Unreal via dedicated Live Link plugins. These systems excel in professional environments, handling occlusions and offering detailed tracking, but require significant investment and setup compared to an iPhone. MetaHumans are fully compatible, with Epic providing documentation for integrations such as Faceware workflows. They’re ideal for high-budget projects demanding maximum accuracy and robustness in complex scenarios.
- Webcam or AI-based solutions: Emerging tools like OpenSeeFace use standard webcams to track facial landmarks, while Nvidia’s Audio2Face generates animations from audio input, both requiring custom setups to drive MetaHumans in Unreal. OpenSeeFace offers an open-source, budget-friendly option, though less robust than ARKit, while Audio2Face automates lip sync for basic dialogue animation. These solutions are experimental but accessible for users without iPhones, leveraging AI to approximate facial tracking. They suit quick drafts or projects with limited hardware, though integration may demand technical expertise.
- Manual and Blendshape Libraries: Traditional animation techniques, such as keyframing with MetaHuman control rigs or using blendshape libraries like the MetaHuman Pose Asset, allow manual facial animation without mocap hardware. This labor-intensive approach offers precise artistic control, ideal for detailed lip sync or custom expressions via mouth charting techniques. It’s suitable for projects where mocap isn’t feasible or where specific stylistic choices are needed. Combining manual animation with automated tools can enhance efficiency for small-scale or highly tailored animations.
- Blender and FaceIT: For Blender users, the FaceIT addon enables retargeting of ARKit data recorded by Live Link Face or similar apps, allowing animation refinement before export to Unreal for MetaHumans. Record facial data, import the .CSV into FaceIT, tweak the animation, and export as an FBX or animation asset for Unreal integration. This pipeline suits creators comfortable with Blender’s tools, offering a flexible way to enhance mocap data. It’s particularly useful for cross-software workflows requiring detailed animation adjustments.
Each alternative has its pros/cons:
- MetaHuman Animator: Delivers top-tier fidelity through post-processed iPhone or headcam video, ideal for cinematic polish but not suited for real-time applications. It enhances recorded performances, capturing fine details like subtle lip movements for high-end results. This makes it a go-to for final renders in films or games. Its processing time is a trade-off for superior quality, appealing to perfectionists.
- Faceware/Marker-based: Captures unique details, such as cheek puffs or precise eye darts, and handles occlusions like beards, but requires complex, costly setups with cameras or markers. It’s tailored for professional studios with budgets for specialized equipment and trained crews. MetaHumans integrate seamlessly, making it viable for high-budget projects. Its precision suits blockbuster games or films needing ultimate realism.
- Other ARKit apps: Match Live Link Face’s accuracy since they use the same ARKit technology, but offer varied interfaces or export options like FBX for broader software compatibility. They provide flexibility for users working across Unreal, Blender, or Maya, with some including Unreal plugins. These apps are ideal for creators seeking specific workflow conveniences. Their functionality aligns closely with Live Link Face, with minor UI-driven advantages.
- Audio2Face/AI: Automates lip sync from audio or tracks faces via webcam, offering budget-friendly, experimental options for non-iPhone users, though less accurate than ARKit-based solutions. Integration with MetaHumans requires custom setups, often via Live Link or scripting, limiting plug-and-play ease. It’s suitable for quick drafts or dialogue-driven animations with minimal setup. Its AI-driven approach appeals to innovative, low-cost workflows.
- Manual animation: Provides full artistic control through keyframing or blendshape manipulation, allowing precise expression crafting without hardware dependency, but demands significant time and skill. It’s ideal for small projects or scenes requiring unique stylistic touches, like exaggerated expressions. This method suits animators with expertise in Unreal’s control rigs or traditional animation techniques. It’s labor-intensive but offers unmatched precision for bespoke results.
- Blender/FaceIT: Enhances ARKit data with Blender’s robust animation tools, enabling detailed tweaks before exporting to Unreal, ideal for cross-software pipelines. It supports creators who prefer Blender’s interface for refining mocap data, adding flexibility to MetaHuman workflows. This approach requires familiarity with Blender and FBX export processes. It’s a powerful option for those integrating Unreal with other 3D software.
Live Link Face remains accessible and effective, but alternatives like MetaHuman Animator or Faceware offer high-end or non-iPhone options, while ARKit apps provide flexibility.

How does Live Link Face compare to professional motion capture systems?
Live Link Face offers a cost-effective, accessible facial capture solution using an iPhone, contrasting with professional systems’ complexity and precision. It’s quick to set up, requiring only an app, but lacks the nuanced detail of high-end rigs, which capture subtle movements and handle occlusions better. Professional systems are ideal for blockbuster productions, while Live Link Face suits real-time and indie needs. Hybrid approaches can combine both for optimal workflows.
Here’s the comparison:
- Cost and Accessibility: Live Link Face is free and requires only an iPhone, which many creators already own, making it highly accessible for indie developers, streamers, or small studios starting out. Professional facial mocap systems, such as optical rigs or head-mounted cameras, cost tens of thousands of dollars and often require specialized crews for operation. This stark cost difference democratizes facial capture, allowing a broader range of creators to animate MetaHumans without significant investment. Live Link Face’s affordability and simplicity lower the barrier to entry for high-quality animation workflows.
- Setup and Ease: Setting up Live Link Face takes just minutes—launch the app, connect to Unreal Engine, and start capturing, with no need for facial markers or elaborate hardware calibration. Professional systems demand extensive preparation, including placing markers on the actor’s face, calibrating multiple cameras, and ensuring optimal lighting conditions, which can take hours. This ease of use makes Live Link Face ideal for rapid iteration, spontaneous creative sessions, or environments with limited technical resources. It supports quick, marker-free workflows that prioritize speed and accessibility over complex configurations.
- Accuracy and Fidelity: High-end professional mocap systems excel at capturing subtle facial details, such as individual wrinkles, precise tongue motions, and micro-movements of the eyes, often at higher framerates like 90-120 FPS for smoother fast-motion capture. Live Link Face, relying on ARKit’s 52 blendshapes, provides very good but less precise results, approximating expressions rather than simulating exact skin dynamics, which can soften subtle asymmetries like uneven lip movements. While sufficient for most real-time applications, it falls short of the nuanced fidelity required for blockbuster film close-ups. Professional systems remain the gold standard for ultimate realism in high-stakes productions.
- Robustness: Professional systems are more robust, handling challenging conditions like extreme facial contortions, stage makeup, or partial face occlusions (e.g., beards) by using multiple camera angles or marker-based tracking for consistent reference points. Live Link Face’s ARKit technology can struggle with obstructions like heavy facial hair, large glasses, or poor lighting, leading to jittery or degraded tracking, particularly in the lower face. This makes professional setups more reliable for complex scenarios or actors with unique facial features. Live Link Face requires careful environmental control to maintain optimal tracking performance.
- When it matters: For live applications, such as virtual production or streaming, Live Link Face’s slight fidelity loss is rarely noticeable, as audiences focus on the overall performance, and its real-time capabilities shine. In high-end film or game VFX, where every micro-expression counts, professional systems deliver the precision needed for photorealistic results, often requiring less cleanup. The choice depends on whether your project prioritizes immediacy or ultimate realism, with Live Link Face excelling in fast-paced, accessible workflows. It’s ideal for previews or projects where “good enough” meets creative goals effectively.
- Hybrid approaches: Some productions use Live Link Face for on-set previews due to its speed and wireless setup, capturing instant feedback, while recording high-quality head-cam footage for later processing with professional solvers. This hybrid method combines the immediacy of Live Link Face with the precision of pro systems, optimizing both production speed and final quality. It allows teams to visualize performances live and refine them in post for hero shots, balancing cost and fidelity. Such approaches are increasingly common in virtual production pipelines seeking efficiency and excellence.
Live Link Face is ideal for quick, affordable animation, while professional systems deliver unmatched precision for high-end needs. Combining both can optimize production pipelines effectively.

Where can I find tutorials on using Live Link Face with Metahuman in UE5?
There are plenty of resources to help you learn Live Link Face with MetaHumans:
- Epic Games Official Documentation: The Epic Developer website has step-by-step guides. A great starting point is the Animating MetaHumans with Live Link documentation, which covers setup and troubleshooting in detail. Another useful page is Recording Face Animation on iOS Device, which covers calibration and recording on the device. Epic’s docs are thorough and kept up to date for UE5.
- Unreal Engine Forums (Epic Developer Community): The community forums have Q&A and tutorials. For example, you’ll find threads where users discuss iPhone model compatibility, network setups, and share tips. Epic staff and devs sometimes post mini-tutorials or answers (like the firewall and local network permission checklist we referenced). Search the forums for “Live Link Face MetaHuman” and you’ll get lots of hits, including community tutorials in the Learning portal.
- YouTube Tutorials: There are many video tutorials by community creators. For instance, the YouTube channel pinkpocketTV has a simple UE5 Live Link Face setup tutorial (even referenced in a Medium article by a developer). Official Unreal Engine channels also have live streams or archived videos demonstrating MetaHumans and Live Link Face. Watching a tutorial can be very helpful to see the process in action, from configuring the app to seeing the MetaHuman respond.
- Medium Articles & Blogs: Some users have written articles (on Medium or personal blogs) about their experience. The Metahumans 101 Medium post by @deaconline is an example that walks through setting up a MetaHuman in UE5. It even touches on Live Link Face and MetaHuman Animator in a practical way. Blogs often include little real-world tips that official docs might not (like how to use a “Capture Source” asset in UE5.3 for MetaHuman Animator workflow, etc.).
- Community Tutorial Sites: Websites like 80.lv, CGSociety, or RealTimeVFX feature articles or interviews with creators. These often detail MetaHuman facial animation workflows indirectly. For instance, 80.lv may cover a short film using MetaHumans. Such pieces provide tutorial-like insights.
- Blender Communities (if using FaceIT): For Blender pipelines, the FaceIT documentation offers guidance on working with ARKit data, and communities such as BlenderArtists and Reddit (r/blenderhelp) discuss iPhone facial capture workflows. Much of this applies to MetaHumans via FBX exports, making these resources useful for cross-software users.
- Official Unreal Online Learning: Epic’s free online learning courses may include MetaHuman content; check the Learning Portal for courses on virtual production or MetaHumans. A Faceware facial capture course already exists there, and similar material may cover Live Link Face.

Frequently Asked Questions
- Is the Live Link Face app free to use? A: Yes. Live Link Face is a free app on the Apple App Store, requiring only a compatible iPhone or iPad. Epic Games provides it as a no-cost companion for Unreal Engine facial capture, with no additional fees, and it’s designed for seamless integration with UE projects.
- Can I use Live Link Face without an iPhone (e.g., an Android phone or webcam)? A: No. Live Link Face is iOS-only and relies on Apple ARKit’s face tracking; there is no Android version. Android or webcam users must turn to separate solutions such as Faceware or third-party “face mocap” apps, which are not compatible with Live Link Face. An iPhone or iPad is required for this app.
- Which iPhone models are compatible with Live Link Face? A: iPhones with a TrueDepth (Face ID) camera are compatible, meaning the iPhone X and later, including the XR, XS, 11, 12, 13, and 14 families and their Pro models. Older iPhones (8 or earlier) and the iPhone SE (2nd gen) lack TrueDepth. An iPad Pro (3rd generation) or later also works. These devices provide the depth data needed for accurate facial tracking.
- Does Live Link Face require Unreal Engine 5, or will it work with UE4? A: Live Link Face works with Unreal Engine 4.25 and above, including 4.26, 4.27, and all UE5 versions. MetaHumans themselves require UE 4.26+ or UE5. For UE5 users, enabling the necessary plugins ensures compatibility; the app is fully supported in modern UE projects.
- Do I need to wear any markers or special equipment on my face? A: No markers or special equipment are needed. Live Link Face uses ARKit’s markerless tracking via the iPhone’s infrared depth sensor and machine learning. Setup is quick, requiring only the device positioned in front of you, in contrast to traditional mocap head rigs.
- Does Live Link Face track eye movement and blinking? A: Yes. Live Link Face tracks eye rotation (approximating gaze) and captures blinks, winks, brow movement, mouth shapes, and jaw motion. It doesn’t track exact 3D gaze, and tongue tracking is limited to a simple tongue-out shape rather than detailed tongue position. The MetaHuman mirrors these movements via the ARKit blendshapes.
- Is there any noticeable delay (latency) when using Live Link Face? A: Latency is minimal, around 2-3 frames (~50 ms), so the MetaHuman’s face moves almost simultaneously with yours and speech feels nearly instant. Network lag can add a little delay; a fast Wi-Fi network or a wired connection keeps it low.
- Does Live Link Face record audio as well? A: Live Link Face records audio in its .MOV takes for reference, but it doesn’t stream audio to Unreal Engine in real time. The iPhone’s microphone only captures sound during recording, so plan on a separate microphone for live performances.
- Can I drive a non-MetaHuman character with Live Link Face? A: Yes, but it requires manual setup. Live Link Face sends ARKit data, which MetaHumans consume natively; for a custom character, you need an Animation Blueprint and morph targets (or curves) that match the ARKit blendshape names. A Live Link remap asset or Epic’s Face AR Sample project helps with non-MetaHuman use; see the conceptual name-mapping sketch after this FAQ.
- What’s the difference between Live Link Face and MetaHuman Animator? A: Live Link Face streams real-time facial data for live puppeteering, with slightly less detail. MetaHuman Animator (introduced in 2023) processes iPhone or head-mounted camera footage offline to produce high-fidelity animation for polished cinematics, and it uses the Live Link Face app for capture on iPhone. Both leverage the iPhone camera but serve different workflow needs.
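To make the non-MetaHuman answer above more concrete, here is a purely conceptual Python sketch of the name remapping involved: ARKit curve names (a subset of the 52 blendshapes Live Link Face streams) map to a custom character’s morph target names. The custom names are hypothetical, and in practice this mapping lives in an Unreal Live Link remap asset or Animation Blueprint rather than in Python.

```python
# Conceptual sketch only. The left-hand names are standard ARKit blendshape
# names (capitalization of the incoming Live Link curves may differ in Unreal);
# the right-hand morph target names are hypothetical examples for a custom,
# non-MetaHuman character.
ARKIT_TO_CUSTOM = {
    "eyeBlinkLeft":    "Blink_L",
    "eyeBlinkRight":   "Blink_R",
    "jawOpen":         "Mouth_Open",
    "mouthSmileLeft":  "Smile_L",
    "mouthSmileRight": "Smile_R",
    "browInnerUp":     "Brows_Inner_Up",
}

def remap_curves(arkit_values: dict) -> dict:
    """Translate one frame of ARKit curve values (0.0-1.0) to custom morph names."""
    return {
        ARKIT_TO_CUSTOM[name]: value
        for name, value in arkit_values.items()
        if name in ARKIT_TO_CUSTOM
    }

# Example frame of incoming data; unmapped curves (here "cheekPuff") are dropped.
frame = {"eyeBlinkLeft": 0.8, "jawOpen": 0.25, "cheekPuff": 0.1}
print(remap_curves(frame))  # {'Blink_L': 0.8, 'Mouth_Open': 0.25}
```

Driving the morph targets each frame from the remapped values is what the Animation Blueprint (or the Face AR Sample’s setup) handles inside Unreal.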
Conclusion
Live Link Face transforms MetaHuman animation by offering real-time facial capture with just an iPhone and Unreal Engine 5. By removing the need for costly equipment, it empowers creators at every level: solo creators can embody digital avatars for streams or videos, indie teams can pre-visualize scenes by driving CG characters with authentic actor performances, and larger studios gain rapid iteration and live performance capture on set. Because the app plugs directly into Epic’s MetaHuman facial rig, the results are realistic and immediate across games, cinematics, and live content. In short, Live Link Face levels the playing field, letting creators focus on performance and storytelling rather than technical overhead.
For real-time uses such as live shows, virtual influencers, or interactive NPCs, Live Link Face shines: its responsiveness supports improvisational performances, with MetaHumans mirroring expressions instantly. For cinematic quality, performances can be recorded and refined, and pairing Live Link Face with tools like MetaHuman Animator gets the best of both. A proper setup remains vital: use a compatible device, enable the required plugins, keep the network reliable, and calibrate the capture carefully. Good lighting and a stable phone mount further improve fidelity, and for complex shots, blending techniques (recording, tweaking curves, or adding hand-keyed animation) improves the final result. These practical steps deliver polished results across project types.
Finally, always test your full pipeline before you’re in a critical production situation. Do a dry run of recording audio, capturing face, blending with body animation, etc., so you know the workflow and any quirks. Once it’s all set, you’ll find this approach to facial animation remarkably empowering. It enables human emotion to breathe life into digital characters in real time – a process that’s not only efficient but also a lot of fun. Happy animating!

Sources and Citations
- Epic Games – Animating MetaHumans with Live Link (Official Documentation), dev.epicgames.com
- Epic Games – Live Link Face iOS App Blog Announcement (2020), unrealengine.com
- Epic Developer Forum – iPhone models compatible with Live Link Face (TrueDepth cameras), forums.unrealengine.com
- Epic Developer Forum – User feedback on Live Link Face vs traditional mocap, forums.unrealengine.com
- Epic Games – Recording Face Animation on iOS (Unreal Engine Docs), dev.epicgames.com
- Epic Games – MetaHuman Animator Announcement Blog (2023), unrealengine.com
- FaceIT Documentation (Blender) – ARKit and Live Link Face overview, faceit-doc.readthedocs.io
- Epic Developer Forum – Networking & Troubleshooting Live Link (Firewall and Permissions), forums.unrealengine.com
- MacRumors – Article on Live Link Face app leveraging ARKit (overview), macrumors.com
- Medium Article by deacōnline – “Metahumans 101” (Community tutorial with Live Link Face steps), medium.com