What is Liveness Detection?
Liveness Detection is a security technology used to verify that the person behind the camera is a real, live human being. By analyzing depth, texture, and natural movement, it confirms the user is present in real time and prevents fraud attempts that use photos, videos, or masks.

How it works
Liveness detection uses deep neural networks to distinguish between a live human being and a fake representation (a spoof). Unlike simple face matching, which only compares features, liveness detection analyzes the quality and physics of the image to ensure the user is physically present. It works by analyzing three key layers of data:
- 3D Depth & Perspective — Real faces are 3D objects with volume. The system analyzes how facial features change with distance and lighting, easily spotting flat surfaces such as phone screens or printed photos that lack natural depth.
- Skin Texture & Reflections — Human skin absorbs and scatters light in a unique way. The AI can detect the difference between real skin and the artificial reflection (“glare”) typical of plastic masks, glossy paper, or glass screens.
- Digital Artifacts & Micro-patterns — The system scans for microscopic visual “noise” that the human eye often misses. This includes pixelation from digital enlargements, moiré patterns (wavy lines that appear when photographing a screen), and unnatural borders typical of deepfakes and masks.
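As a toy illustration of the third layer: the periodic “screen line” signal behind a moiré pattern concentrates energy at one frequency, while natural texture spreads energy across many frequencies. The sketch below (a naive DFT in pure Python, not the production detector) makes that difference visible:

```python
import math
import random

def dominant_periodicity(row):
    """Return the peak normalized magnitude over the non-DC DFT bins
    of a 1-D luminance row. A strong periodic pattern (moiré / screen
    lines) yields one pronounced peak; irregular texture does not."""
    n = len(row)
    mean = sum(row) / n
    centered = [v - mean for v in row]
    peak = 0.0
    for k in range(1, n // 2):  # skip the DC bin, scan half the spectrum
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(centered))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(centered))
        peak = max(peak, math.hypot(re, im) / n)
    return peak

# A perfectly periodic "screen stripe" row vs. irregular texture:
stripes = [128 + 60 * math.sin(2 * math.pi * 8 * i / 64) for i in range(64)]
rng = random.Random(0)
texture = [rng.randint(68, 188) for _ in range(64)]
print(dominant_periodicity(stripes) > dominant_periodicity(texture))  # True
```

A real pipeline would run this kind of spectral analysis in two dimensions over local patches, but the principle is the same: replayed screens betray themselves through regular micro-patterns.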
Types of Spoofing Attacks
Fraudsters use presentation attacks to trick biometric systems; the defenses against them are known as Presentation Attack Detection (PAD). These attacks fall into two main groups.

2D Spoofing (Flat Fakes)
The most common and easiest type of attack. Criminals use flat surfaces to impersonate a face, and these surfaces lack the natural depth and 3D volume of a real human.
- Printed Photos — High-resolution photographs held in front of the camera.
- Screen Replays — Displaying a photo or playing a pre-recorded video on a smartphone, tablet, or monitor.
- Paper Masks — Photos of faces with eyes and mouth cut out, worn by the attacker.
- Deepfakes — AI-generated videos that mimic facial expressions, displayed on a screen.
3D Spoofing (Physical Fakes)
Sophisticated attacks where fraudsters create physical objects to mimic human depth and volume.
- 3D Masks — Custom-made masks from silicone, latex, or hard plastic.
- 3D Printed Models — Hard resin busts or statues printed from a 3D scan of a face.
- Mannequins & Dolls — Realistic wax or silicone heads designed to look human.
Liveness Detection Methods
Liveness checks work separately on Face and Document steps. You can enable them individually to suit your specific flow.
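To make the independent toggles concrete, here is a hypothetical configuration sketch. The field names are illustrative assumptions, not the actual SDK schema:

```python
# Hypothetical configuration -- field names are illustrative, not a real API.
# Face and document liveness checks toggle independently of each other.
verification_flow = {
    "face": {
        "liveness": "passive",   # assumed values: "passive", "active", or "off"
    },
    "document": {
        "liveness": True,        # screen-replay / photocopy checks on the ID photo
    },
}
print(verification_flow["face"]["liveness"])  # passive
```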
Passive Liveness
Best for: Maximum conversion rates and a seamless user experience.

Passive liveness works in the background using advanced AI. It requires no specific action from the user — they simply look at the camera, and the system analyzes the image instantly.
- How it works — The user takes a standard selfie. The AI analyzes the single image for biometric signals such as skin texture, micro-reflections, and depth cues.
- Security — It detects digital artifacts (pixelation, screen lines) that reveal printed photos, screens, or masks.
- User Experience — Completely invisible to the user. No complex instructions, resulting in faster onboarding.
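The passive flow above can be sketched as a score-fusion rule over the three analysis layers. The signal names, weights, and thresholds below are illustrative assumptions, not the real model internals:

```python
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    """Scores in [0, 1] from the three analysis layers
    (illustrative names; the production model outputs are internal)."""
    depth: float       # 3D volume / perspective consistency
    texture: float     # skin reflectance vs. plastic/paper glare
    artifacts: float   # absence of moiré, pixelation, deepfake borders

def passive_liveness_decision(s: LivenessSignals, threshold: float = 0.7) -> bool:
    """Toy fusion rule: every layer must clear a floor, and the
    weighted combination must clear the overall threshold."""
    if min(s.depth, s.texture, s.artifacts) < 0.3:
        return False  # one strong spoof cue is enough to reject outright
    score = 0.4 * s.depth + 0.35 * s.texture + 0.25 * s.artifacts
    return score >= threshold

live = LivenessSignals(depth=0.9, texture=0.85, artifacts=0.95)
screen_replay = LivenessSignals(depth=0.1, texture=0.6, artifacts=0.2)
print(passive_liveness_decision(live), passive_liveness_decision(screen_replay))  # True False
```

The point of the sketch is that the user never sees any of this: one selfie in, one accept/reject decision out.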
Active Liveness (3D)
Best for: High-risk scenarios requiring user interaction.

Active liveness requires the user to perform specific, randomized actions to prove they are present in a 3D environment. This “challenge-response” method verifies that the user is not a pre-recorded video or a deepfake.
- How it works — The user follows real-time prompts to interact with the camera:
- Moving the device closer (verifies 3D volume and depth)
- Randomized head rotation (verifies distinct profile views)
- Lighting analysis (analyzing how light reflects off the skin during movement)
- Security — By requiring depth interaction (zooming/turning), this method blocks sophisticated attacks like screen replays, deepfake puppets, and realistic masks.
- User Experience — Adds a secure verification step, reassuring users that their identity is protected.
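The challenge-response loop can be sketched in a few lines. The prompt names and session logic are illustrative assumptions; the key property is that prompts are randomized per session, so a pre-recorded video cannot anticipate them:

```python
import random

CHALLENGES = ["move_closer", "turn_left", "turn_right", "look_up"]

def issue_challenges(rng: random.Random, count: int = 3) -> list:
    """Pick a randomized, non-repeating challenge sequence for this session.
    Randomization is what defeats replayed videos: the attacker cannot
    know the prompts in advance."""
    return rng.sample(CHALLENGES, count)

def verify_session(expected: list, observed: list) -> bool:
    """The session passes only if every prompt was performed, in order."""
    return observed == expected

rng = random.Random(42)
prompts = issue_challenges(rng)
# A pre-recorded video replays a fixed action sequence and fails:
replayed = ["turn_left", "turn_left", "turn_left"]
print(verify_session(prompts, prompts), verify_session(prompts, replayed))  # True False
```

In production each observed action would itself be validated by the vision model (did the head actually turn left?), not just matched by label.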
Why verification might be rejected
Factors affecting performance
While Passive Liveness is highly advanced, certain environmental and device factors can impact performance or cause false rejections.

Environmental Factors:
- Lighting Conditions — Extreme backlighting (silhouette effect) or heavy shadows can obscure facial details needed for analysis.
- Image Quality — Blur, lack of focus, or camera movement prevents the AI from analyzing skin texture.
- Positioning — Extreme angles or holding the device too far away reduces the pixel density of the face.
- “Beauty Mode” / Face Enhancers — Many smartphones enable skin smoothing or beauty filters by default. These filters remove natural skin texture, causing the AI to misinterpret the real face as a plastic mask. Users should disable these filters.
- Camera Quality — Older devices with low-resolution cameras may not capture enough detail for accurate analysis.
- Rooted/Compromised Devices — Rooted or jailbroken devices are often flagged as high-risk, which may trigger a rejection regardless of image quality.
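Several of these factors can be caught on-device before submission, sparing the user a round-trip rejection. The sketch below shows two cheap pre-checks, a brightness bound for backlighting/shadows and a Laplacian-variance blur test, with illustrative thresholds (not vendor-calibrated values):

```python
def capture_quality_ok(gray, min_mean=40, max_mean=220, min_sharpness=25.0):
    """Cheap client-side pre-checks before submitting a selfie.
    `gray` is a 2-D list of 0-255 luminance values; thresholds are
    illustrative assumptions, not calibrated production values."""
    h, w = len(gray), len(gray[0])
    mean = sum(sum(row) for row in gray) / (h * w)
    if not (min_mean <= mean <= max_mean):
        return False  # too dark (heavy shadows) or blown out (backlighting)
    # Variance of a 4-neighbour Laplacian: low variance indicates blur.
    lap = [
        gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x]
        for y in range(1, h - 1) for x in range(1, w - 1)
    ]
    m = sum(lap) / len(lap)
    sharpness = sum((v - m) ** 2 for v in lap) / len(lap)
    return sharpness >= min_sharpness

# A checkerboard is maximally sharp; a flat gray frame is maximally blurred:
sharp = [[0 if (x + y) % 2 else 255 for x in range(16)] for y in range(16)]
flat = [[128] * 16 for _ in range(16)]
print(capture_quality_ok(sharp), capture_quality_ok(flat))  # True False
```

Rejecting a frame locally and asking the user to adjust lighting or hold still is far cheaper than a failed server-side liveness attempt.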