Face Estimation
- Age & Gender
- Quality
- Liveness (2D and 3D)
- Emotions
- Presence of a Mask on the Face
- State of the eyes (open/closed)
Age & Gender
Note: If you need to estimate age and gender on a video stream, see Estimation of age, gender, and emotions in the section Video Stream Processing.
To estimate age and gender, create the AgeGenderEstimator object by calling the FacerecService.createAgeGenderEstimator method and passing a configuration file.
Currently, three configuration files are available:
- age_gender_estimator.xml – the first implementation of the AgeGenderEstimator interface.
- age_gender_estimator_v2.xml – an improved version of the AgeGenderEstimator interface that provides higher accuracy of age and gender estimation, given that you follow the Guidelines for Cameras.
- age_gender_estimator_v3.xml – an improved age and gender estimation algorithm, so far available on Windows x86 64-bit or Linux x86 64-bit systems only.
With AgeGenderEstimator you can estimate the age and gender of a captured face using the AgeGenderEstimator.estimateAgeGender method. The result is the AgeGenderEstimator.AgeGender structure that contains the age (in years), the age group (AgeGenderEstimator.Age), and the gender (AgeGenderEstimator.Gender).
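For reference, here is a minimal C++ sketch of the whole flow, modeled on the SDK demo samples. The library path, configuration directory, capturer config, and the OpenCV-based image loading are assumptions you will need to adapt; the exact AgeGender field names may differ slightly between SDK versions.

```cpp
#include <facerec/import.h>
#include <facerec/libfacerec.h>
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Adjust the library path and config directory to your distribution.
    const pbio::FacerecService::Ptr service = pbio::FacerecService::createService(
        "../lib/libfacerec.so",
        "../conf/facerec");

    // The estimator works on a RawSample, so detect the face first.
    const pbio::Capturer::Ptr capturer =
        service->createCapturer("common_capturer4_fda.xml");

    const pbio::AgeGenderEstimator::Ptr age_gender_estimator =
        service->createAgeGenderEstimator("age_gender_estimator_v2.xml");

    const cv::Mat frame = cv::imread("face.jpg");  // hypothetical input image
    const pbio::RawImage image(
        frame.cols, frame.rows, pbio::RawImage::FORMAT_BGR, frame.data);

    for (const pbio::RawSample::Ptr &sample : capturer->capture(image))
    {
        const pbio::AgeGenderEstimator::AgeGender result =
            age_gender_estimator->estimateAgeGender(*sample);

        std::cout << "age (years): " << result.age_years
                  << ", age group: " << static_cast<int>(result.age)
                  << ", gender: "    << static_cast<int>(result.gender) << std::endl;
    }
}
```

The later sketches in this section reuse the service and sample objects created here.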
You can learn how to estimate Age & Gender in an image in our tutorial Estimating Age, Gender, and Emotions.
Note: Gender Estimation can also be performed through the Processing Block API - see Gender Estimation.
Quality
At the moment there are two quality estimation interfaces: QualityEstimator and FaceQualityEstimator.
- QualityEstimator provides discrete grades of quality for flare, lighting, noise, and sharpness.
- FaceQualityEstimator provides quality as a single real value that aggregates sample usability for face recognition (i.e. pose, occlusion, noise, blur, and lighting), which is very useful for comparing samples of one person from video tracking.
QualityEstimator
To create the QualityEstimator object, call the FacerecService.createQualityEstimator method and pass a configuration file. Currently, two configuration files are available:
- quality_estimator.xml – the first implementation of the QualityEstimator interface.
- quality_estimator_iso.xml (recommended) – an improved version of the QualityEstimator interface that provides higher accuracy of quality estimation.
With QualityEstimator you can estimate the quality of a captured face using the QualityEstimator.estimateQuality method. The result is the QualityEstimator.Quality structure that contains the estimated flare, lighting, noise, and sharpness levels.
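A short sketch, assuming the service and sample objects from the age and gender example; the Quality field names follow the description above and may vary by SDK version:

```cpp
// Assumes `service` and `sample` as in the previous examples.
const pbio::QualityEstimator::Ptr quality_estimator =
    service->createQualityEstimator("quality_estimator_iso.xml");

const pbio::QualityEstimator::Quality quality =
    quality_estimator->estimateQuality(*sample);

// Each field is a discrete grade (higher is better).
std::cout << "flare: "       << quality.flare
          << ", lighting: "  << quality.lighting
          << ", noise: "     << quality.noise
          << ", sharpness: " << quality.sharpness << std::endl;
```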
FaceQualityEstimator
To create the FaceQualityEstimator object, call the FacerecService.createFaceQualityEstimator method and pass the configuration file. Currently, there is only one configuration file available: face_quality_estimator.xml.
With FaceQualityEstimator you can estimate the quality of a captured face using the FaceQualityEstimator.estimateQuality method. The result is a real number (the greater the number, the higher the quality) that aggregates sample usability for face recognition.
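A corresponding sketch, with the same assumptions as above:

```cpp
// Assumes `service` and `sample` as in the previous examples.
const pbio::FaceQualityEstimator::Ptr face_quality_estimator =
    service->createFaceQualityEstimator("face_quality_estimator.xml");

// A single aggregate usability value: the greater, the higher the quality.
const float face_quality = face_quality_estimator->estimateQuality(*sample);
std::cout << "face quality: " << face_quality << std::endl;
```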
Liveness
The main purpose of liveness estimation is to prevent spoofing attacks (using a photo of a person instead of a real face). Currently, you can estimate liveness in three ways: by processing a depth map, by processing an IR image, or by processing an RGB image from your camera. You can also estimate liveness using Active Liveness, which presupposes that a user has to perform a sequence of certain actions.
You can learn how to estimate liveness of a face in our tutorial Liveness Detection.
DepthLivenessEstimator
To estimate liveness with a depth map, create the DepthLivenessEstimator object using the FacerecService.createDepthLivenessEstimator method.
The following configuration files are available:
- depth_liveness_estimator.xml – the first implementation (not recommended; kept only for backward compatibility).
- depth_liveness_estimator_cnn.xml – an implementation based on neural networks (recommended; used in VideoWorker by default).
To use this algorithm, you need to obtain synchronized and registered frames (a color image + a depth map), use the color image for face tracking/detection, and pass the corresponding depth map to the DepthLivenessEstimator.estimateLiveness method. You'll get one of the following results:
- DepthLivenessEstimator.NOT_ENOUGH_DATA – Too many missing depth values on the depth map.
- DepthLivenessEstimator.REAL – The observed face belongs to a living person.
- DepthLivenessEstimator.FAKE – The observed face is taken from a photo.
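The sketch below shows the call shape, assuming a sample tracked on the color frame and a registered depth frame. The DepthMapRaw fields are sensor-specific, so they are left as a comment rather than filled with made-up values.

```cpp
// Assumes `service` and `sample` (tracked on the color image) as above.
const pbio::DepthLivenessEstimator::Ptr depth_liveness =
    service->createDepthLivenessEstimator("depth_liveness_estimator_cnn.xml");

pbio::DepthMapRaw depth_map;
// ... fill in the DepthMapRaw fields here: depth frame size, the mapping
// (offset/scale) from the depth map to the color image, depth units, and
// the pointer to the raw depth data from your sensor.

const pbio::DepthLivenessEstimator::Liveness verdict =
    depth_liveness->estimateLiveness(*sample, depth_map);

if (verdict == pbio::DepthLivenessEstimator::REAL)
    std::cout << "real face" << std::endl;
else if (verdict == pbio::DepthLivenessEstimator::FAKE)
    std::cout << "spoofing attempt" << std::endl;
else
    std::cout << "not enough depth data" << std::endl;
```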
IRLivenessEstimator
To estimate liveness using an infrared image from a camera, create the IRLivenessEstimator object using the FacerecService.createIRLivenessEstimator method. Currently, only one configuration file is available: ir_liveness_estimator_cnn.xml (an implementation based on neural networks). To use this algorithm, get color frames from the camera in addition to the IR frames.
To get the estimation result, call the IRLivenessEstimator.estimateLiveness method. You'll get one of the following results:
- IRLivenessEstimator.Liveness.NOT_ENOUGH_DATA – Too many missing values in the IR image.
- IRLivenessEstimator.Liveness.REAL – The observed face belongs to a living person.
- IRLivenessEstimator.Liveness.FAKE – The observed face is taken from a photo.
Liveness2DEstimator
To estimate liveness with an RGB image, create the Liveness2DEstimator object using the FacerecService.createLiveness2DEstimator method.
Currently, three configuration files are available:
- liveness_2d_estimator.xml – the first implementation (not recommended; kept only for backward compatibility).
- liveness_2d_estimator_v2.xml – an accelerated and improved version of the module.
- liveness_2d_estimator_v3.xml – liveness estimation with several additional checks, such as face presence, face frontality, and image quality.
Two methods can be used to obtain the estimation result: Liveness2DEstimator.estimateLiveness and Liveness2DEstimator.estimate.
1. Liveness2DEstimator.estimateLiveness. This method returns a Liveness2DEstimator.Liveness object.
1.1. The liveness_2d_estimator.xml and liveness_2d_estimator_v2.xml configurations return one of the following results:
- Liveness2DEstimator.Liveness.NOT_ENOUGH_DATA – Not enough data to make a decision.
- Liveness2DEstimator.Liveness.REAL – The observed face belongs to a living person.
- Liveness2DEstimator.Liveness.FAKE – The observed face is taken from a photo.
1.2. The liveness_2d_estimator_v3.xml configuration returns one of the following results:
- Liveness2DEstimator.Liveness.REAL – The observed face belongs to a living person.
- Liveness2DEstimator.Liveness.FAKE – The observed face is taken from a photo.
- Liveness2DEstimator.Liveness.IN_PROCESS – Liveness estimation cannot be performed.
- Liveness2DEstimator.Liveness.NO_FACES – There are no faces on the input image.
- Liveness2DEstimator.Liveness.MANY_FACES – There is more than one face on the input image.
- Liveness2DEstimator.Liveness.FACE_OUT – The observed face is out of the input image boundaries.
- Liveness2DEstimator.Liveness.FACE_TURNED_RIGHT – The observed face is not frontal and is turned right.
- Liveness2DEstimator.Liveness.FACE_TURNED_LEFT – The observed face is not frontal and is turned left.
- Liveness2DEstimator.Liveness.FACE_TURNED_UP – The observed face is not frontal and is turned up.
- Liveness2DEstimator.Liveness.FACE_TURNED_DOWN – The observed face is not frontal and is turned down.
- Liveness2DEstimator.Liveness.BAD_IMAGE_LIGHTING – The input image has bad lighting conditions.
- Liveness2DEstimator.Liveness.BAD_IMAGE_NOISE – The input image is too noisy.
- Liveness2DEstimator.Liveness.BAD_IMAGE_BLUR – The input image is too blurry.
- Liveness2DEstimator.Liveness.BAD_IMAGE_FLARE – The input image is too flared.
2. Liveness2DEstimator.estimate. This method returns a Liveness2DEstimator.LivenessAndScore object that contains the following fields:
- liveness – an object of the Liveness2DEstimator.Liveness class/structure (see above).
- score – a numeric value in the range from 0 to 1 indicating the probability that the face belongs to a real person (for liveness_2d_estimator.xml, only 0 or 1).
Both methods take a RawSample object as input. Examples are available in the demo sample (C++/C#/Android).
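As a sketch of both calls, under the same assumptions as the earlier examples:

```cpp
// Assumes `service` and `sample` as in the previous examples.
const pbio::Liveness2DEstimator::Ptr liveness_2d =
    service->createLiveness2DEstimator("liveness_2d_estimator_v2.xml");

// Variant 1: verdict only.
const pbio::Liveness2DEstimator::Liveness verdict =
    liveness_2d->estimateLiveness(*sample);

// Variant 2: verdict plus a confidence score in [0, 1].
const pbio::Liveness2DEstimator::LivenessAndScore result =
    liveness_2d->estimate(*sample);

if (result.liveness == pbio::Liveness2DEstimator::Liveness::REAL)
    std::cout << "real face, score = " << result.score << std::endl;
```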
Note: Liveness Estimation can also be performed through the Processing Block API - see 2D RGB Liveness Estimation.
Timing Characteristics (ms)
| Version | Core i7 4.5 GHz (Single-Core) | Google Pixel 3 |
|---|---|---|
| liveness_2d_estimator.xml | 250 | 126 (GPU) / 550 (CPU) |
| liveness_2d_estimator_v2.xml | 10 | 20 |
Quality Metrics
| Dataset | TAR@FAR=1e-2 |
|---|---|
| CASIA Face Anti-spoofing | 0.99 |
Active Liveness
This type of liveness estimation presupposes that a user needs to perform certain actions, for example, "turn the head", "blink", etc. Estimation is performed through the VideoWorker object based on the video stream. See the detailed description in Video Stream Processing.
Emotions
Note: If you need to estimate emotions on a video stream, see Estimation of age, gender, and emotions in the section Video Stream Processing.
To estimate emotions, create the EmotionsEstimator object using the FacerecService.createEmotionsEstimator method and pass the configuration file.
Currently, there are two configuration files available:
- emotions_estimator.xml – allows estimating four emotions: happy, surprised, neutral, angry.
- emotions_estimator_v2.xml – allows estimating seven emotions: happy, surprised, neutral, angry, disgusted, sad, scared.
With the EmotionsEstimator object you can estimate the emotion of a captured face using the EmotionsEstimator.estimateEmotions method. The result is a vector of EmotionsEstimator.EmotionConfidence elements, each containing an emotion with a confidence value.
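A short sketch, assuming the same service and sample objects as in the earlier examples; the EmotionConfidence field names (emotion, confidence) follow the description above and may differ slightly between SDK versions.

```cpp
// Assumes `service` and `sample` as in the previous examples.
const pbio::EmotionsEstimator::Ptr emotions_estimator =
    service->createEmotionsEstimator("emotions_estimator_v2.xml");

// One element per estimated emotion, each with a confidence value.
const std::vector<pbio::EmotionsEstimator::EmotionConfidence> emotions =
    emotions_estimator->estimateEmotions(*sample);

for (const auto &ec : emotions)
    std::cout << "emotion " << static_cast<int>(ec.emotion)
              << ": " << ec.confidence << std::endl;
```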
Note: Emotion Estimation can also be performed through the Processing Block API - see Emotion Estimation.
FaceAttributesEstimator
This class is a universal module for estimating face attributes. To get the result, call the FaceAttributesEstimator.estimate(RawSample) method. The result is an Attribute object that contains the following fields:
- score – the probability that a person has the required attribute, a value from 0 to 1 (if the value is -1, this field is not available for the specified type of estimation).
- verdict – whether the person has the required attribute, a boolean value (true/false).
- mask_attribute – an object of the FaceAttributesEstimator.FaceAttributes.Attribute class/structure, which contains one of the following values:
  - NOT_COMPUTED – no estimation was made;
  - NO_MASK – a face without a mask;
  - HAS_MASK – a face with a mask.
- left_eye_state, right_eye_state – objects of the FaceAttributesEstimator.FaceAttributes.EyeStateScore class/structure, which contain the score attribute and the EyeState structure with one of the following values:
  - NOT_COMPUTED – no estimation was made;
  - CLOSED – the eye is closed;
  - OPENED – the eye is open.
Presence of a Mask on the Face
To check the presence of a mask on the face, use the FaceAttributesEstimator with the face_mask_estimator.xml configuration file. It returns the score, verdict, and mask_attribute fields in the Attribute object (see the usage sketch after the next section).
An improved mask estimation algorithm is available with the face_mask_estimator_v2.xml configuration file, so far on Windows x86 64-bit or Linux x86 64-bit systems only.
State of the Eyes (Open/Closed)
To check the state of the eyes (open/closed), use the FaceAttributesEstimator with the eyes_openness_estimator_v2.xml configuration file. It returns the left_eye_state and right_eye_state fields in the Attribute object.
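Below is a minimal combined sketch for both checks, under the same assumptions as the earlier examples (an existing service and a detected sample). The Attribute field names follow the description above; the exact nesting of the enums may differ between SDK versions, so treat this as an illustration rather than a definitive implementation.

```cpp
// Assumes `service` and `sample` as in the previous examples.

// Mask presence check.
const pbio::FaceAttributesEstimator::Ptr mask_estimator =
    service->createFaceAttributesEstimator("face_mask_estimator.xml");
const auto mask_attr = mask_estimator->estimate(*sample);
std::cout << "has mask: " << mask_attr.verdict
          << " (score " << mask_attr.score << ")" << std::endl;

// Eye state check (open/closed).
const pbio::FaceAttributesEstimator::Ptr eyes_estimator =
    service->createFaceAttributesEstimator("eyes_openness_estimator_v2.xml");
const auto eyes_attr = eyes_estimator->estimate(*sample);

// Each EyeStateScore holds a score and an EyeState value
// (NOT_COMPUTED / CLOSED / OPENED).
std::cout << "left eye score: "   << eyes_attr.left_eye_state.score
          << ", right eye score: " << eyes_attr.right_eye_state.score << std::endl;
```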