Version: 3.9.0

Face Estimation

Age & Gender

Note: If you need to estimate age and gender on a video stream, see Estimation of age, gender, and emotions in the section Video Stream Processing.

For age and gender estimation, create the AgeGenderEstimator object by calling the FacerecService.createAgeGenderEstimator method and passing the configuration file.

Currently, two configuration files are available:

  • age_gender_estimator.xml - first implementation of the AgeGenderEstimator interface
  • age_gender_estimator_v2.xml - improved version of the AgeGenderEstimator interface, which provides higher accuracy of age and gender estimation given that you follow Guidelines for Cameras

With AgeGenderEstimator you can estimate the age and gender of a captured face using AgeGenderEstimator.estimateAgeGender. The result is the AgeGenderEstimator.AgeGender struct containing the age (in years), the age group (AgeGenderEstimator.Age), and the gender (AgeGenderEstimator.Gender). See the example of using the AgeGenderEstimator in demo.cpp.
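Below is a minimal C++ sketch of this flow. It assumes that the FacerecService (service) and a captured RawSample (sample) have already been created as in demo.cpp; the AgeGender field names (age_years, age, gender) follow the SDK's C++ samples and should be checked against your SDK version.

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// service and sample are assumed to be created elsewhere (see demo.cpp).
void printAgeGender(const pbio::FacerecService::Ptr service,
                    const pbio::RawSample::Ptr sample)
{
    // Create the estimator from one of the available configuration files.
    const pbio::AgeGenderEstimator::Ptr age_gender_estimator =
        service->createAgeGenderEstimator("age_gender_estimator_v2.xml");

    // Estimate age and gender of the captured face.
    const pbio::AgeGenderEstimator::AgeGender result =
        age_gender_estimator->estimateAgeGender(*sample);

    // result.age_years - age in years, result.age - age group,
    // result.gender - gender (field names assumed from the C++ samples).
    std::cout << "age (years): " << result.age_years << std::endl;
}
```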

Learn how to estimate Age & Gender in an image in our tutorial Estimating Age, Gender, and Emotions.

Quality

At the moment, there are two quality estimation interfaces: QualityEstimator and FaceQualityEstimator.

  • QualityEstimator provides discrete grade of quality for flare, lighting, noise and sharpness.
  • FaceQualityEstimator provides quality as a single real value that aggregates sample usability for face recognition (i.e. pose, occlusion, noise, blur and lighting), which is very useful for comparing samples of one person from video tracking.

QualityEstimator

To create the QualityEstimator object, call the FacerecService.createQualityEstimator method by passing the configuration file. Currently, two configuration files are available:

  • quality_estimator.xml – first implementation of the QualityEstimator interface
  • quality_estimator_iso.xml (recommended) – improved version of the QualityEstimator interface; provides higher accuracy of quality estimation

With QualityEstimator you can estimate the quality of a captured face using QualityEstimator.estimateQuality. The result is the QualityEstimator.Quality structure that contains estimated flare, lighting, noise, and sharpness level.

See the example of using the QualityEstimator in demo.cpp.
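A minimal C++ sketch, assuming service and sample are already created as in demo.cpp; the Quality field names (flare, lighting, noise, sharpness) mirror the description above and should be checked against your SDK headers.

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

void printQuality(const pbio::FacerecService::Ptr service,
                  const pbio::RawSample::Ptr sample)
{
    // The recommended configuration file.
    const pbio::QualityEstimator::Ptr quality_estimator =
        service->createQualityEstimator("quality_estimator_iso.xml");

    // Discrete grades for flare, lighting, noise and sharpness.
    const pbio::QualityEstimator::Quality quality =
        quality_estimator->estimateQuality(*sample);

    std::cout << "flare: "       << quality.flare
              << "  lighting: "  << quality.lighting
              << "  noise: "     << quality.noise
              << "  sharpness: " << quality.sharpness << std::endl;
}
```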

FaceQualityEstimator

To create the FaceQualityEstimator object, call the FacerecService.createFaceQualityEstimator method by passing the configuration file. Currently, there is only one configuration file available, which is face_quality_estimator.xml. With FaceQualityEstimator you can estimate the quality of a captured face using FaceQualityEstimator.estimateQuality. This results in a real number (the greater it is, the higher the quality), which aggregates sample usability for face recognition. See the example of using the FaceQualityEstimator in demo.cpp.
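A minimal C++ sketch under the same assumptions (service and sample created as in demo.cpp):

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

void printFaceQuality(const pbio::FacerecService::Ptr service,
                      const pbio::RawSample::Ptr sample)
{
    const pbio::FaceQualityEstimator::Ptr face_quality_estimator =
        service->createFaceQualityEstimator("face_quality_estimator.xml");

    // A single real value: the greater it is, the better the sample
    // is suited for face recognition.
    const float quality = face_quality_estimator->estimateQuality(*sample);

    std::cout << "face quality: " << quality << std::endl;
}
```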

Liveness

The main purpose of liveness estimation is to prevent spoofing attacks (using a photo of a person instead of a real face). Currently, you can estimate liveness in one of three ways: by processing a depth map, an IR image, or an RGB image from your camera. You can also estimate liveness using Active Liveness, which requires a user to perform a sequence of certain actions.

Learn how to estimate liveness of a face in our tutorial Liveness Detection.

DepthLivenessEstimator

To estimate liveness with a depth map, you should create the DepthLivenessEstimator object using FacerecService.createDepthLivenessEstimator.

The following configuration files are available:

  • depth_liveness_estimator.xml – the first implementation (not recommended; used only for backward compatibility);
  • depth_liveness_estimator_cnn.xml – implementation based on neural networks (recommended, used in VideoWorker by default).

To use this algorithm, you need synchronized and registered frames (a color image and a depth map): use the color image for face tracking/detection and pass the corresponding depth map to the DepthLivenessEstimator.estimateLiveness method.

The DepthLivenessEstimator.estimateLiveness method returns one of the following results:

  • DepthLivenessEstimator.NOT_ENOUGH_DATA – too many missing depth values on the depth map.
  • DepthLivenessEstimator.REAL – the observed face belongs to a living person.
  • DepthLivenessEstimator.FAKE – the observed face is taken from a photo.
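A minimal C++ sketch of this flow, assuming the face has been detected on the color frame and the registered depth frame is available as pbio::DepthMapRaw (the parameter type and the exact estimateLiveness signature should be verified against your SDK headers):

```cpp
#include <facerec/libfacerec.h>

// sample is detected/tracked on the color frame; depth_map is the
// synchronized and registered depth frame for the same moment in time.
bool isRealByDepth(const pbio::FacerecService::Ptr service,
                   const pbio::RawSample::Ptr sample,
                   const pbio::DepthMapRaw &depth_map)
{
    const pbio::DepthLivenessEstimator::Ptr liveness_estimator =
        service->createDepthLivenessEstimator("depth_liveness_estimator_cnn.xml");

    const pbio::DepthLivenessEstimator::Liveness verdict =
        liveness_estimator->estimateLiveness(*sample, depth_map);

    // REAL / FAKE / NOT_ENOUGH_DATA
    return verdict == pbio::DepthLivenessEstimator::REAL;
}
```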

IRLivenessEstimator

To estimate liveness using an infrared image from a camera, you should create the IRLivenessEstimator object using the FacerecService.createIRLivenessEstimator method. Currently, only one configuration file is available – ir_liveness_estimator_cnn.xml (implementation based on neural networks). To use this algorithm, you have to get color frames from the camera in addition to the IR frames.

To get an estimated result, you can call the IRLivenessEstimator.estimateLiveness method. You will get one of the following results:

  • IRLivenessEstimator.Liveness.NOT_ENOUGH_DATA – too many missing values in the IR image.
  • IRLivenessEstimator.Liveness.REAL – the observed face belongs to a living person.
  • IRLivenessEstimator.Liveness.FAKE – the observed face is taken from a photo.
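A minimal C++ sketch under the same assumptions; the IR frame type (pbio::IRFrameRaw) and the exact estimateLiveness signature are assumptions to verify against your SDK headers:

```cpp
#include <facerec/libfacerec.h>

// sample is detected on the color frame; ir_frame is the IR frame
// registered with it (type name assumed).
bool isRealByIR(const pbio::FacerecService::Ptr service,
                const pbio::RawSample::Ptr sample,
                const pbio::IRFrameRaw &ir_frame)
{
    const pbio::IRLivenessEstimator::Ptr liveness_estimator =
        service->createIRLivenessEstimator("ir_liveness_estimator_cnn.xml");

    const pbio::IRLivenessEstimator::Liveness verdict =
        liveness_estimator->estimateLiveness(*sample, ir_frame);

    // REAL / FAKE / NOT_ENOUGH_DATA
    return verdict == pbio::IRLivenessEstimator::Liveness::REAL;
}
```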

Liveness2DEstimator

To estimate liveness with an RGB image, create the Liveness2DEstimator object using the FacerecService.createLiveness2DEstimator method. Currently, two configuration files are available:

  • liveness_2d_estimator.xml – the first implementation (not recommended; used only for backward compatibility)
  • liveness_2d_estimator_v2.xml – an accelerated and improved version of the module (recommended)

Two methods can be used to obtain the evaluation result:

  • Liveness2DEstimator.estimateLiveness. This method returns a Liveness2DEstimator.Liveness object. The result will be one of the following:
    • Liveness2DEstimator.Liveness.NOT_ENOUGH_DATA – not enough data to make a decision
    • Liveness2DEstimator.Liveness.REAL – the observed face belongs to a living person
    • Liveness2DEstimator.Liveness.FAKE – the observed face is taken from a photo
  • Liveness2DEstimator.estimate. This method returns a Liveness2DEstimator.LivenessAndScore object that contains the following fields:
    • liveness - object of the Liveness2DEstimator.Liveness class/structure (see above)
    • score – the probability that the face belongs to a living person (for liveness_2d_estimator.xml this field is not available; a value of 0 or 1 is returned depending on the liveness attribute)

Both methods take a RawSample object as input. Examples are available in the demo sample (C++/C#/Android).
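A minimal C++ sketch showing both calls, assuming service and sample are created as in the demo sample:

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

void check2DLiveness(const pbio::FacerecService::Ptr service,
                     const pbio::RawSample::Ptr sample)
{
    const pbio::Liveness2DEstimator::Ptr liveness_estimator =
        service->createLiveness2DEstimator("liveness_2d_estimator_v2.xml");

    // Variant 1: verdict only (REAL / FAKE / NOT_ENOUGH_DATA).
    const pbio::Liveness2DEstimator::Liveness verdict =
        liveness_estimator->estimateLiveness(*sample);

    // Variant 2: verdict together with the probability score.
    const pbio::Liveness2DEstimator::LivenessAndScore result =
        liveness_estimator->estimate(*sample);

    std::cout << "verdict: " << verdict
              << "  score: " << result.score << std::endl;
}
```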

Note: the LivenessEstimator object in Face SDK C++/C#/Java API is deprecated.

Timing Characteristics (ms)

Version | Core i7 4.5 GHz (Single-Core) | Google Pixel 3
liveness_2d_estimator.xml | 250 | 126 (GPU) / 550 (CPU)
liveness_2d_estimator_v2.xml | 10 | 20

Quality metrics

Dataset | TAR@FAR=1e-2
CASIA Face Anti-spoofing | 0.99

Active Liveness

This type of liveness estimation presupposes that a user has to perform certain actions, for example, "turn your head", "blink", etc. Estimation is performed through the VideoWorker object based on the video stream. See the detailed description in Video Stream Processing.

Emotions

Note: If you need to estimate emotions on a video stream, see Estimation of age, gender, and emotions in the section Video Stream Processing.

To estimate emotions, create the EmotionsEstimator object using FacerecService.createEmotionsEstimator and pass the configuration file. Currently, there is only one configuration file, which is emotions_estimator.xml.
With the EmotionsEstimator object you can estimate the emotion of a captured face using the EmotionsEstimator.estimateEmotions method. The result is a vector with the EmotionsEstimator.EmotionConfidence elements containing emotions with a confidence value. See the example of using the EmotionsEstimator object in demo.cpp.
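A minimal C++ sketch, assuming service and sample are created as in demo.cpp; the EmotionConfidence field names (emotion, confidence) are taken from the C++ samples and should be verified against your SDK version:

```cpp
#include <iostream>
#include <vector>
#include <facerec/libfacerec.h>

void printEmotions(const pbio::FacerecService::Ptr service,
                   const pbio::RawSample::Ptr sample)
{
    const pbio::EmotionsEstimator::Ptr emotions_estimator =
        service->createEmotionsEstimator("emotions_estimator.xml");

    // One EmotionConfidence element per estimated emotion.
    const std::vector<pbio::EmotionsEstimator::EmotionConfidence> emotions =
        emotions_estimator->estimateEmotions(*sample);

    for (const auto &ec : emotions)
        std::cout << "emotion id: " << ec.emotion
                  << "  confidence: " << ec.confidence << std::endl;
}
```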

FaceAttributesEstimator

This class is a universal module for estimating face attributes. To run an estimation, call the FaceAttributesEstimator.estimate(RawSample) method. The result is an Attribute object that contains the following attributes:

  • score – the probability that a person has the required attribute, a value from 0 to 1 (a value of -1 means this field is not available for the given type of estimation)

  • verdict – whether a person has the required attribute, a boolean value (true/false)

  • mask – an object of the class/structure FaceAttributesEstimator.FaceAttributes.Attribute, which contains the following values:

    • NOT_COMPUTED – no estimation was made
    • NO_MASK – face without a mask
    • HAS_MASK – face with a mask
  • left_eye_state, right_eye_state – objects of the class/structure FaceAttributesEstimator.FaceAttributes.EyeStateScore, which contain the score attribute and the EyeState structure with the following values:

    • NOT_COMPUTED – no estimation was made
    • CLOSED – the eye is closed
    • OPENED – the eye is open

Presence of a Mask on the Face

To check the presence of a mask on the face, use the FaceAttributesEstimator in conjunction with the face_mask_estimator.xml configuration file. This returns the score, verdict, and mask attributes in the Attribute object.
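A minimal C++ sketch; the createFaceAttributesEstimator call is an assumption, made by analogy with the other estimator factory methods in this section:

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

void checkMask(const pbio::FacerecService::Ptr service,
               const pbio::RawSample::Ptr sample)
{
    // Factory method name assumed by analogy with the other estimators.
    const pbio::FaceAttributesEstimator::Ptr mask_estimator =
        service->createFaceAttributesEstimator("face_mask_estimator.xml");

    const pbio::FaceAttributesEstimator::Attribute attr =
        mask_estimator->estimate(*sample);

    // verdict - boolean presence of a mask, score - probability from 0 to 1;
    // attr.mask additionally holds NO_MASK / HAS_MASK / NOT_COMPUTED.
    std::cout << std::boolalpha
              << "has mask: " << attr.verdict
              << "  score: " << attr.score << std::endl;
}
```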

State of the Eyes (Open/Closed)

To check the state of the eyes (open/closed), use the FaceAttributesEstimator in conjunction with the eyes_openness_estimator.xml configuration file. This returns the left_eye_state and right_eye_state attributes in the Attribute object.
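A minimal C++ sketch under the same assumptions as above (factory method name assumed):

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

void checkEyes(const pbio::FacerecService::Ptr service,
               const pbio::RawSample::Ptr sample)
{
    const pbio::FaceAttributesEstimator::Ptr eyes_estimator =
        service->createFaceAttributesEstimator("eyes_openness_estimator.xml");

    const pbio::FaceAttributesEstimator::Attribute attr =
        eyes_estimator->estimate(*sample);

    // Each EyeStateScore carries a score; the OPENED / CLOSED / NOT_COMPUTED
    // state is stored in its EyeState field (field name may vary by version).
    std::cout << "left eye score: "    << attr.left_eye_state.score
              << "  right eye score: " << attr.right_eye_state.score << std::endl;
}
```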