Version: 3.22.0 (latest)

Face estimation

info

Face SDK provides the Processing Block API, a new scalable interface designed to replace the existing API in the future. The API described in this section is scheduled to be deprecated in 2024.

Age and gender

To estimate age and gender, follow the steps below:

  1. Create the AgeGenderEstimator object by calling the FacerecService.createAgeGenderEstimator method and specify the name of a configuration file as an argument.

Currently, three configuration files are available:

  • age_gender_estimator.xml: the first implementation of the AgeGenderEstimator interface;
  • age_gender_estimator_v2.xml: the improved version of the AgeGenderEstimator interface, which provides higher accuracy of age and gender estimation, provided that you follow the Guidelines for Cameras;
  • age_gender_estimator_v3.xml: the improved age and gender estimation algorithm, available on Windows x86 64-bit, Linux x86 64-bit and Android systems.
  2. To estimate the age and gender of a captured face, use the AgeGenderEstimator.estimateAgeGender method. The method returns the AgeGenderEstimator.AgeGender structure, which contains gender (AgeGenderEstimator.Gender), age group (AgeGenderEstimator.Age), and age in years (a float number).

Available age groups:

  • KID (0-18);
  • YOUNG (18-37);
  • ADULT (37-55);
  • SENIOR (55+).
// create AgeGenderEstimator object
const pbio::AgeGenderEstimator::Ptr age_gender_estimator = service->createAgeGenderEstimator("age_gender_estimator.xml");

// detect faces
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate age & gender
    const pbio::AgeGenderEstimator::AgeGender age_gender = age_gender_estimator->estimateAgeGender(*samples[i]);
}
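The returned structure can be consumed directly. Below is a minimal sketch of printing the result; the age, gender, and age_years field names and the enum spellings (AGE_KID, GENDER_FEMALE, etc.) follow the C++ API, but verify them against the headers of your SDK version.

// inside the loop over samples, after estimateAgeGender
// (requires #include <iostream>):
const char* age_group =
    age_gender.age == pbio::AgeGenderEstimator::AGE_KID   ? "KID" :
    age_gender.age == pbio::AgeGenderEstimator::AGE_YOUNG ? "YOUNG" :
    age_gender.age == pbio::AgeGenderEstimator::AGE_ADULT ? "ADULT" : "SENIOR";

std::cout << "gender: " << (age_gender.gender == pbio::AgeGenderEstimator::GENDER_FEMALE ? "female" : "male")
          << ", age group: " << age_group
          << ", age: " << age_gender.age_years << " years" << std::endl;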
See the example of using the AgeGenderEstimator in demo.cpp.

You can learn how to estimate Age & Gender in an image in our tutorial.

info

To estimate age & gender through the Processing Block API, see Face Estimation.

Emotions

To estimate emotions in a face image, follow the steps below:

  1. Create the EmotionsEstimator object using FacerecService.createEmotionsEstimator and pass the name of a configuration file as an argument.

Currently, there are two available configuration files:

  • emotions_estimator.xml: allows estimating four emotions: happy, surprised, neutral, angry.
  • emotions_estimator_v2.xml: allows estimating seven emotions: happy, surprised, neutral, angry, disgusted, sad, scared.
  2. To estimate the emotions of a captured face, call the EmotionsEstimator.estimateEmotions method. The result is an array of elements of type EmotionsEstimator.EmotionConfidence, each containing an emotion name and a confidence value.
// create EmotionsEstimator object
const pbio::EmotionsEstimator::Ptr emotions_estimator = service->createEmotionsEstimator("emotions_estimator.xml");

// detect faces
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate emotions
    const std::vector<pbio::EmotionsEstimator::EmotionConfidence> emotions = emotions_estimator->estimateEmotions(*samples[i]);
}
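A common pattern is to treat the entry with the highest confidence as the dominant emotion. A minimal sketch, assuming the emotion and confidence field names from the C++ API:

// inside the loop over samples, after estimateEmotions:
if(!emotions.empty())
{
    size_t best = 0;
    for(size_t j = 1; j < emotions.size(); ++j)
    {
        if(emotions[j].confidence > emotions[best].confidence)
            best = j;
    }
    // emotions[best].emotion is the dominant emotion,
    // emotions[best].confidence its confidence in [0, 1]
}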
See the example of using the EmotionsEstimator in demo.cpp.
info

To estimate emotions through the Processing Block API, see Emotion Estimation.

tip

If you need to estimate age, gender and emotions on a video stream, see Estimation of age, gender, and emotions in the section Video Stream Processing.

Quality

At the moment there are two quality estimation classes: QualityEstimator and FaceQualityEstimator.

  • QualityEstimator provides discrete quality grades for flare, lighting, noise, and sharpness.
  • FaceQualityEstimator provides quality as a single real value that aggregates sample usability for face recognition (i.e. pose, occlusion, noise, blur, and lighting), which is very useful for comparing the quality of images from video tracking.
info

To estimate image quality through the Processing Block API, see the Quality Assessment section.

QualityEstimator

  1. Create the QualityEstimator object by calling the FacerecService.createQualityEstimator method and specify the configuration file as an argument. Currently, two configuration files are available:

    • quality_estimator.xml: the first implementation of the QualityEstimator quality estimation interface.
    • quality_estimator_iso.xml (recommended): the improved version of the QualityEstimator quality estimation interface, provides higher accuracy of quality estimation.
  2. To estimate the quality of a captured face, use QualityEstimator.estimateQuality. The method returns the QualityEstimator.Quality structure, which contains the estimated flare, lighting, noise, and sharpness levels.

// create QualityEstimator object
const pbio::QualityEstimator::Ptr quality_estimator = service->createQualityEstimator("quality_estimator_iso.xml");

// detect faces
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate quality
    const pbio::QualityEstimator::Quality quality = quality_estimator->estimateQuality(*samples[i]);
}
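The individual grades can then be inspected, for example to discard samples that are too noisy or blurry for enrollment. A minimal sketch, assuming the Quality structure exposes flare, lighting, noise, and sharpness as integer fields (verify against the headers of your SDK version):

// inside the loop over samples, after estimateQuality
// (requires #include <iostream>):
std::cout << "flare: "      << quality.flare
          << ", lighting: " << quality.lighting
          << ", noise: "    << quality.noise
          << ", sharpness: " << quality.sharpness << std::endl;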
See the example of using the QualityEstimator in demo.cpp.

FaceQualityEstimator

  1. Create the FaceQualityEstimator object by calling the FacerecService.createFaceQualityEstimator method. Pass the face_quality_estimator.xml configuration file as an argument.

  2. To estimate the quality of a captured face, use the FaceQualityEstimator.estimateQuality method. The result is a real number (possibly negative; the greater the value, the higher the quality) that aggregates flare, lighting, noise, and sharpness.

// create FaceQualityEstimator object
const pbio::FaceQualityEstimator::Ptr face_quality_estimator = service->createFaceQualityEstimator("face_quality_estimator.xml");

// detect faces
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate quality; the result is a single float value
    const float face_quality = face_quality_estimator->estimateQuality(*samples[i]);
}
See the example of using the FaceQualityEstimator in demo.cpp.
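Because the result is a single comparable value, a typical use is selecting the best sample from a set, e.g. from one video track. A minimal sketch:

#include <limits>

// pick the sample with the highest aggregated quality
int best_index = -1;
float best_quality = -std::numeric_limits<float>::infinity();
for(size_t i = 0; i < samples.size(); ++i)
{
    const float q = face_quality_estimator->estimateQuality(*samples[i]);
    if(q > best_quality)
    {
        best_quality = q;
        best_index = static_cast<int>(i);
    }
}
// if any faces were captured, samples[best_index] is the most recognition-friendly sample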

Liveness

Liveness technology is widely used to prevent spoofing attacks using a printed face image, a photo or video of a face from the screens of mobile devices and monitors, as well as various kinds of masks (paper, silicone, etc.).

Currently, you can estimate liveness in three ways: by processing a depth map, an IR image, or an RGB image from your camera.

You can also estimate liveness using Active Liveness, which requires the user to perform a sequence of certain actions.

To learn how to estimate face liveness, see our tutorial Liveness Detection.

info

Liveness technology works correctly only with raw data, i.e. images received directly from the camera. If an image was edited with external software before being submitted to the liveness estimator (for example, retouched), correct liveness estimation is not guaranteed, in accordance with ISO/IEC 30107-1:2016.

note

To estimate liveness through the Processing Block API, see Liveness Estimation.

DepthLivenessEstimator

  1. To estimate liveness with a depth map, create the DepthLivenessEstimator object using FacerecService.createDepthLivenessEstimator. Pass one of the available configuration files as an argument:
  • depth_liveness_estimator.xml: the first implementation (not recommended; used only for backward compatibility).
  • depth_liveness_estimator_cnn.xml: implementation based on neural networks (recommended, used in VideoWorker by default).
  2. Call the DepthLivenessEstimator.estimateLiveness method and pass a sample and a depth map as arguments. To use this algorithm, you need to obtain synchronized and registered frames (color image + depth map) and use the color image for face tracking/detection.

You'll get one of the following results:

  • DepthLivenessEstimator.NOT_ENOUGH_DATA: too many missing depth values on the depth map.
  • DepthLivenessEstimator.REAL: the observed face belongs to a real person.
  • DepthLivenessEstimator.FAKE: the observed face is taken from a photo.
// create DepthLivenessEstimator object
const pbio::DepthLivenessEstimator::Ptr depth_liveness_estimator = service->createDepthLivenessEstimator("depth_liveness_estimator_cnn.xml");

// detect faces in the color image
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate liveness; depth_map is the depth frame registered
    // with the color image, obtained from your depth sensor
    const pbio::DepthLivenessEstimator::Liveness depth_liveness = depth_liveness_estimator->estimateLiveness(*samples[i], depth_map);
}
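The returned value can then be handled according to the outcomes listed above, for example (a minimal sketch; the enum values are scoped at class level, as in the list above):

// inside the loop over samples:
switch(depth_liveness)
{
case pbio::DepthLivenessEstimator::NOT_ENOUGH_DATA:
    // too many missing depth values; try another frame
    break;
case pbio::DepthLivenessEstimator::REAL:
    // the observed face belongs to a real person
    break;
case pbio::DepthLivenessEstimator::FAKE:
    // spoofing attempt, e.g. a photo or a screen
    break;
}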

IRLivenessEstimator

  1. To estimate liveness using an infrared image from a camera, create the IRLivenessEstimator object using the FacerecService.createIRLivenessEstimator method. Currently, only one configuration file is available – ir_liveness_estimator_cnn.xml (implementation based on neural networks). To use this algorithm, you need to get color frames from the camera in addition to the IR frames.

  2. To get an estimated result, call the IRLivenessEstimator.estimateLiveness method. Pass sample and ir_frame as arguments. The method will return one of the following results:

    • IRLivenessEstimator.Liveness.NOT_ENOUGH_DATA: too many missing values in the IR image.
    • IRLivenessEstimator.Liveness.REAL: the observed face belongs to a real person.
    • IRLivenessEstimator.Liveness.FAKE: the observed face is taken from a photo.
// create IRLivenessEstimator object
const pbio::IRLivenessEstimator::Ptr ir_liveness_estimator = service->createIRLivenessEstimator("ir_liveness_estimator_cnn.xml");

// detect faces in the color image
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate liveness; ir_frame is the infrared frame
    // synchronized with the color image
    const pbio::IRLivenessEstimator::Liveness ir_liveness = ir_liveness_estimator->estimateLiveness(*samples[i], ir_frame);
}

Liveness2DEstimator

  1. To estimate liveness from an RGB image, create the Liveness2DEstimator object using the FacerecService.createLiveness2DEstimator method.

Currently, three configuration files are available:

  • liveness_2d_estimator.xml: the first implementation (not recommended; used only for backward compatibility).
  • liveness_2d_estimator_v2.xml: an accelerated and improved version of the first implementation.
  • liveness_2d_estimator_v3.xml: liveness estimation with several additional checks such as face presence, face frontality and image quality.
  2. Two methods can be used to obtain the estimation result: Liveness2DEstimator.estimateLiveness and Liveness2DEstimator.estimate.

  • Liveness2DEstimator.estimateLiveness. This method returns a Liveness2DEstimator.Liveness object.

    • The liveness_2d_estimator.xml and liveness_2d_estimator_v2.xml configurations return one of the following results:

      • Liveness2DEstimator.Liveness.NOT_ENOUGH_DATA: not enough data to make a decision.
      • Liveness2DEstimator.Liveness.REAL: the observed face belongs to a real person.
      • Liveness2DEstimator.Liveness.FAKE: the observed face is taken from a photo.

    • The liveness_2d_estimator_v3.xml configuration returns one of the following results:

      • Liveness2DEstimator.Liveness.REAL: the observed face belongs to a real person.
      • Liveness2DEstimator.Liveness.FAKE: the observed face is taken from a photo.
      • Liveness2DEstimator.Liveness.IN_PROCESS: liveness estimation cannot be performed.
      • Liveness2DEstimator.Liveness.NO_FACES: there are no faces in the input image.
      • Liveness2DEstimator.Liveness.MANY_FACES: there is more than one face in the input image.
      • Liveness2DEstimator.Liveness.FACE_OUT: the observed face is out of the input image boundaries.
      • Liveness2DEstimator.Liveness.FACE_TURNED_RIGHT: the observed face is not frontal and is turned right.
      • Liveness2DEstimator.Liveness.FACE_TURNED_LEFT: the observed face is not frontal and is turned left.
      • Liveness2DEstimator.Liveness.FACE_TURNED_UP: the observed face is not frontal and is turned up.
      • Liveness2DEstimator.Liveness.FACE_TURNED_DOWN: the observed face is not frontal and is turned down.
      • Liveness2DEstimator.Liveness.BAD_IMAGE_LIGHTING: the input image has bad lighting conditions.
      • Liveness2DEstimator.Liveness.BAD_IMAGE_NOISE: the input image is too noisy.
      • Liveness2DEstimator.Liveness.BAD_IMAGE_BLUR: the input image is too blurry.
      • Liveness2DEstimator.Liveness.BAD_IMAGE_FLARE: the input image is too flared.

  • Liveness2DEstimator.estimate. This method returns a Liveness2DEstimator.LivenessAndScore object (see the sketch after the code block below) that contains the following fields:

    • liveness: an object of the Liveness2DEstimator.Liveness class/structure (see above).
    • score: a numeric value in the range from 0 to 1 indicating the probability that the face belongs to a real person (for liveness_2d_estimator.xml, only 0 or 1).
// create Liveness2DEstimator object
const pbio::Liveness2DEstimator::Ptr liveness_2d_estimator = service->createLiveness2DEstimator("liveness_2d_estimator_v2.xml");

// detect faces
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate liveness
    const pbio::Liveness2DEstimator::Liveness liveness = liveness_2d_estimator->estimateLiveness(*samples[i]);
}
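When the confidence is needed as well, call Liveness2DEstimator.estimate instead of estimateLiveness. A minimal sketch, assuming the liveness and score fields described above; the 0.5 threshold is an illustrative assumption, tune it for your use case:

// inside the loop over samples:
const pbio::Liveness2DEstimator::LivenessAndScore result = liveness_2d_estimator->estimate(*samples[i]);
if(result.liveness == pbio::Liveness2DEstimator::REAL && result.score > 0.5f) // threshold is an assumption
{
    // treat the face as belonging to a real person
}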

Examples are available in the demo sample (C++/C#/Android).

Timing Characteristics (ms)

Version                      | Core i7 4.5 GHz (single-core) | Google Pixel 3
liveness_2d_estimator.xml    | 250                           | 126 (GPU) / 550 (CPU)
liveness_2d_estimator_v2.xml | 10                            | 20

Quality metrics

Dataset                  | TAR@FAR=1e-2
CASIA Face Anti-spoofing | 0.99

Active Liveness

This type of liveness estimation requires the user to perform certain actions, for example, "turn the head", "blink", etc. Estimation is performed through the VideoWorker object based on the video stream. See the detailed description in Video Stream Processing.

FaceAttributesEstimator

This class is used to estimate the presence of a mask on the face and the state of the eyes. To get the score, call the FaceAttributesEstimator.estimate(RawSample) method. The estimation result is an Attribute object that contains the following fields:

  • score: the probability that a person has the required attribute, a value from 0 to 1 (if the value is set to -1, then this field is not available for the specified type of assessment).

  • verdict: whether the person has the required attribute, a boolean value (true/false).

  • mask_attribute: an object of the class/structure FaceAttributesEstimator.FaceAttributes.Attribute, which contains the following values:

    • NOT_COMPUTED: no estimation made.
    • NO_MASK: face without a mask.
    • HAS_MASK: masked face.
  • left_eye_state, right_eye_state: objects of the class/structure FaceAttributesEstimator.FaceAttributes.EyeStateScore, each of which contains one of the following values:

    • NOT_COMPUTED: no estimation made.
    • CLOSED: the eye is closed.
    • OPENED: the eye is open.

Mask detection

To check for the presence of a mask on the face, use the FaceAttributesEstimator together with the face_mask_estimator.xml configuration file. The result is an Attribute object with the score, verdict, and mask_attribute fields filled in.

An improved mask estimation algorithm is available via the face_mask_estimator_v2.xml configuration file; so far it is supported on Windows x86 64-bit and Linux x86 64-bit only.

// create FaceAttributesEstimator object
const pbio::FaceAttributesEstimator::Ptr face_mask_estimator = service->createFaceAttributesEstimator("face_mask_estimator_v2.xml");

// detect faces
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate mask presence
    const pbio::FaceAttributesEstimator::Attribute mask = face_mask_estimator->estimate(*samples[i]);
}
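The verdict and score fields documented above are enough for a simple check, for example (a minimal sketch):

// inside the loop over samples:
if(mask.verdict)
{
    // the face is most likely masked; mask.score holds the confidence (0..1)
}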
info

To estimate mask presence on a face through the Processing Block API, see the Mask Estimation section.

Open/closed eyes

To check the state of the eyes (open/closed), use the FaceAttributesEstimator together with the eyes_openness_estimator_v2.xml configuration file. The result is an Attribute object with the left_eye_state and right_eye_state fields filled in.

// create FaceAttributesEstimator object
const pbio::FaceAttributesEstimator::Ptr eyes_openness_estimator = service->createFaceAttributesEstimator("eyes_openness_estimator_v2.xml");

// detect faces
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);

for(size_t i = 0; i < samples.size(); ++i)
{
    // estimate the state of the eyes
    const pbio::FaceAttributesEstimator::Attribute eyes_state = eyes_openness_estimator->estimate(*samples[i]);
}
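A sketch of checking whether both eyes are open. It assumes EyeStateScore exposes its state via an eye_state field and the nesting/enum spellings below; verify the exact names against the headers of your SDK version:

// inside the loop over samples; the type nesting is an assumption,
// check your SDK headers
typedef pbio::FaceAttributesEstimator::Attribute Attribute;
const bool both_eyes_open =
    eyes_state.left_eye_state.eye_state  == Attribute::EyeStateScore::OPENED &&
    eyes_state.right_eye_state.eye_state == Attribute::EyeStateScore::OPENED;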