Face Estimation
Age & Gender
Note: If you need to estimate age and gender on a video stream, see Estimation of age, gender, and emotions in the section Video Stream Processing.
For age and gender estimation, create the AgeGenderEstimator class by calling the FacerecService.createAgeGenderEstimator method, providing the configuration file.
Currently, two configuration files are available:
- age_gender_estimator.xml – the first implementation of the AgeGenderEstimator interface
- age_gender_estimator_v2.xml – an improved version of the AgeGenderEstimator interface that provides higher accuracy of age and gender estimation, provided that you follow the Guidelines for Cameras
With AgeGenderEstimator you can estimate the age and gender of a captured face using AgeGenderEstimator.estimateAgeGender. The result is the AgeGenderEstimator.AgeGender struct, which contains age (in years), age group (AgeGenderEstimator.Age), and gender (AgeGenderEstimator.Gender). See the example of using AgeGenderEstimator in demo.cpp.
Learn how to estimate Age & Gender in an image in our tutorial Estimating Age, Gender, and Emotions.
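Putting the steps above together, a minimal C++ sketch might look as follows. This is not runnable without the Face SDK; the pbio namespace, the Ptr aliases, the library/config paths, and the AgeGender field names (age_years, age, gender) are the author's assumptions based on the Face SDK C++ API, so verify them against your SDK version:

```cpp
#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Create the service, a capturer for face detection, and the estimator.
// The library and config paths are placeholders for your installation.
pbio::FacerecService::Ptr service =
    pbio::FacerecService::createService("../lib/libfacerec.so", "../conf/facerec");
pbio::Capturer::Ptr capturer = service->createCapturer("common_capturer4_fda.xml");
pbio::AgeGenderEstimator::Ptr estimator =
    service->createAgeGenderEstimator("age_gender_estimator_v2.xml");

// `image` is a pbio::RawImage prepared beforehand (e.g. a BGR frame).
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(image);
for (const pbio::RawSample::Ptr &sample : samples)
{
    const pbio::AgeGenderEstimator::AgeGender result =
        estimator->estimateAgeGender(*sample);
    // result.age_years - age in years, result.age - age group, result.gender
}
```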
Quality
At the moment, there are two quality estimation interfaces: QualityEstimator and FaceQualityEstimator.
- QualityEstimator provides discrete grades of quality for flare, lighting, noise, and sharpness.
- FaceQualityEstimator provides quality as a single real value that aggregates sample usability for face recognition (i.e. pose, occlusion, noise, blur, and lighting), which is very useful for comparing samples of one person from video tracking.
QualityEstimator
To create the QualityEstimator object, call the FacerecService.createQualityEstimator method, passing the configuration file. Currently, two configuration files are available:
- quality_estimator.xml – the first implementation of the QualityEstimator interface
- quality_estimator_iso.xml (recommended) – an improved version of the QualityEstimator interface that provides higher accuracy of quality estimation
With QualityEstimator you can estimate the quality of a captured face using QualityEstimator.estimateQuality. The result is the QualityEstimator.Quality structure, which contains the estimated flare, lighting, noise, and sharpness levels. See the example of using QualityEstimator in demo.cpp.
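A short C++ sketch of this call is shown below. It assumes `service` (a pbio::FacerecService::Ptr) and `sample` (a pbio::RawSample::Ptr) were created beforehand, as in demo.cpp; the Quality field names follow the Face SDK C++ API as far as the author knows:

```cpp
#include <iostream>

// Assumes `service` and `sample` were created beforehand, as in demo.cpp.
pbio::QualityEstimator::Ptr quality_estimator =
    service->createQualityEstimator("quality_estimator_iso.xml");

const pbio::QualityEstimator::Quality quality =
    quality_estimator->estimateQuality(*sample);

// Each field holds a discrete quality grade for its aspect.
std::cout << "flare: "      << quality.flare
          << " lighting: "  << quality.lighting
          << " noise: "     << quality.noise
          << " sharpness: " << quality.sharpness << std::endl;
```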
FaceQualityEstimator
To create the FaceQualityEstimator object, call the FacerecService.createFaceQualityEstimator method, passing the configuration file. Currently, only one configuration file is available: face_quality_estimator.xml. With FaceQualityEstimator you can estimate the quality of a captured face using FaceQualityEstimator.estimateQuality. The result is a single real number (the greater the value, the higher the quality) that aggregates sample usability for face recognition. See the example of using FaceQualityEstimator in demo.cpp.
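As a sketch, assuming `service` and `sample` were created as in demo.cpp and that the method returns a float (an assumption to verify against your API reference):

```cpp
// Assumes `service` and `sample` were created beforehand, as in demo.cpp.
pbio::FaceQualityEstimator::Ptr face_quality_estimator =
    service->createFaceQualityEstimator("face_quality_estimator.xml");

// A single real value: the greater it is, the more usable
// the sample is for face recognition.
const float face_quality = face_quality_estimator->estimateQuality(*sample);
```

Because the value is comparable across samples of one person, it can be used to pick the best frame from a video track before recognition.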
Liveness
The main purpose of liveness estimation is to prevent spoofing attacks (using a photo of a person instead of a real face). Currently, you can estimate liveness in one of three ways: by processing a depth map, an IR image, or an RGB image from your camera.
Learn how to estimate liveness of a face in our tutorial Liveness Detection.
DepthLivenessEstimator
To estimate liveness with a depth map, create the DepthLivenessEstimator object using FacerecService.createDepthLivenessEstimator.
The following configuration files are available:
- depth_liveness_estimator.xml – the first implementation (not recommended; kept only for backward compatibility)
- depth_liveness_estimator_cnn.xml – an implementation based on neural networks (recommended; used in VideoWorker by default)
To use this algorithm, obtain synchronized and registered frames (color image + depth map), use the color image for face tracking/detection, and pass the corresponding depth map to the DepthLivenessEstimator.estimateLiveness method. The method returns one of the following results:
- DepthLivenessEstimator.NOT_ENOUGH_DATA – too many missing depth values on the depth map
- DepthLivenessEstimator.REAL – the observed face belongs to a living person
- DepthLivenessEstimator.FAKE – the observed face is taken from a photo
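A C++ sketch of this flow follows. It assumes the depth frame is passed as a pbio::DepthMapRaw structure and that estimateLiveness takes the sample plus the depth map; these names and the exact signature are assumptions based on the Face SDK C++ API and should be checked against your API reference:

```cpp
// Assumes `service` and a color-image `sample` (pbio::RawSample::Ptr)
// were created beforehand, as in demo.cpp.
pbio::DepthLivenessEstimator::Ptr liveness_estimator =
    service->createDepthLivenessEstimator("depth_liveness_estimator_cnn.xml");

pbio::DepthMapRaw depth_map;
// ... fill depth_map with the depth frame that is synchronized and
// registered with the color image the sample was detected in

const pbio::DepthLivenessEstimator::Liveness result =
    liveness_estimator->estimateLiveness(*sample, depth_map);

if (result == pbio::DepthLivenessEstimator::REAL)
{
    // the observed face belongs to a living person
}
```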
IRLivenessEstimator
To estimate liveness using an infrared image from a camera, create the IRLivenessEstimator object using the FacerecService.createIRLivenessEstimator method. Currently, only one configuration file is available – ir_liveness_estimator_cnn.xml (an implementation based on neural networks). To use this algorithm, you have to get color frames from the camera in addition to the IR frames.
To get the estimation result, call the IRLivenessEstimator.estimateLiveness method. You will get one of the following results:
- IRLivenessEstimator.Liveness.NOT_ENOUGH_DATA – too many missing values in the IR image
- IRLivenessEstimator.Liveness.REAL – the observed face belongs to a living person
- IRLivenessEstimator.Liveness.FAKE – the observed face is taken from a photo
Liveness2DEstimator
To estimate liveness from an RGB image, create the Liveness2DEstimator object using the FacerecService.createLiveness2DEstimator method. Currently, two configuration files are available:
- liveness_2d_estimator.xml – the first implementation (not recommended; kept only for backward compatibility)
- liveness_2d_estimator_v2.xml – an accelerated and improved version (recommended)
Two methods can be used to obtain the estimation result:
- Liveness2DEstimator.estimateLiveness. This method returns a Liveness2DEstimator.Liveness object. The result is one of the following:
  - Liveness2DEstimator.Liveness.NOT_ENOUGH_DATA – not enough data to make a decision
  - Liveness2DEstimator.Liveness.REAL – the observed face belongs to a living person
  - Liveness2DEstimator.Liveness.FAKE – the observed face is taken from a photo
- Liveness2DEstimator.estimate. This method returns a Liveness2DEstimator.LivenessAndScore object that contains the following fields:
  - liveness – an object of the Liveness2DEstimator.Liveness class/structure (see above)
  - score – the probability that the face belongs to a living person (for liveness_2d_estimator.xml this field is not available; a value of 0 or 1 is returned depending on the value of the liveness attribute)
Both methods take a RawSample object as input. Examples are available in the demo sample (C++/C#/Android).
Note: the LivenessEstimator object in the Face SDK C++/C#/Java API is deprecated.
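Both variants can be sketched in C++ as follows; `service` and `sample` are assumed to exist as in the demo sample, and the LivenessAndScore field names (liveness, score) follow the list above:

```cpp
// Assumes `service` and `sample` were created beforehand, as in the demo sample.
pbio::Liveness2DEstimator::Ptr liveness_estimator =
    service->createLiveness2DEstimator("liveness_2d_estimator_v2.xml");

// Variant 1: liveness verdict only.
const pbio::Liveness2DEstimator::Liveness verdict =
    liveness_estimator->estimateLiveness(*sample);

// Variant 2: verdict plus probability score.
const pbio::Liveness2DEstimator::LivenessAndScore result =
    liveness_estimator->estimate(*sample);
if (result.liveness == pbio::Liveness2DEstimator::Liveness::REAL)
{
    // result.score - probability that the face belongs to a living person
}
```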
Timing Characteristics (ms)
| Version | Core i7 4.5 GHz (Single-Core) | Google Pixel 3 |
|---|---|---|
| liveness_2d_estimator.xml | 250 | 126 (GPU) / 550 (CPU) |
| liveness_2d_estimator_v2.xml | 10 | 20 |
Quality metrics
| Dataset | TAR@FAR=1e-2 |
|---|---|
| CASIA Face Anti-spoofing | 0.99 |
Emotions
Note: If you need to estimate emotions on a video stream, see Estimation of age, gender, and emotions in the section Video Stream Processing.
To estimate emotions, create the EmotionsEstimator object using FacerecService.createEmotionsEstimator and pass the configuration file. Currently, there is only one configuration file: emotions_estimator.xml.
With the EmotionsEstimator object you can estimate the emotions of a captured face using the EmotionsEstimator.estimateEmotions method. The result is a vector of EmotionsEstimator.EmotionConfidence elements, each containing an emotion and a confidence value. See the example of using the EmotionsEstimator object in demo.cpp.
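A minimal C++ sketch, assuming `service` and `sample` were created as in demo.cpp and that EmotionConfidence exposes `emotion` and `confidence` fields (an assumption to verify against your API reference):

```cpp
// Assumes `service` and `sample` were created beforehand, as in demo.cpp.
pbio::EmotionsEstimator::Ptr emotions_estimator =
    service->createEmotionsEstimator("emotions_estimator.xml");

const std::vector<pbio::EmotionsEstimator::EmotionConfidence> emotions =
    emotions_estimator->estimateEmotions(*sample);

for (const pbio::EmotionsEstimator::EmotionConfidence &ec : emotions)
{
    // ec.emotion - the emotion label, ec.confidence - its confidence value
}
```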
Presence of a Mask on the Face
To check for the presence of a mask on a face, the FaceAttributesEstimator module and the face_mask_estimator.xml configuration file are available. For estimation, call the FaceAttributesEstimator.estimate(RawSample) method. The result of the estimation is an Attribute object, which contains the following fields:
- score – the probability that there is a mask on the face, a value from 0 to 1
- verdict – whether there is a mask on the face, a boolean value (true/false)
- mask_attribute – an object of the FaceAttributesEstimator.FaceAttributes.Attribute class/structure, which contains one of the following values:
  - NOT_COMPUTED – the attribute was not estimated
  - NO_MASK – there is no mask on the face
  - HAS_MASK – there is a mask on the face
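In C++, the call above might be sketched as follows. The result type and the mask_attribute enum qualification mirror the field list above; the exact C++ names are assumptions and should be checked against your API reference:

```cpp
// Assumes `service` and `sample` were created beforehand, as in demo.cpp.
pbio::FaceAttributesEstimator::Ptr mask_estimator =
    service->createFaceAttributesEstimator("face_mask_estimator.xml");

const pbio::FaceAttributesEstimator::Attribute attribute =
    mask_estimator->estimate(*sample);

// attribute.score   - probability of a mask, from 0 to 1
// attribute.verdict - boolean decision (true/false)
if (attribute.mask_attribute ==
    pbio::FaceAttributesEstimator::FaceAttributes::Attribute::HAS_MASK)
{
    // there is a mask on the face
}
```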