Components
Face SDK consists of components that are used to perform the main functional tasks: detection, estimation of faces and human pose, facial recognition, and video stream processing. Face SDK components are implemented as Face SDK API objects and/or as processing blocks of the Processing Block API.
Detection of Faces, Bodies and Objects
Face Detector
Face Detector is a component for detecting faces in images. The result of the component's work is a list of detected faces with the following attributes:
- Bounding Box (bbox) - coordinates of a rectangle that represents face bounds in the image
- Face Landmarks - 2D/3D coordinates of the anthropometric points of the face
- Iris Landmarks - coordinates of 40 points of the eyes (pupils and eyelids)
- Pitch, Yaw, Roll - head rotation angles
Component Implementation:
- Face SDK API: Capturer object
- Processing Block API: Face Detector processing block
For face detection on videos or time-ordered image sequences, we recommend that you use Video Engine.
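The shape of a detection result can be pictured with a minimal sketch. The names below (FaceDetection, keep_frontal) and the field layout are illustrative assumptions that only mirror the attributes listed above; they are not the actual Face SDK classes.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FaceDetection:
    """Illustrative holder for one detected face; fields mirror the attributes above."""
    bbox: Tuple[float, float, float, float]       # x, y, width, height of the face rectangle
    landmarks: List[Tuple[float, float, float]]   # 2D/3D anthropometric points of the face
    iris_landmarks: List[Tuple[float, float]]     # 40 eye points (pupils and eyelids)
    pitch: float                                  # head rotation angles
    yaw: float
    roll: float

def keep_frontal(faces: List[FaceDetection], max_yaw: float = 30.0) -> List[FaceDetection]:
    """Example of post-processing a detection list: keep near-frontal faces only.
    The 30-degree limit is an illustrative assumption, not an SDK default."""
    return [face for face in faces if abs(face.yaw) <= max_yaw]
```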
Body Detector
Body Detector is used to detect human bodies in images, which makes it possible to detect people in the frame even when their faces are not visible.
The detection result is a bbox that surrounds the detected body.
Component Implementation:
- Processing Block API: Body Detector processing block
Object Detector
Object Detector is used to detect multiple various objects in images.
The detection result is a bbox around the detected object with one of the following object classes: "body", "bicycle", "car", "motorcycle", "bus", "train", "truck", "traffic_light", "fire_hydrant", "stop_sign", "bird", "cat", "dog", "horse", "sheep", "cow", "bear", "backpack", "umbrella", "handbag", "suitcase", "sports_ball", "baseball_bat", "skateboard", "tennis_racket", "bottle", "wine_glass", "cup", "fork", "knife", "laptop", "phone", "book", "scissors".
Component Implementation:
- Processing Block API: Object Detector processing block
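Because every detection carries a class label from the list above, a typical consumer groups or filters the result by class. The sketch below models a detection as a plain (bbox, class name) pair; the names are illustrative assumptions, not the Processing Block API.

```python
from typing import Dict, List, Tuple

# Illustrative detection: bbox is (x, y, width, height),
# class_name is one of the labels listed above, e.g. "body" or "car".
Detection = Tuple[Tuple[float, float, float, float], str]

def group_by_class(detections: List[Detection]) -> Dict[str, List[Tuple[float, float, float, float]]]:
    """Group bounding boxes by object class, e.g. to count cars vs. people in a frame."""
    grouped: Dict[str, List[Tuple[float, float, float, float]]] = {}
    for bbox, class_name in detections:
        grouped.setdefault(class_name, []).append(bbox)
    return grouped

# Usage: keep only the boxes labeled "body".
# people_boxes = group_by_class(detections).get("body", [])
```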
Estimation of Faces and Human Pose
Face SDK provides a set of tools for estimating attributes of face images received from the Face Detector component.
Gender-Age Estimator
Gender-Age Estimator is used for estimating the gender and age of people by using their face images.
Component Implementation:
- Face SDK API: AgeGenderEstimator object
- Processing Block API: Gender Estimator and Age Estimator processing blocks
Emotions Estimator
Emotions Estimator is used for estimating the prevailing emotional state of a person:
- Happy
- Surprised
- Neutral
- Angry
- Disgusted
- Sad
- Scared
Component Implementation:
- Face SDK API: EmotionsEstimator object
- Processing Block API: Emotion Estimator processing block
Quality Estimator
Quality Estimator is used to estimate the quality of a face in the image. The result is a list of the detected human faces with a detailed quality score.
Component Implementation:
- Face SDK API: QualityEstimator and FaceQualityEstimator objects
- Processing Block API: Quality Assessment Estimator processing block
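A common use of the quality score is to drop low-quality detections before template extraction. The sketch below assumes a single overall score per face in the range [0, 1]; the score format, the names, and the threshold are illustrative assumptions rather than values defined by Face SDK.

```python
from typing import List, Tuple

# Illustrative pair of (face identifier, overall quality score in [0, 1]);
# the real estimator returns a more detailed quality breakdown.
ScoredFace = Tuple[int, float]

def select_for_recognition(faces: List[ScoredFace], threshold: float = 0.6) -> List[int]:
    """Keep only faces whose quality is high enough for template extraction.
    The 0.6 threshold is an illustrative assumption, not an SDK default."""
    return [face_id for face_id, score in faces if score >= threshold]
```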
Mask Estimator
Mask Estimator determines the presence/absence of a mask in the face image.
Component Implementation:
- Face SDK API: FaceAttributesEstimator object
- Processing Block API: Mask Estimator processing block
Eyes Openness Estimator
Eyes Openness Estimator is used for estimating the state of the eyes in a face image. The component provides an “open” or “closed” verdict for each of the left and right eyes.
Component Implementation:
- Face SDK API: FaceAttributesEstimator object
- Processing Block API: as a part of Quality Assessment Estimator processing block
Liveness Estimators
Liveness components are used to assess whether the face detected in an image or video is real or fake. These components protect against malicious actions (spoofing attacks) that use a printed face image, a photo or video of a face shown on the screen of a mobile device or monitor, or various kinds of masks (paper, silicone, etc.).
Active Liveness Estimator analyzes certain human actions according to the check script, for example: “blink”, “smile”, “turn your head”.
Component Implementation:
- Face SDK API: VideoWorker object
2D / RGB Liveness Estimator assesses the liveness of a face in an RGB image. To perform the check, it is sufficient for the face to appear in the camera's field of view.
Component Implementation:
- Face SDK API: Liveness2DEstimator object
- Processing Block API: Liveness Estimator processing block
3D / Depth Liveness Estimator protects against attempts to use an image instead of a real face by analyzing the face surface using a depth map obtained from a 3D (RGBD) sensor.
Component Implementation:
- Face SDK API: DepthLivenessEstimator object
IR Liveness Estimator determines whether a human face is real based on an image taken from an infrared camera, used in combination with a color image.
Component Implementation:
- Face SDK API: IRLivenessEstimator object
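When several liveness modalities are available (RGB, depth, IR), an application often accepts a face as real only if every available check passes. The sketch below shows such a conceptual fusion rule; it is an illustrative policy with assumed names, not the decision logic of the SDK's estimators.

```python
from typing import Optional

def is_live(rgb_ok: bool, depth_ok: Optional[bool] = None, ir_ok: Optional[bool] = None) -> bool:
    """Conceptual fusion of liveness verdicts: the RGB check is mandatory, while the
    depth and IR checks are applied only when the corresponding sensor data exists."""
    checks = [rgb_ok]
    if depth_ok is not None:
        checks.append(depth_ok)
    if ir_ok is not None:
        checks.append(ir_ok)
    return all(checks)

# Example: the RGB check passed but the depth check failed, so the face is treated as a spoof.
assert is_live(True, depth_ok=False) is False
```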
Human Pose Estimator
Human Pose Estimator is a component used to estimate human body skeleton keypoints in the image.
Component Implementation:
- Processing Block API: Human Pose Estimator processing block
Facial Recognition
Face SDK provides components and algorithms to recognize faces. This functionality is based on operations with a biometric face template.
Encoder
Encoder extracts a biometric face template from a face image received from Face Detector.
A biometric face template is a unique set of biometric features extracted from a face image. Templates are used to compare two face images and to determine a degree of their similarity.
A biometric face template has the following key characteristics:
- It does not contain personal data
- It cannot be used to restore a face image
- It can be serialized and saved to file, database, or sent over a network
- It can be indexed, which accelerates face template matching by using a special index built for a batch of face templates
Extracting a biometric template is one of the most computation-heavy operations, so Face SDK provides the ability to use a GPU accelerator to increase performance.
Component Implementation:
- Face SDK API: Recognizer object
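Because a template is serializable and contains no personal data, a typical enrollment flow extracts it once and stores the raw bytes keyed by a person ID. The sketch below treats a template as an opaque byte string in a SQLite table; the schema and function names are illustrative assumptions, not part of the Recognizer API.

```python
import sqlite3
from typing import Optional

def store_template(db: sqlite3.Connection, person_id: str, template: bytes) -> None:
    """Persist a serialized face template (an opaque, algorithm-specific byte string)."""
    db.execute("CREATE TABLE IF NOT EXISTS templates (person_id TEXT PRIMARY KEY, template BLOB)")
    db.execute(
        "INSERT OR REPLACE INTO templates (person_id, template) VALUES (?, ?)",
        (person_id, template),
    )
    db.commit()

def load_template(db: sqlite3.Connection, person_id: str) -> Optional[bytes]:
    """Read a serialized template back; it must be deserialized by the same recognition
    algorithm, because templates from different algorithms are not interchangeable."""
    row = db.execute("SELECT template FROM templates WHERE person_id = ?", (person_id,)).fetchone()
    return row[0] if row else None
```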
Matcher
Matcher allows performing the following comparison operations with templates created by Encoder:
- verification 1:1 - comparison of two biometric templates (faces) with each other and estimation of their degree of similarity
- identification 1:N - comparison of one biometric template (face) against a set of other templates (faces), with search for and ranking of the closest matches
When comparing face templates, Matcher calculates the difference between the biometric features of the faces. The result is a measure of similarity between the face images and the probability that they belong to the same person.
Templates extracted using different algorithms have different properties and cannot be compared with each other.
Component Implementation:
- Face SDK API: Recognizer object
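Conceptually, verification scores one pair of templates, while identification scores one probe template against a whole gallery and returns the closest matches. The sketch below models a template as a feature vector compared by cosine similarity; this is only an illustration of the 1:1 / 1:N pattern, and the metric, threshold, and names are assumptions rather than what Matcher uses internally.

```python
import math
from typing import Dict, List, Tuple

Vector = List[float]  # illustrative stand-in for a biometric template

def cosine_similarity(a: Vector, b: Vector) -> float:
    """Illustrative similarity measure between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify(probe: Vector, reference: Vector, threshold: float = 0.7) -> bool:
    """1:1 verification: are two templates similar enough to belong to the same person?
    The 0.7 threshold is an illustrative assumption."""
    return cosine_similarity(probe, reference) >= threshold

def identify(probe: Vector, gallery: Dict[str, Vector], top_k: int = 5) -> List[Tuple[str, float]]:
    """1:N identification: rank gallery templates by similarity to the probe."""
    scored = [(person_id, cosine_similarity(probe, template)) for person_id, template in gallery.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```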
Video Stream Processing
Video Engine
Video Engine is used for real-time processing of video streams. This component solves the following tasks:
- face detection and tracking
- face recognition (optional)
- Liveness checking (optional)
- estimation of age, gender, and emotions (optional)
Video Engine works in a multi-stream mode. Each stream is a sequence of images (frames) obtained from one source (for example, a camera or video).
Video Engine processes all streams simultaneously. Streams, frames, and faces detected in a frame are assigned their own identifiers. During face tracking, a face track is formed over the sequence of stream images; the track is also assigned its own ID (track_id).
The set of identifiers allows you to accurately track the generated events for each stream. To handle the events, Video Engine implements a callback interface that provides event data.
Component Implementation:
- Face SDK API: VideoWorker object
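The identifier scheme (stream ID, frame ID, track_id) and the callback interface can be pictured with a minimal dispatcher sketch. Everything below, including the event fields and the handler registration, is an illustrative assumption about how such an interface is typically shaped; it is not the VideoWorker API.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TrackingEvent:
    """Illustrative event payload: which stream and frame produced the detection,
    and which face track (track_id) it belongs to."""
    stream_id: int
    frame_id: int
    track_id: int
    bbox: Tuple[float, float, float, float]  # x, y, width, height

class EventDispatcher:
    """Minimal callback registry standing in for the engine's callback interface."""
    def __init__(self) -> None:
        self._handlers: List[Callable[[TrackingEvent], None]] = []

    def add_handler(self, handler: Callable[[TrackingEvent], None]) -> None:
        self._handlers.append(handler)

    def emit(self, event: TrackingEvent) -> None:
        for handler in self._handlers:
            handler(event)

# Usage: log every event together with its full identifier triple.
dispatcher = EventDispatcher()
dispatcher.add_handler(
    lambda e: print(f"stream={e.stream_id} frame={e.frame_id} track={e.track_id} bbox={e.bbox}")
)
dispatcher.emit(TrackingEvent(stream_id=0, frame_id=42, track_id=7, bbox=(10, 20, 64, 64)))
```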