Version: 3.28 (latest)


Facial Image Quality Control Processing Block

Facial Image Quality Control helps reduce recognition errors by excluding low-quality facial images from the facial recognition pipeline: for example, images that are noisy, too small, rotated to a profile view, or otherwise unsuitable for recognition.

Biometric templates extracted from such images usually do not accurately match the profiles in the database, which inevitably leads to facial recognition errors.

Face SDK provides several approaches to quality control, corresponding to different modifications of the QUALITY_CONTROL processing block:

  • core — the default QUALITY_CONTROL modification that evaluates the most critical quality parameters (face size, frontal alignment, image noise, etc.) for both recognition and Liveness estimation. The output is a pass/fail verdict along with a list of checks that were not passed.

  • estimation — the modification that uses a neural network to generate a numerical score indicating how suitable the facial image is for recognition. While this score effectively filters out low-quality images, it is hard to interpret, because it does not return direct feedback on what exactly is wrong with the image and how to improve it. This modification is suitable for scenarios where providing end users with guidance on improving their photos is not necessary.

| Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** |
|--------------|---------|------------------|--------------------------|---------------------------|
| core         | 1       | 3.28             | 39                       | 39                        |
| estimation   | 1       | 3.19             | 9                        | 55                        |

* - CPU Intel Xeon E5-2683 v4 (single-core)
** - GPU (NVIDIA GTX 10xx series)

All Quality Control modifications expect an input Context container, which includes a binary image and an array of objects obtained from the Face Detector and Face Fitter processing blocks:

Context input container specification:
    {
        "image": {
            "format": "NDARRAY",
            "blob": "data pointer",
            "dtype": "uint8_t",
            "shape": [height, width, channels]
        },
        "objects": [{
            "id": {"type": "long", "minimum": 0},
            "class": "face",
            "confidence": {"type": "double", "minimum": 0, "maximum": 1},
            "bbox": [x1, y1, x2, y2],
            "keypoints": {
                "left_eye_brow_left": {"proj": [x, y]},
                "left_eye_brow_up": {"proj": [x, y]},
                "left_eye_brow_right": {"proj": [x, y]},
                "right_eye_brow_left": {"proj": [x, y]},
                "right_eye_brow_up": {"proj": [x, y]},
                "right_eye_brow_right": {"proj": [x, y]},
                "left_eye_left": {"proj": [x, y]},
                "left_eye": {"proj": [x, y]},
                "left_eye_right": {"proj": [x, y]},
                "right_eye_left": {"proj": [x, y]},
                "right_eye": {"proj": [x, y]},
                "right_eye_right": {"proj": [x, y]},
                "left_ear_bottom": {"proj": [x, y]},
                "nose_left": {"proj": [x, y]},
                "nose": {"proj": [x, y]},
                "nose_right": {"proj": [x, y]},
                "right_ear_bottom": {"proj": [x, y]},
                "mouth_left": {"proj": [x, y]},
                "mouth": {"proj": [x, y]},
                "mouth_right": {"proj": [x, y]},
                "chin": {"proj": [x, y]},
                "points": {"proj": [x, y]}
            }
        }]
    }

core modification

Creating a Processing Block for the core modification is the same as for any other block, but it takes a number of block-specific parameters:

{
    "unit_type": "QUALITY_CONTROL",
    "modification": "core",
    "version": 1,
    "mode": ["recognition", "liveness"],
    "preset": {"minimal", "optimal", "maximum"},
    "{check_name}_threshold": 0
}
  • “mode” — requires passing an array of strings that specify the processing block's operating modes. Currently, the available options are “liveness” and “recognition” — pass either one of these values or both. The modes differ in their sets of checks and default threshold values.

  • “preset” — accepts one of three values: “minimal”, “optimal”, “maximum”. The value passed to “preset” determines the specific threshold values for the checks:

    • “minimal” — corresponds to the least demanding threshold values. Suitable for most use cases, filters out a small number of images.

    • “optimal” — corresponds to average threshold values. Suitable for cases where image quality issues are generally not expected; discards a larger number of images and significantly reduces errors.

    • “maximum” — corresponds to the most demanding quality threshold values. Suitable for cases where maximum image quality is required, discards the largest number of low-quality images.

  • You can adjust the threshold for a specific check by passing the “{check_name}_threshold” parameter with the desired numerical value to the Context, for example, “eye_distance_threshold”: 42.

The table below displays a list of checks implemented in the core modification, with threshold values corresponding to different presets. Note that when both modes are enabled at the same time, all checks are applied. If a check is present in both “liveness” and “recognition”, the most stringent threshold is used.

| Check | Evaluated parameter | minimal threshold | optimal threshold | maximum threshold |
|-------|---------------------|-------------------|-------------------|-------------------|
| noise | Maximum acceptable image noise, score from 0 to 1 | 0.21 | 0.4 | 0.5 |
| dynamic_range | Maximum acceptable dynamic range of intensity, score from 0 to 2.55 | 0.72 | 0.96 | 1.36 |
| sharpness | Maximum acceptable image sharpness, score from 0 to 1 | 0.09 | 0.19 | 0.33 |
| pitch | Maximum acceptable face rotation around the pitch axis, degrees | 44 | 33 | 23 |
| yaw | Maximum acceptable face rotation around the yaw axis, degrees | 63 | 50 | 50 |
| face_overflow | Face out of image, score from 0 to 1 | 0.0 | 0.0 | 0.0 |
| eye_distance | Distance between pupils, pixels | 25 | 35 | 35 |

The graphs below show, using the “recognition” mode as an example, how certain checks at different threshold values affect the FRR recognition error at a fixed FAR=10E-6. These graphs are based on internal 3DiVi datasets, which correspond to remote identification and cooperative ACS cases.

Output Context Specification

After calling the Quality Control processing block, attributes corresponding to this block will be added to each object from the “objects” array. For the core modification, the output Context container has the following format:

    {
        "quality": {
            "value": {"type": "boolean"}, // true: all checks passed; false: some checks failed
            "failed_checks": [ // list of failed checks; empty if value == true
                {
                    "check": {check_name}, // name of the failed check
                    "result": {check_value} // the value that failed the check
                },
                ...
            ]
        }
    }

estimation modification

In the "estimation" modification, a specialized neural network evaluates the overall quality of a facial image. The network is trained to minimize recognition errors, and the output of the check is a "confidence" score ranging from 0 (lowest quality) to 1 (highest quality).

Note that the neural network was optimized not for human-perceived image quality or ISO standards, but for minimizing recognition errors. Therefore, the "confidence" value may be difficult to interpret directly.

This modification is recommended only in scenarios where end-user feedback is not required. For cases such as remote identification or access control, the core modification is typically preferred.

Threshold selection recommendations
  • 0.01 – Basic filter. Only the lowest-quality faces are discarded. Recommended for most cases where it’s important not to lose faces.
  • 0.1 – Stricter option. Suitable for systems where a small loss of detections is acceptable in exchange for improved recognition reliability.
  • 0.4 – Optimal balance for scenarios with high quality requirements and large data flow (for example, access control).

Output Context Specification

After calling the Quality Control processing block, attributes corresponding to this block will be added to each object from the “objects” array.

{
    "quality": {
        "value": {"type": "boolean"}, // whether the check passed
        "confidence": {"type": "double", "minimum": 0, "maximum": 1} // 0: the worst quality, 1: the best quality
    }
}

Example of working with the Quality Control processing block

  1. Create a Context configuration container and specify the block's "unit_type", "modification", and "version" values. To create a processing block, follow Working with Processing Block.

  2. Pass the Context container obtained after the Face Detector and Face Fitter processing blocks.

  3. Call the Quality Control processing block.

  4. Get the result of the processing block.

auto configCtx = service->createContext();
configCtx["unit_type"] = "QUALITY_CONTROL";
configCtx["modification"] = "core";
pbio::ProcessingBlock blockQuality = service->createProcessingBlock(configCtx);

//------------------
// creation of face detection processing blocks, and Context container with binary image
//------------------

faceDetector(ioData);
faceFitter(ioData);
blockQuality(ioData);

bool passed = ioData["objects"][0]["quality"]["value"].getBool();