Version: 3.22.2

Quality assessment

Facial image quality assessment block

Modifications to the quality assessment block processor

Currently, the following modifications are available:

  • "assessment" — the first implemented mode of the Quality Assessment block; it evaluates the parameters listed below.

Parameters evaluated by the "assessment" modification

  • total_score — numeric value, the overall image quality score from 0 to 1.
  • is_background_uniform — boolean value, indicates whether the background is uniform.
  • background_uniformity_score — numeric value, the background uniformity score from 0 to 2.
  • is_dynamic_range_acceptable — boolean value, indicates whether the dynamic range of image intensity in the face area exceeds the threshold of 1.28.
  • dynamic_range_score — numeric value, the dynamic intensity range score from 0 to 1.
  • is_eyes_distance_acceptable — boolean value, indicates whether the distance between the eyes is acceptable.
  • eyes_distance — numeric value, the distance between the eyes in pixels.
  • is_evenly_illuminated — boolean value, indicates whether the illumination in the image is uniform.
  • illumination_score — numeric value, the illumination uniformity score from 0 to 1.
  • no_flare — boolean value, indicates the absence of flare in the image.
  • is_left_eye_opened — boolean value, indicates whether the left eye is open.
  • left_eye_openness_score — numeric value, the degree of left eye openness from 0 to 1.
  • is_right_eye_opened — boolean value, indicates whether the right eye is open.
  • right_eye_openness_score — numeric value, the degree of right eye openness from 0 to 1.
  • is_neutral_emotion — boolean value, indicates whether the facial expression is neutral.
  • neutral_emotion_score — numeric value, the neutral expression score from 0 to 1.
  • is_not_noisy — boolean value, indicates the absence of noise in the image.
  • noise_score — numeric value, the image noise score from 0 to 1.
  • is_sharp — boolean value, indicates whether the image is sharp.
  • sharpness_score — numeric value, the sharpness score from 0 to 1.
  • is_margins_acceptable — boolean value, indicates whether the margins are acceptable.
  • margin_inner_deviation — numeric value, the inner margin deviation in pixels.
  • margin_outer_deviation — numeric value, the outer margin deviation in pixels.
  • is_rotation_acceptable — boolean value, indicates whether the head rotation is acceptable.
  • max_rotation_deviation — numeric value, the maximum deviation in degrees across the three head rotation angles (yaw, pitch, roll).
  • not_masked — boolean value, indicates the absence of a mask on the face.
  • not_masked_score — numeric value, the confidence that no mask is present on the face, from 0 to 1.
  • has_watermark — boolean value, indicates the presence of a watermark on the image.
  • watermark_score — numeric value, the confidence that a watermark is present on the image, from 0 to 1.
  • "estimation" — a modification in which the quality of the image as a whole is assessed; the result (total_score) is a real number from 0 (worst quality) to 1 (perfect quality).
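To illustrate how the boolean flags and scores above might be combined in client code, here is a minimal sketch. The `QualityResult` struct and `passesPortraitCheck` helper are hypothetical (not part of the SDK); the sketch accepts a face only when a subset of the documented checks passes and total_score clears a caller-chosen threshold:

```cpp
#include <cassert>

// Hypothetical plain struct mirroring a subset of the documented quality fields.
struct QualityResult {
    double total_score;           // overall image quality score, 0..1
    bool   is_sharp;              // image sharpness check
    bool   is_evenly_illuminated; // illumination uniformity check
    bool   is_left_eye_opened;    // left eye open
    bool   is_right_eye_opened;   // right eye open
    bool   is_rotation_acceptable;// head rotation within limits
};

// Accept the face only when every boolean check passes and the
// overall score reaches the caller-chosen threshold.
bool passesPortraitCheck(const QualityResult& q, double minTotalScore) {
    return q.is_sharp && q.is_evenly_illuminated &&
           q.is_left_eye_opened && q.is_right_eye_opened &&
           q.is_rotation_acceptable &&
           q.total_score >= minTotalScore;
}
```

The threshold is application-specific; stricter enrollment pipelines would typically require a higher minimum score than verification pipelines.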

Specification of the quality assessment processing block

  1. The input Context container must contain a binary image and an array of objects obtained after running the face detection and face fitter processing blocks:
{
  "image": {
    "format": "NDARRAY",
    "blob": "data pointer",
    "dtype": "uint8_t",
    "shape": [height, width, channels]
  },
  "objects": [{
    "id": {"type": "long", "minimum": 0},
    "class": "face",
    "confidence": {"type": "double", "minimum": 0, "maximum": 1},
    "bbox": [x1, y1, x2, y2],
    "keypoints": {
      "left_eye_brow_left": {"proj": [x, y]},
      "left_eye_brow_up": {"proj": [x, y]},
      "left_eye_brow_right": {"proj": [x, y]},
      "right_eye_brow_left": {"proj": [x, y]},
      "right_eye_brow_up": {"proj": [x, y]},
      "right_eye_brow_right": {"proj": [x, y]},
      "left_eye_left": {"proj": [x, y]},
      "left_eye": {"proj": [x, y]},
      "left_eye_right": {"proj": [x, y]},
      "right_eye_left": {"proj": [x, y]},
      "right_eye": {"proj": [x, y]},
      "right_eye_right": {"proj": [x, y]},
      "left_ear_bottom": {"proj": [x, y]},
      "nose_left": {"proj": [x, y]},
      "nose": {"proj": [x, y]},
      "nose_right": {"proj": [x, y]},
      "right_ear_bottom": {"proj": [x, y]},
      "mouth_left": {"proj": [x, y]},
      "mouth": {"proj": [x, y]},
      "mouth_right": {"proj": [x, y]},
      "chin": {"proj": [x, y]},
      "points": [{"proj": [x, y]}]
    }
  }]
}
  2. After the quality assessment processing block is called, quality attributes will be added to each object in the "objects" array.

Specification of the output container Context:

{
  "quality": {
    "total_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_sharp": {"type": "boolean"},
    "sharpness_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_evenly_illuminated": {"type": "boolean"},
    "illumination_score": {"type": "double", "minimum": 0, "maximum": 1},
    "no_flare": {"type": "boolean"},
    "is_left_eye_opened": {"type": "boolean"},
    "left_eye_openness_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_right_eye_opened": {"type": "boolean"},
    "right_eye_openness_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_rotation_acceptable": {"type": "boolean"},
    "max_rotation_deviation": {"type": "long"},
    "not_masked": {"type": "boolean"},
    "not_masked_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_neutral_emotion": {"type": "boolean"},
    "neutral_emotion_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_eyes_distance_acceptable": {"type": "boolean"},
    "eyes_distance": {"type": "long", "minimum": 0},
    "is_margins_acceptable": {"type": "boolean"},
    "margin_outer_deviation": {"type": "long", "minimum": 0},
    "margin_inner_deviation": {"type": "long", "minimum": 0},
    "is_not_noisy": {"type": "boolean"},
    "noise_score": {"type": "double", "minimum": 0, "maximum": 1},
    "watermark_score": {"type": "long", "minimum": 0},
    "has_watermark": {"type": "boolean"},
    "dynamic_range_score": {"type": "double", "minimum": 0},
    "is_dynamic_range_acceptable": {"type": "boolean"},
    "is_background_uniform": {"type": "boolean"},
    "background_uniformity_score": {"type": "double", "minimum": 0, "maximum": 1}
  }
}
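The max_rotation_deviation field condenses the three head pose angles into a single number. The specification does not define the exact aggregation; one plausible reading, given the description "maximum degree of deviation for three (yaw, pitch, roll) head rotation angles", is the largest absolute angle. The `computeMaxRotationDeviation` helper below is purely illustrative and is not the SDK's implementation:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative only: one plausible way a single rotation deviation
// could be derived from yaw, pitch, and roll values in degrees.
long computeMaxRotationDeviation(double yaw, double pitch, double roll) {
    double m = std::max({std::fabs(yaw), std::fabs(pitch), std::fabs(roll)});
    return static_cast<long>(std::lround(m));
}
```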

Example of working with the quality assessment processing block

  1. Create a Context configuration container and specify the "unit_type", "modification", and "version" values of the block you are interested in. An example of creating a processing block can be found on the Working with Processing Block page.

  2. Pass the Context container obtained after the face detection and face fitter processing blocks have run.

  3. Call the quality assessment processing block.

  4. Get the result of the processing block.

// create a Context configuration container for the block
auto configCtx = service->createContext();
configCtx["unit_type"] = "QUALITY_ASSESSMENT_ESTIMATOR";
// optionally select one of the modifications described above
configCtx["modification"] = "assessment";
pbio::ProcessingBlock blockQuality = service->createProcessingBlock(configCtx);

//------------------
// create the face detection and face fitter processing blocks,
// and the Context container ioData with the binary image
//------------------

faceDetector(ioData);  // detect faces
faceFitter(ioData);    // compute facial keypoints
blockQuality(ioData);  // assess quality

// read the overall quality score of the first detected face
double total_score = ioData["objects"][0]["quality"]["total_score"].getDouble();

For an accurate score, the frame should contain only one face looking into the camera; otherwise the overall score will be low, because the algorithm takes into account the relative size, position, and orientation of the head.

If multiple faces are captured, each face will be processed independently.
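Because each face is scored independently, client code can, for example, keep only the best-scoring face. The `FaceQuality` struct and `bestFaceIndex` helper in this sketch are illustrative stand-ins for the per-object "quality" results, not SDK API:

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-in for the per-face quality result.
struct FaceQuality {
    double total_score;  // 0..1, as documented for the "quality" output
};

// Return the index of the face with the highest total_score,
// or -1 when no faces were detected.
int bestFaceIndex(const std::vector<FaceQuality>& faces) {
    int best = -1;
    double bestScore = -1.0;
    for (std::size_t i = 0; i < faces.size(); ++i) {
        if (faces[i].total_score > bestScore) {
            bestScore = faces[i].total_score;
            best = static_cast<int>(i);
        }
    }
    return best;
}
```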