Quality assessment
Facial image quality assessment block
Modifications of the quality assessment processing block
Currently, the following modifications are available:

- "assessment" — the first implemented mode of the Quality Assessment block; it evaluates the set of parameters listed in the output Context specification below.
- "estimation" — in this mode the quality of the image is assessed as a whole, and the result (total_score) is a real number from 0 (worst quality) to 1 (perfect quality).
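Both modifications are selected through the configuration container. A minimal sketch in Python, using only the "unit_type" value and "modification" names given on this page (any other keys, such as "version", are omitted here because their valid values are not listed):

```python
# Minimal configuration sketch. The "unit_type" value and the two
# "modification" names are taken from this page; nothing else is assumed.
config = {
    "unit_type": "QUALITY_ASSESSMENT_ESTIMATOR",
    "modification": "assessment",  # or "estimation" for a single total_score
}
```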
Specification of the quality assessment processing block
- The input Context container must contain a binary image and an array of objects obtained from the face detection and face fitter processing blocks.
- After the quality assessment processing block is called, each object in the "objects" array is extended with the attributes produced by this block.
Specification of the output Context container:

"assessment" modification:

```json
{
  "quality": {
    "total_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_sharp": {"type": "boolean"},
    "sharpness_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_evenly_illuminated": {"type": "boolean"},
    "illumination_score": {"type": "double", "minimum": 0, "maximum": 1},
    "no_flare": {"type": "boolean"},
    "is_left_eye_opened": {"type": "boolean"},
    "left_eye_openness_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_right_eye_opened": {"type": "boolean"},
    "right_eye_openness_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_rotation_acceptable": {"type": "boolean"},
    "max_rotation_deviation": {"type": "long"},
    "not_masked": {"type": "boolean"},
    "not_masked_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_neutral_emotion": {"type": "boolean"},
    "neutral_emotion_score": {"type": "double", "minimum": 0, "maximum": 1},
    "is_eyes_distance_acceptable": {"type": "boolean"},
    "eyes_distance": {"type": "long", "minimum": 0},
    "is_margins_acceptable": {"type": "boolean"},
    "margin_outer_deviation": {"type": "long", "minimum": 0},
    "margin_inner_deviation": {"type": "long", "minimum": 0},
    "is_not_noisy": {"type": "boolean"},
    "noise_score": {"type": "double", "minimum": 0, "maximum": 1},
    "watermark_score": {"type": "long", "minimum": 0},
    "has_watermark": {"type": "boolean"},
    "dynamic_range_score": {"type": "double", "minimum": 0},
    "is_dynamic_range_acceptable": {"type": "boolean"},
    "is_background_uniform": {"type": "boolean"},
    "background_uniformity_score": {"type": "double", "minimum": 0, "maximum": 1}
  }
}
```

"estimation" modification:

```json
[{
  "quality": {
    "total_score": {"type": "double", "minimum": 0, "maximum": 1}
  }
}]
```
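The boolean attributes of the "assessment" output make it straightforward to report which checks a face failed. A minimal sketch using plain Python dicts; the sample values below are made up purely for illustration:

```python
# Made-up sample of a few "quality" attributes from the assessment output.
quality = {
    "is_sharp": True,
    "no_flare": True,
    "is_left_eye_opened": True,
    "is_right_eye_opened": False,
    "sharpness_score": 0.92,
}

# Names of boolean checks that are expected to be True for a good image.
checks = ["is_sharp", "no_flare", "is_left_eye_opened", "is_right_eye_opened"]
failed = [name for name in checks if not quality.get(name, False)]
print(failed)  # ['is_right_eye_opened']
```

The same pattern extends to the score fields, for which an application would typically compare against its own thresholds.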
Example of working with the quality assessment processing block
1. Create a configuration Context container and specify the "unit_type", "modification", and "version" values of the block you need. An example of creating a processing block can be found on the Working with Processing Block page.
2. Pass the Context container obtained after the face detection and face fitter processing blocks have run.
3. Call the quality assessment processing block.
4. Get the result of the processing block.
C++:

```cpp
auto configCtx = service->createContext();
configCtx["unit_type"] = "QUALITY_ASSESSMENT_ESTIMATOR";
pbio::ProcessingBlock blockQuality = service->createProcessingBlock(configCtx);

//------------------
// creation of the face detection processing blocks and of the Context container with the binary image
//------------------

faceDetector(ioData);
faceFitter(ioData);
blockQuality(ioData);

double total_score = ioData["objects"][0]["quality"]["total_score"].getDouble();
```

Python:

```python
configCtx = {"unit_type": "QUALITY_ASSESSMENT_ESTIMATOR"}
blockQuality = service.create_processing_block(configCtx)

#------------------
# creation of the face detection processing blocks and of the Context container with the binary image
#------------------

faceDetector(ioData)
faceFitter(ioData)
blockQuality(ioData)

total_score = ioData["objects"][0]["quality"]["total_score"].get_value()
```

Flutter:

```dart
ProcessingBlock blockQuality = service.createProcessingBlock({"unit_type": "QUALITY_ASSESSMENT_ESTIMATOR"});

//------------------
// creation of the face detection processing blocks and of the Context container with the binary image
//------------------

faceDetector.process(ioData);
faceFitter.process(ioData);
blockQuality.process(ioData);

double total_score = ioData["objects"][0]["quality"]["total_score"].get_value();
```

C#:

```csharp
Dictionary<object, object> configCtx = new();
configCtx["unit_type"] = "QUALITY_ASSESSMENT_ESTIMATOR";
ProcessingBlock blockQuality = service.CreateProcessingBlock(configCtx);

//------------------
// creation of the face detection processing blocks and of the Context container with the binary image
//------------------

faceDetector.Invoke(ioData);
faceFitter.Invoke(ioData);
blockQuality.Invoke(ioData);

double total_score = ioData["objects"][0]["quality"]["total_score"].GetDouble();
```

Java:

```java
Context configCtx = service.createContext();
configCtx.get("unit_type").setString("QUALITY_ASSESSMENT_ESTIMATOR");
ProcessingBlock blockQuality = service.createProcessingBlock(configCtx);

//------------------
// creation of the face detection processing blocks and of the Context container with the binary image
//------------------

faceDetector.process(ioData);
faceFitter.process(ioData);
blockQuality.process(ioData);

double total_score = ioData.get("objects").get(0).get("quality").get("total_score").getDouble();
```

Kotlin:

```kotlin
val configCtx = service.createContext()
configCtx["unit_type"].string = "QUALITY_ASSESSMENT_ESTIMATOR"
val blockQuality = service.createProcessingBlock(configCtx)

//------------------
// creation of the face detection processing blocks and of the Context container with the binary image
//------------------

faceDetector.process(ioData)
faceFitter.process(ioData)
blockQuality.process(ioData)

val total_score = ioData["objects"][0]["quality"]["total_score"].double
```
For an accurate score, the frame should contain a single face looking at the camera; otherwise the overall score will be low, because the algorithm takes into account the relative size, position, and orientation of the head. If several faces are captured, each face is processed independently.
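Because the score is most meaningful for the dominant face, one option when several faces are detected is to assess only the largest one. A sketch with plain dicts; the normalized bounding-box layout `[x1, y1, x2, y2]` is a hypothetical assumption, not something this page specifies:

```python
# Hypothetical detections; the bbox layout [x1, y1, x2, y2] is an assumption.
objects = [
    {"bbox": [0.10, 0.10, 0.30, 0.40]},  # area 0.06
    {"bbox": [0.50, 0.20, 0.90, 0.80]},  # area 0.24
]

def bbox_area(obj):
    """Area of an axis-aligned box given as [x1, y1, x2, y2]."""
    x1, y1, x2, y2 = obj["bbox"]
    return (x2 - x1) * (y2 - y1)

# Pick the detection with the largest box to feed into quality assessment.
largest = max(objects, key=bbox_area)
```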