Quality Assessment
In this section you'll learn how to integrate the Quality Assessment Estimator into your C++ or Python project.
Quality Assessment Estimation (C++/Python)
1. Create a Quality Assessment Estimator
1.1 To create a Quality Assessment Estimator, follow steps 1-3 described in Creating a Processing Block and specify the following values:
- "QUALITY_ASSESSMENT_ESTIMATOR" for the "unit_type" key;
- a blank string "" for the "model_path" key;
- for the "modification" key, one of the block's two modes of operation:
  - "assessment": the first mode for assessing photo quality; in this mode, the following photo parameters are evaluated: 'background_uniformity_score', 'dynamic_range_score', 'eyes_distance', 'has_watermark', 'illumination_score', 'is_background_uniform', 'is_dynamic_range_acceptable', 'is_evenly_illuminated', 'is_eyes_distance_acceptable', 'is_left_eye_opened', 'is_margins_acceptable', 'is_neutral_emotion', 'is_not_noisy', 'is_right_eye_opened', 'is_rotation_acceptable', 'is_sharp', 'left_eye_openness_score', 'margin_inner_deviation', 'margin_outer_deviation', 'max_rotation_deviation', 'neutral_emotion_score', 'no_flare', 'noise_score', 'not_masked', 'not_masked_score', 'right_eye_openness_score', 'sharpness_score', 'total_score', 'watermark_score';
  - "estimation": the second mode for assessing photo quality; the result of processing is a single value from 0 to 1: 'total_score'.
- C++
- Python
configCtx["unit_type"] = "QUALITY_ASSESSMENT_ESTIMATOR";
//specify one key
configCtx["modification"] = "assessment"; //to evaluate metrics and obtain their results
// or
configCtx["modification"] = "estimation"; //to evaluate metrics and obtain photo quality values
// optional, default values are specified after "="
// paths specified for examples located in <sdk_dir>/bin
configCtx["sdk_path"] = "..";
configCtx["config_name"] = "quality_assessment.xml";
configCtx["facerec_conf_dir"] = sdk_path + "/conf/facerec/";
configCtx = {
"unit_type": "QUALITY_ASSESSMENT_ESTIMATOR",
#specify one key
"modification": "assessment", # to evaluate metrics and obtain their results
# or
"modification": "estimation", # to evaluate metrics and obtain photo quality values
# optional, default values are specified after ":"
# paths specified for examples located in <sdk_dir>/bin
"sdk_path": "..",
"config_name": "quality_assessment.xml",
"facerec_conf_dir": sdk_path + "/conf/facerec/"
}
1.2 Create a Quality Assessment Estimator Processing Block:
- C++
- Python
- Flutter
pbio::ProcessingBlock qualityAssessmentEstimator = service->createProcessingBlock(configCtx);
qualityAssessmentEstimator = service.create_processing_block(configCtx)
ProcessingBlock qualityAssessmentEstimator = service.createProcessingBlock({
"unit_type": "QUALITY_ASSESSMENT_ESTIMATOR",
});
2. Quality Assessment Estimation
2.1 Create a Context container ioData for input-output data using the createContext() method:
- C++
- Python
auto ioData = service->createContext();
ioData = service.create_context({})
2.2 Create a Context container imgCtx with an RGB image in binary format, following the steps (1-3, 4b) described in Creating a Context container with RGB-image.
- C++
- Python
- Flutter
// put an image into the container
auto imgCtx = ioData["image"];
pbio::context_utils::putImage(imgCtx, input_rawimg);
# copy an image into the binary format
input_rawimg = image.tobytes()
# put an image into the container
imageCtx = {
"blob": input_rawimg,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in image.shape]
}
ioData["image"] = imageCtx
File file = File(imagePath);
final Uint8List bytes = await file.readAsBytes();
final ImageDescriptor descriptor = await ImageDescriptor.encoded(await ImmutableBuffer.fromUint8List(bytes));
Context ioData = service.createContext({
"objects": [],
"image": {
"blob": bytes,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [descriptor.height, descriptor.width, 3]
}
});
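The Python image-Context layout above can be sketched end to end with plain numpy (an assumption for illustration; the SDK itself is not needed to build the dictionary):

```python
import numpy as np

# Illustrative stand-in for a decoded RGB image (height x width x 3, uint8)
image = np.zeros((4, 6, 3), dtype=np.uint8)

# Mirror the binary-image Context from the Python example above
input_rawimg = image.tobytes()
imageCtx = {
    "blob": input_rawimg,                   # raw pixel bytes
    "dtype": "uint8_t",                     # element type the SDK expects
    "format": "NDARRAY",
    "shape": [dim for dim in image.shape],  # [height, width, channels]
}

print(imageCtx["shape"])      # [4, 6, 3]
print(len(imageCtx["blob"]))  # 72 bytes = 4 * 6 * 3
```

Note that tobytes() flattens the array in row-major (C) order, which matches the [height, width, channels] shape declared alongside it.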
2.3.1 Create a Capturer object (see the class description in the Capturer Class section):
- C++
- Python
- Flutter
const pbio::Capturer::Ptr capturer = service->createCapturer("common_capturer_refa_fda_a.xml");
capturer = service.create_capturer(Config("common_capturer_refa_fda_a.xml"))
Capturer capturer = service.createCapturer("common_capturer_refa_fda_a.xml");
2.3.2 Detect faces using the capture method:
- C++
- Python
- Flutter
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(input_rawimg);
samples = capturer.capture(input_rawimg)
List<RawSample> samples = capturer.capture(bytes);
2.4.1 Convert each RawSample object to a Context container and put it into ioData under the objects key:
- C++
- Python
- Flutter
auto objectsCtx = ioData["objects"];
for(auto &sample: samples)
{
objectsCtx.push_back(sample->toContext());
}
ioData["objects"] = []
for sample in samples:
ioData["objects"].push_back(sample.to_context())
for (RawSample sample in samples) {
ioData["objects"].pushBack(sample.toContext());
}
2.4.2 Call qualityAssessmentEstimator() and pass the Context container ioData that contains the image:
- C++
- Python
- Flutter
qualityAssessmentEstimator(ioData);
qualityAssessmentEstimator(ioData)
Context ioData = qualityAssessmentEstimator.process(ioData);
Accurate estimation requires a single face in the frame, looking at the camera; otherwise the overall score will be low, since the algorithm takes into account the face's relative size, position, and orientation. If multiple faces are captured, each is processed independently.
The result of calling qualityAssessmentEstimator() is appended to the ioData container.
The output data format is a Context with the "quality" key.
In the "assessment" modification, "quality" contains a Context with the full set of estimation scores; in the "estimation" modification, its "totalScore" key contains an overall score of type long in the range [0, 100].
assessment modification
/*
{
"quality": {
"total_score": {"type": "double", "minimum": 0, "maximum": 100},
"is_sharp": {"type": "boolean"},
"sharpness_score": {"type": "double", "minimum": 0, "maximum": 100},
"is_evenly_illuminated": {"type": "boolean"},
"illumination_score": {"type": "double", "minimum": 0, "maximum": 100},
"no_flare": {"type": "boolean"},
"is_left_eye_opened": {"type": "boolean"},
"left_eye_openness_score": {"type": "double", "minimum": 0, "maximum": 100},
"is_right_eye_opened": {"type": "boolean"},
"right_eye_openness_score": {"type": "double", "minimum": 0, "maximum": 100},
"is_rotation_acceptable": {"type": "boolean"},
"max_rotation_deviation": {"type": "long"},
"not_masked": {"type": "boolean"},
"not_masked_score": {"type": "double", "minimum": 0, "maximum": 100},
"is_neutral_emotion": {"type": "boolean"},
"neutral_emotion_score": {"type": "double", "minimum": 0, "maximum": 100},
"is_eyes_distance_acceptable": {"type": "boolean"},
"eyes_distance": {"type": "long", "minimum": 0},
"is_margins_acceptable": {"type": "boolean"},
"margin_outer_deviation": {"type": "long", "minimum": 0},
"margin_inner_deviation": {"type": "long", "minimum": 0},
"is_not_noisy": {"type": "boolean"},
"noise_score": {"type": "double", "minimum": 0, "maximum": 100},
"watermark_score": {"type": "double", "minimum": 0},
"has_watermark": {"type": "boolean"},
"dynamic_range_score": {"type": "double", "minimum": 0},
"is_dynamic_range_acceptable": {"type": "boolean"},
"is_background_uniform": {"type": "boolean"},
"background_uniformity_score": {"type": "double", "minimum": 0, "maximum": 100}
}
}
*/
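As one way to consume the assessment output, here is a hedged sketch that collects the boolean checks signalling a problem. The sample dictionary is illustrative, not real SDK output, and treating only has_watermark as inverted (True means a defect) is an assumption based on the key names above:

```python
# Boolean keys where True indicates a defect rather than a pass (assumption)
INVERTED = {"has_watermark"}

def failed_checks(quality):
    """Return the names of boolean quality checks that flag a problem."""
    problems = []
    for key, value in quality.items():
        if not isinstance(value, bool):
            continue  # skip numeric scores
        is_problem = value if key in INVERTED else not value
        if is_problem:
            problems.append(key)
    return sorted(problems)

# Illustrative fragment of an "assessment" result
sample_quality = {
    "total_score": 71.5,
    "is_sharp": True,
    "sharpness_score": 88.0,
    "is_evenly_illuminated": False,
    "illumination_score": 34.2,
    "has_watermark": False,
}

print(failed_checks(sample_quality))  # ['is_evenly_illuminated']
```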
estimation modification
/*
{
"quality": {
"totalScore": {"type": "long", "minimum": 0, "maximum": 100},
}
}
*/
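A minimal sketch of consuming the "estimation" output: accept a photo when the overall score clears a caller-chosen threshold. The key name and [0, 100] range follow the schema above; the threshold value of 50 is an arbitrary assumption, not an SDK recommendation.

```python
def is_acceptable(io_data, threshold=50):
    """Accept the photo if the overall quality score meets the threshold.

    Assumes the "totalScore" key and [0, 100] range shown in the
    schema above; 50 is an arbitrary example threshold.
    """
    return io_data["quality"]["totalScore"] >= threshold

print(is_acceptable({"quality": {"totalScore": 73}}))  # True
print(is_acceptable({"quality": {"totalScore": 12}}))  # False
```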
3. GPU Acceleration
- assessment modification: Quality Assessment doesn't support GPU acceleration by itself, but the modules it involves, listed in the config file, can have their own acceleration capabilities.