Quality Assessment
In this section you'll learn how to integrate the Quality Assessment Estimator into your C++ or Python project.
Quality Assessment Estimation (C++/Python)
Requirements
- Windows x86 64-bit or Linux x86 64-bit system.
- Installed Face SDK package windows_x86_64 or linux_x86_64 (see Getting Started).
1. Creating a Quality Assessment Estimator
1.1 To create a Quality Assessment Estimator, follow steps 1-3 described in Creating a Processing Block and specify the values:
- "QUALITY_ASSESSMENT_ESTIMATOR" for the "unit_type" key;
- a blank string "" for the "model_path" key.
- C++
- Python
configCtx["unit_type"] = "QUALITY_ASSESSMENT_ESTIMATOR";
configCtx["model_path"] = "";
// optional, default values are specified after "="
// paths specified for examples located in <sdk_dir>/bin
configCtx["sdk_path"] = "..";
configCtx["config_name"] = "quality_assessment.xml";
configCtx["facerec_conf_dir"] = sdk_path + "/conf/facerec/"
configCtx = {
    "unit_type": "QUALITY_ASSESSMENT_ESTIMATOR",
    "model_path": "",
    # optional, default values are specified after ":"
    # paths specified for examples located in <sdk_dir>/bin
    "sdk_path": "..",
    "config_name": "quality_assessment.xml",
    "facerec_conf_dir": sdk_path + "/conf/facerec/"
}
1.2 Create a Quality Assessment Estimator Processing Block:
- C++
- Python
pbio::ProcessingBlock qualityAssessmentEstimator = service->createProcessingBlock(configCtx);
qualityAssessmentEstimator = service.create_processing_block(configCtx)
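The snippets above assume an already created FacerecService instance named service. A minimal sketch of creating one in Python; the module name and both paths are assumptions for a script run from <sdk_dir>/bin:

from face_sdk_3divi import FacerecService

# hypothetical paths, relative to <sdk_dir>/bin
service = FacerecService.create_service("../lib/libfacerec.so", "../conf/facerec")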
2. Quality Assessment Estimation
2.1 Create a Context container ioData for input-output data using the createContext() method:
- C++
- Python
auto ioData = service->createContext();
ioData = {"objects": []}
2.2 Create a Context container imgCtx with an RGB image in binary format following the steps (1-3, 4b) described in Creating a Context container with RGB-image.
- C++
- Python
// put an image into the container
auto wholeImageCtx = imgCtx["image"];
pbio::context_utils::putImage(wholeImageCtx, input_rawimg);
// add `"objects"` key
auto objectsCtx = imgCtx["objects"];
# copy an image into the binary format
input_rawimg = image.tobytes()
# put an image into the container
imageCtx = {
    "blob": input_rawimg,
    "dtype": "uint8_t",
    "format": "NDARRAY",
    "shape": [dim for dim in image.shape]
}
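The snippet above assumes that image is already an RGB ndarray. A minimal sketch of producing it, assuming the image is loaded with OpenCV (cv2):

import cv2

# OpenCV loads images as BGR; the container expects RGB
image = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2RGB)
input_rawimg = image.tobytes()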
2.3.1 Create a Capturer object. See the class description in the Capturer Class section:
- C++
- Python
const pbio::Capturer::Ptr capturer = service->createCapturer("common_capturer_refa_fda_a.xml");
capturer = service.create_capturer(Config("common_capturer_refa_fda_a.xml"))
2.3.2 Detect faces using the capture method:
- C++
- Python
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(input_rawimg);
samples = capturer.capture(input_rawimg)
2.4 Create a Context container capture_result. Put a binary image into the container using the "image" key:
capture_result = service->createContext();
auto wholeImageCtx = capture_result["image"];
pbio::context_utils::putImage(wholeImageCtx, input_rawimg);
2.5.1 For each detected face, create a Context container faceData and pass a container with an image using the "image_ptr" key:
- C++
- Python
for(auto &sample: samples)
{
auto faceData = service->createContext();
faceData["image_ptr"] = capture_result["image"];
for sample in samples:
faceData = {"image_ptr": imageСtx}
2.5.2 Convert a RawSample object to the Context container faceData:
- C++
- Python
    pbio::context_utils::putRawSample(faceData, sample, "fda", input_rawimg.width, input_rawimg.height);
    frame = sample.get_rectangle()
    faceData["bbox"] = [
        float(frame.x / image.shape[1]),
        float(frame.y / image.shape[0]),
        float((frame.x + frame.width) / image.shape[1]),
        float((frame.y + frame.height) / image.shape[0]),
    ]
    faceData["confidence"] = sample.get_score()
    faceData["class"] = "face"
    fitter_data = {}
    fitter_data["keypoints"] = []
    fitter_data["fitter_type"] = "fda"  # "lbf29", "lbf68" or "esr"
    points = sample.get_landmarks()
    for pt in points:
        fitter_data["keypoints"].append(pt.x)
        fitter_data["keypoints"].append(pt.y)
        fitter_data["keypoints"].append(pt.z)
    fitter_data["left_eye"] = [sample.get_left_eye().x, sample.get_left_eye().y]
    fitter_data["right_eye"] = [sample.get_right_eye().x, sample.get_right_eye().y]
    faceData["fitter"] = fitter_data
    faceData["angles"] = {}
    faceData["angles"]["yaw"] = sample.get_angles().yaw
    faceData["angles"]["pitch"] = sample.get_angles().pitch
    faceData["angles"]["roll"] = sample.get_angles().roll
    faceData["id"] = sample.get_id()
2.5.3 Call qualityAssessmentEstimator() and pass the Context container faceData that contains the image:
- C++
- Python
    qualityAssessmentEstimator(faceData);
    qualityAssessmentEstimator(faceData)
2.5.4 Move the block processing result to the ioData container:
- C++
- Python
    objectsCtx.push_back(std::move(faceData));
}
ioData = std::move(capture_result);
    ioData["objects"].append(faceData)
Accurate estimation requires that only one person's face is in the frame, looking at the camera; otherwise, the overall score will be low, since the algorithm takes the relative size, position, and orientation of the face into account. If multiple faces are captured, each is processed independently.
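For instance, a minimal guard in Python (a sketch using the samples list from step 2.3.2) can warn when the single-face assumption is violated:

# hypothetical guard: total scores degrade when more than one face is in the frame
if len(samples) != 1:
    print(f"warning: {len(samples)} faces detected, total scores may be low")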
The result of calling qualityAssessmentEstimator() will be appended to the ioData container.
The output data format is a Context with the nested keys "quality" and "qaa". The "qaa" key contains a Context with the full set of estimation scores. The "totalScore" key contains the overall score of type long in the range [0, 100]:
/*
{
    "quality": {
        "qaa": {
            "totalScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isSharp": {"type": "boolean"},
            "sharpnessScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isEvenlyIlluminated": {"type": "boolean"},
            "illuminationScore": {"type": "long", "minimum": 0, "maximum": 100},
            "noFlare": {"type": "boolean"},
            "isLeftEyeOpened": {"type": "boolean"},
            "leftEyeOpennessScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isRightEyeOpened": {"type": "boolean"},
            "rightEyeOpennessScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isRotationAcceptable": {"type": "boolean"},
            "maxRotationDeviation": {"type": "long"},
            "notMasked": {"type": "boolean"},
            "notMaskedScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isNeutralEmotion": {"type": "boolean"},
            "neutralEmotionScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isEyesDistanceAcceptable": {"type": "boolean"},
            "eyesDistance": {"type": "long", "minimum": 0},
            "isMarginsAcceptable": {"type": "boolean"},
            "marginOuterDeviation": {"type": "long", "minimum": 0},
            "marginInnerDeviation": {"type": "long", "minimum": 0},
            "isNotNoisy": {"type": "boolean"},
            "noiseScore": {"type": "long", "minimum": 0, "maximum": 100},
            "watermarkScore": {"type": "long", "minimum": 0},
            "hasWatermark": {"type": "boolean"},
            "dynamicRangeScore": {"type": "long", "minimum": 0},
            "isDynamicRangeAcceptable": {"type": "boolean"},
            "isBackgroundUniform": {"type": "boolean"},
            "backgroundUniformityScore": {"type": "long", "minimum": 0, "maximum": 100}
        }
    }
}
*/
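The individual estimations can then be read from the container, for example like this in Python (a sketch assuming the dict-based ioData built in the steps above, with at least one detected face):

qaa = ioData["objects"][0]["quality"]["qaa"]
print("total score:", qaa["totalScore"])  # overall quality, 0-100
print("sharp:", qaa["isSharp"], qaa["sharpnessScore"])
print("illumination:", qaa["illuminationScore"])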
Quality Assessment Estimator usage examples:
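A minimal end-to-end Python sketch assembled from the steps above; the module name face_sdk_3divi, the cv2 dependency, and all file paths are assumptions to adapt to your installation:

import cv2
from face_sdk_3divi import FacerecService, Config

# hypothetical paths, relative to <sdk_dir>/bin
service = FacerecService.create_service("../lib/libfacerec.so", "../conf/facerec")

qualityAssessmentEstimator = service.create_processing_block({
    "unit_type": "QUALITY_ASSESSMENT_ESTIMATOR",
    "model_path": ""
})
capturer = service.create_capturer(Config("common_capturer_refa_fda_a.xml"))

# load an RGB image and wrap it into a Context-like dict (step 2.2)
image = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2RGB)
input_rawimg = image.tobytes()
imageCtx = {
    "blob": input_rawimg,
    "dtype": "uint8_t",
    "format": "NDARRAY",
    "shape": [dim for dim in image.shape]
}

ioData = {"objects": []}
for sample in capturer.capture(input_rawimg):
    # convert the RawSample to a Context dict (steps 2.5.1-2.5.2)
    frame = sample.get_rectangle()
    faceData = {
        "image_ptr": imageCtx,
        "bbox": [
            float(frame.x / image.shape[1]),
            float(frame.y / image.shape[0]),
            float((frame.x + frame.width) / image.shape[1]),
            float((frame.y + frame.height) / image.shape[0]),
        ],
        "confidence": sample.get_score(),
        "class": "face",
        "fitter": {
            "fitter_type": "fda",
            "keypoints": [c for pt in sample.get_landmarks() for c in (pt.x, pt.y, pt.z)],
            "left_eye": [sample.get_left_eye().x, sample.get_left_eye().y],
            "right_eye": [sample.get_right_eye().x, sample.get_right_eye().y],
        },
        "angles": {
            "yaw": sample.get_angles().yaw,
            "pitch": sample.get_angles().pitch,
            "roll": sample.get_angles().roll,
        },
        "id": sample.get_id(),
    }
    qualityAssessmentEstimator(faceData)  # estimate quality (step 2.5.3)
    ioData["objects"].append(faceData)    # collect the result (step 2.5.4)

for obj in ioData["objects"]:
    print(obj["id"], obj["quality"]["qaa"]["totalScore"])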
3. GPU Acceleration
Quality Assessment doesn't support GPU acceleration by itself, but the involved modules, listed in the config file, can have their own acceleration capabilities.