Version: 3.18.2

Quality Assessment

In this section you'll learn how to integrate the Quality Assessment Estimator into your C++ or Python project.

Quality Assessment Estimation (C++/Python)

Requirements

  • Windows x86 64-bit or Linux x86 64-bit system.
  • Installed Face SDK package windows_x86_64 or linux_x86_64 (see Getting Started).

1. Creating a Quality Assessment Estimator

1.1 To create a Quality Assessment Estimator, follow steps 1-3 described in Creating a Processing Block and specify the following values:

  • "QUALITY_ASSESSMENT_ESTIMATOR" for the "unit_type" key;
  • an empty string "" for the "model_path" key:
configCtx["unit_type"] = "QUALITY_ASSESSMENT_ESTIMATOR";
configCtx["model_path"] = "";

// optional, default values are specified after "="
// paths specified for examples located in <sdk_dir>/bin
configCtx["sdk_path"] = "..";
configCtx["config_name"] = "quality_assessment.xml";
configCtx["facerec_conf_dir"] = sdk_path + "/conf/facerec/"

1.2 Create a Quality Assessment Estimator Processing Block:

pbio::ProcessingBlock qualityAssessmentEstimator = service->createProcessingBlock(configCtx);
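createProcessingBlock throws if the configuration is invalid (for example, a wrong sdk_path). As a small defensive sketch, not part of the original steps, the call can be wrapped in a try/catch using the SDK's pbio::Error exception type:

#include <iostream>

// Defensive sketch (not from the original steps): createProcessingBlock
// throws pbio::Error on an invalid configuration. In real code, keep the
// block alive outside the try scope (e.g. via std::optional).
try
{
    pbio::ProcessingBlock qualityAssessmentEstimator = service->createProcessingBlock(configCtx);
    // ... use qualityAssessmentEstimator ...
}
catch (const pbio::Error& e)
{
    std::cerr << "Failed to create Quality Assessment Estimator: " << e.what() << std::endl;
}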

2. Quality Assessment Estimation

2.1 Create a Context container ioData for input-output data using the createContext() method:

auto ioData = service->createContext();

2.2 Create a Context container imgCtx with an RGB image in binary format, following steps 1-3 and 4b described in Creating a Context container with RGB-image.

// put an image into the container
auto wholeImageCtx = imgCtx["image"];
pbio::context_utils::putImage(wholeImageCtx, input_rawimg);

2.3.1 Create a Capturer object. See the class description in the Capturer Class section:

const pbio::Capturer::Ptr capturer = service->createCapturer("common_capturer_refa_fda_a.xml");

2.3.2 Detect faces using the capture method:

std::vector<pbio::RawSample::Ptr> samples = capturer->capture(input_rawimg);
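The following steps assume at least one detected face. A small guard (not part of the original flow) avoids silently producing an empty result:

#include <iostream>

// Guard sketch: if no face is found, there is nothing to estimate.
if (samples.empty())
{
    std::cerr << "No faces detected in the input image" << std::endl;
    return; // or handle the empty result in a way that suits your application
}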

2.4 Create a Context container capture_result. Put the binary image into the container using the "image" key and add an "objects" array that will collect the per-face results (this is the container that is moved to ioData in step 2.5.4):

auto capture_result = service->createContext();
auto captureImageCtx = capture_result["image"];
pbio::context_utils::putImage(captureImageCtx, input_rawimg);
auto objectsCtx = capture_result["objects"];

2.5.1 For each detected face, create a Context container faceData and pass the container with the image using the "image_ptr" key:

for (auto &sample : samples)
{
    auto faceData = service->createContext();
    faceData["image_ptr"] = capture_result["image"];

2.5.2 Convert the RawSample object into the Context container faceData:

    pbio::context_utils::putRawSample(faceData, sample, "fda", input_rawimg.width, input_rawimg.height);

2.5.3 Call qualityAssessmentEstimator() and pass the Context container faceData that contains the image:

    qualityAssessmentEstimator(faceData);

2.5.4 Move the processing result into the "objects" array of capture_result and, after the loop, move capture_result to the ioData container:

    objectsCtx.push_back(std::move(faceData));
}
ioData = std::move(capture_result);
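For reference, here is a condensed sketch assembling the snippets of steps 2.1-2.5 into one flow. It assumes the objects created earlier in this section (service, capturer, qualityAssessmentEstimator) and a loaded pbio::RawImage input_rawimg:

// Condensed sketch of steps 2.1-2.5
auto ioData = service->createContext();

// container holding the image and the per-face results
auto capture_result = service->createContext();
auto captureImageCtx = capture_result["image"];
pbio::context_utils::putImage(captureImageCtx, input_rawimg);
auto objectsCtx = capture_result["objects"];

// detect faces and estimate quality for each of them
std::vector<pbio::RawSample::Ptr> samples = capturer->capture(input_rawimg);
for (auto &sample : samples)
{
    auto faceData = service->createContext();
    faceData["image_ptr"] = capture_result["image"];
    pbio::context_utils::putRawSample(faceData, sample, "fda", input_rawimg.width, input_rawimg.height);
    qualityAssessmentEstimator(faceData);
    objectsCtx.push_back(std::move(faceData));
}
ioData = std::move(capture_result);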

Accurate estimation requires exactly one face in the frame, looking at the camera; otherwise the overall score will be low, since the algorithm takes into account the relative size, position, and orientation of the face. If multiple faces are captured, each is processed independently.

The result of calling qualityAssessmentEstimator() is appended to the ioData container.

The output data format is a Context with a "qaa" Context nested under the "quality" key.

The "quality"/"qaa" Context contains the full set of estimation scores:

  • "totalScore" key contains an overall score of type long in a range of [0,100]
/*
{
    "quality": {
        "qaa": {
            "totalScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isSharp": {"type": "boolean"},
            "sharpnessScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isEvenlyIlluminated": {"type": "boolean"},
            "illuminationScore": {"type": "long", "minimum": 0, "maximum": 100},
            "noFlare": {"type": "boolean"},
            "isLeftEyeOpened": {"type": "boolean"},
            "leftEyeOpennessScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isRightEyeOpened": {"type": "boolean"},
            "rightEyeOpennessScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isRotationAcceptable": {"type": "boolean"},
            "maxRotationDeviation": {"type": "long"},
            "notMasked": {"type": "boolean"},
            "notMaskedScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isNeutralEmotion": {"type": "boolean"},
            "neutralEmotionScore": {"type": "long", "minimum": 0, "maximum": 100},
            "isEyesDistanceAcceptable": {"type": "boolean"},
            "eyesDistance": {"type": "long", "minimum": 0},
            "isMarginsAcceptable": {"type": "boolean"},
            "marginOuterDeviation": {"type": "long", "minimum": 0},
            "marginInnerDeviation": {"type": "long", "minimum": 0},
            "isNotNoisy": {"type": "boolean"},
            "noiseScore": {"type": "long", "minimum": 0, "maximum": 100},
            "watermarkScore": {"type": "long", "minimum": 0},
            "hasWatermark": {"type": "boolean"},
            "dynamicRangeScore": {"type": "long", "minimum": 0},
            "isDynamicRangeAcceptable": {"type": "boolean"},
            "isBackgroundUniform": {"type": "boolean"},
            "backgroundUniformityScore": {"type": "long", "minimum": 0, "maximum": 100}
        }
    }
}
*/
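Below is a minimal sketch of reading a few of these values in C++. It assumes the scalar accessors getLong() and getBool() and the size() method of the Context class; check the exact accessor names against the Context API reference of your SDK version.

#include <iostream>

// Sketch: iterate over the per-face results collected in ioData["objects"]
// and print a couple of scores. getLong()/getBool() are assumed to be the
// scalar accessors of pbio::Context.
for (size_t i = 0; i < ioData["objects"].size(); ++i)
{
    auto qaa = ioData["objects"][i]["quality"]["qaa"];
    const int64_t totalScore = qaa["totalScore"].getLong();
    const bool isSharp = qaa["isSharp"].getBool();
    std::cout << "Face " << i << ": totalScore=" << totalScore
              << ", isSharp=" << std::boolalpha << isSharp << std::endl;
}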

3. GPU Acceleration

Quality Assessment doesn't support GPU acceleration by itself, but the modules it invokes, which are listed in its config file, may provide their own acceleration capabilities.