Version: 3.19.0

Processing Block API

Processing Block API is an alternative, scalable interface that replaces existing APIs for easier integration of SDK capabilities into your application.

Key features

  • Multiple components combined into a single integration
  • Simplicity and ease of learning
  • Rapid implementation
  • Long-term support and updates

Processing Block API is a part of the upcoming 3DiVi solutions. For more details, contact your 3DiVi Sales representative.

Block types, modifications and available versions

note

You can find Processing Block API usage examples below:

| Block type | Description | Modification | Version |
|---|---|---|---|
| FACE_DETECTOR | Detects human faces in an image. The detection result is a bounding rectangle (a frame) around each detected face. | ssyv | [1, 3] |
| | | uld | [1] |
| | | blf_back | [1] |
| | | blf_front | [1] |
| HUMAN_BODY_DETECTOR | Detects human bodies in an image. The detection result is a bounding rectangle (a frame) around each detected body. | ssyv | [1] |
| OBJECT_DETECTOR | Detects multiple objects in an image. The detection result is a bounding rectangle (a frame) around each detected object, with a classification name. | ssyx | [1] |
| HUMAN_POSE_ESTIMATOR | Estimates human body skeleton keypoints in an image. The result is a list of keypoints with their coordinates and the confidence score of the detected body. | heavy | [1] |
| EMOTION_ESTIMATOR | Estimates human emotions from a cropped face image. The result is a confidence value for each estimated emotion. | heavy | [1] |
| AGE_ESTIMATOR | Estimates a person's age from a cropped face image. The result is the estimated age. | heavy | [1, 2] |
| | | light | [1, 2] |
| GENDER_ESTIMATOR | Estimates a person's gender from a cropped face image. The result is a verdict about gender identity. | heavy | [1, 2] |
| | | light | [1, 2] |
| MASK_ESTIMATOR | Estimates the presence of a medical mask on a cropped face image. The result is a verdict about the presence of a mask. | light | [1, 2] |
| LIVENESS_ESTIMATOR | Estimates human liveness from a single color image. The result is a bounding rectangle (a frame) around the detected face with a liveness verdict and score. | | [1] |
| | Estimates human liveness from a single color image. The result is a liveness verdict and score. | v4 | [1] |
| QUALITY_ASSESSMENT_ESTIMATOR | Assesses the quality of a face in a single color image for identification tasks. The result is a detailed quality assessment. | assessment | [1] |
| | | estimation | [1] |
| FACE_FITTER | Calculates the key points of a human face in an image. The result is a list of key points with their coordinates. | tddfa_faster | [1] |
| | | tddfa | [1] |
| | | mesh | [1] |
| FACE_RECOGNIZER | Calculates templates (patterns) of a human face in an image. The result is a face template. | 1000 | [12] |
| MATCHER_MODULE | Compares templates of human faces. The result is a similarity verdict and the distance between templates. | | [1] |
note
  • The first modification listed for a block is the default.
  • The lowest version listed for a modification is the default.

Context

Processing Block API is based on the use of Context.

Context is a heterogeneous container that consists of a set of hierarchically organized data presented in the form of key–value pairs. The closest analogue of Context is a JSON object. Each Context object can contain a scalar object (integer, real, boolean, string), a memory area or pointer, a sequential array of Context objects, or an associative container of string-Context pairs, with unlimited nesting.
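
For intuition, a Context holding, say, a person record with a nested array corresponds to a JSON document like this (an illustrative sketch; the field names are arbitrary):

```json
{
  "name": "Julius Zeleny",
  "verified": true,
  "friends": [
    { "phone": 11111111111, "social_score": 0.999 }
  ]
}
```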

How to create and use a Context object

  1. Create a FacerecService.

  2. Then create a Context container:

auto array_elem0 = service->createContext();
  3. Common set of operations with a Context container:
  • creating an associative container by calling ["key"] on empty Context:
    array_elem0["name"] = "Julius Zeleny"; // put string
array_elem0["phone"] = 11111111111l;       // put integer (long)
array_elem0["social_score"] = 0.999;       // put double
array_elem0["verified"] = true;            // put bool
  • getters:
    ASSERT_EQ( array_elem0["name"].getString(), "Julius Zeleny" );
ASSERT_EQ( array_elem0["phone"].getLong(), 11111111111l );
ASSERT_EQ( array_elem0["social_score"].getDouble(), 0.999 );
ASSERT_EQ( array_elem0["verified"].getBool(), true );
  • creating a sequence array by calling push_back on empty Context:
    auto array = service->createContext();
array.push_back(array_elem0);
  • iterating over array:

// get by index
ASSERT_EQ( array[0]["phone"].getLong(), 11111111111l );

// iterate with index
size_t array_sz = array.size();
for(size_t i = 0; i < array_sz; ++i)
array[i]["phone"];

// or with iterators
for(auto iter = array.begin(); iter != array.end(); ++iter)
(*iter)["phone"]; // dereference returns nested Context

// with foreach
for(auto val : array)
val["phone"];
  • operations with a nested associative container:
    auto full = service->createContext();
full["friends"] = std::move(array); // move assignment without copying

// access nested object
ASSERT_EQ( full["friends"][0]["social_score"].getDouble(), 0.999 );

// iterate over an associative container's values
for(auto iter = full.begin(); iter != full.end(); ++iter) {
iter.key(); // get key value from iterator
(*iter)[0]["social_score"].getDouble(); // get value
}

// with foreach
for(auto val : full)
val[0]["social_score"].getDouble();
  • other convenient Context methods:
    void clear()
bool contains(const std::string& key) // for an associative container
Context operator[](size_t index) // for a sequence array, access the specified element with bounds checking
Context operator[](const std::string& key) // for an associative container, access or insert
Context at(const std::string& key) // for an associative container, with bounds checking
size_t size() // returns the element count of a container
bool isNone() // is empty
bool isArray() // is a sequence array
bool isObject() // is an associative container
bool isLong(), isDouble(), isString(), isBool() // check whether a certain scalar type is contained
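
These methods can be combined. A short sketch using the `full` container from the example above (assuming `<cassert>` for the checks; this depends on the Face SDK's `pbio` library and is not runnable standalone):

```cpp
#include <cassert>

// A sketch continuing the "full" example above; method names as listed.
assert( full.isObject() );              // an associative container
assert( full.contains("friends") );     // key lookup
assert( full["friends"].isArray() );    // a sequence array
assert( full["friends"].size() == 1 );  // element count
assert( full["friends"][0].at("phone").isLong() ); // at(): access with bounds checking

full.clear(); // remove all elements
```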

Binary Image Format

Most processing blocks operate on a Context containing an image in binary format:

/*
{
    "image" : {
        "format": "NDARRAY",
        "blob": <data pointer>,
        "dtype": "uint8_t",
        "shape": [height, width, channels]
    }
}
*/

The "blob" key contains a smart pointer to the data. The pointer is set by the function void Context::setDataPtr(void* ptr, int copy_sz), where copy_sz is the size, in bytes, of the memory to be copied; the copy is released automatically when the Context object's lifetime ends. No copy is made if 0 is passed as copy_sz; in that case the Context object does not control the lifetime of the data it points to.

You can also allocate raw memory, e.g. to copy data into later, by passing nullptr and a size as the arguments of setDataPtr.

The "dtype" key can contain one of these values: "uint8_t", "int8_t", "uint16_t", "int16_t", "int32_t", "float", "double". These correspond to the OpenCV types CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F.
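
As a sketch, the binary image Context described above could also be filled by hand with setDataPtr. This assumes a FacerecService instance `service` from the `pbio` library and an 8-bit RGB buffer; the exact sub-key access pattern should be checked against the Face SDK samples:

```cpp
#include <cstdint>
#include <vector>

// A minimal sketch: filling the binary image Context manually.
// `service` is an existing pbio::FacerecService (an assumption here).
auto imgCtx = service->createContext();

const int height = 480, width = 640, channels = 3;
std::vector<uint8_t> pixels(height * width * channels); // your RGB data

imgCtx["image"]["format"] = "NDARRAY";
imgCtx["image"]["dtype"] = "uint8_t";
imgCtx["image"]["shape"].push_back(height);
imgCtx["image"]["shape"].push_back(width);
imgCtx["image"]["shape"].push_back(channels);

// copy_sz != 0: the Context copies the buffer and manages the copy's lifetime
imgCtx["image"]["blob"].setDataPtr(pixels.data(), static_cast<int>(pixels.size()));
```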

Creating a Context container with RGB-image

  1. Create a FacerecService.

  2. Create a Context container for the image using the createContext() method:

auto imgCtx = service->createContext();
  3. Read an RGB image from a file:
// read the image from file
std::string input_image_path = "<path_to_image>";
cv::Mat image = cv::imread(input_image_path, cv::IMREAD_COLOR);
cv::Mat input_image;
cv::cvtColor(image, input_image, cv::COLOR_BGR2RGB);
  4. a) Put the image into the container:
// using pbio::context_utils::putImage(Context& ctx, unsigned char* data, size_t height, size_t width, pbio::IRawImage::Format format, bool copy)
pbio::context_utils::putImage(imgCtx, input_image.data, input_image.rows, input_image.cols, pbio::IRawImage::FORMAT_RGB, true);

     b) Alternatively, copy an image from pbio::RawImage, pbio::CVRawImage, or pbio::InternalImageBuffer into binary format and put it into the Context container:
// constructing pbio::RawImage
pbio::RawImage input_rawimg(input_image.cols, input_image.rows, pbio::RawImage::Format::FORMAT_RGB, input_image.data);

// using void putImage(Context& ctx, const RawImage& raw_image)
pbio::context_utils::putImage(imgCtx, input_rawimg);

Creating a Processing Block

This template can be used to create any processing block. The unit_type and model_path keys must be specified according to the block you want to use (see the description of the specific processing block).

  1. Create a FacerecService.

  2. Create a Context container:

auto configCtx = service->createContext();
  3. Define the fields of the created Context container for creating a Processing Block:
// mandatory, specify the name of processing block
configCtx["unit_type"] = "<name_of_processing_block>";

// if omitted, the default value will be used
configCtx["modification"] = "<modification>";

// if not specified, the first version of the modification will be used
configCtx["version"] = "<version>";

// the default models are located in the Face SDK distribution directory: share/processing_block/<modification>/(<version>/ or <version>.enc)
// you can set your own path to the model
configCtx["model_path"] = "<path_to_model_file>";

// default location of the onnxruntime library in the Face SDK distribution: the "lib" folder on Linux or the "bin" folder on Windows
// you can specify your own path to the onnxruntime library
// if no value is specified, the OS-specific default search order will be used
configCtx["ONNXRuntime"]["library_path"] = "../lib"; // for Linux
configCtx["ONNXRuntime"]["library_path"] = "../bin"; // for Windows

// optional, "true" if you want to use GPU acceleration (CUDA) for processing blocks that support it
configCtx["use_cuda"] = false;
  4. Create a Processing Block:
pbio::ProcessingBlock processing_block = service->createProcessingBlock(configCtx);
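
Putting the steps together, a block is applied by invoking it on a Context that holds the input image. The following is a sketch: the FACE_DETECTOR choice and the "objects"/"bbox"/"confidence" result keys follow the Face SDK samples and should be checked against the documentation of the specific block you use:

```cpp
// A minimal sketch, assuming a FacerecService `service` and a
// pbio::RawImage `input_rawimg` (see the RGB-image section above).
auto configCtx = service->createContext();
configCtx["unit_type"] = "FACE_DETECTOR"; // defaults: first modification, lowest version
pbio::ProcessingBlock detector = service->createProcessingBlock(configCtx);

auto ioData = service->createContext();
pbio::context_utils::putImage(ioData, input_rawimg); // image in binary format

// a block is invoked as a callable; results are appended to the same container
detector(ioData);

// per the Face SDK samples, detected faces land in the "objects" array
// (treat these key names as an assumption for your specific block)
for (auto obj : ioData["objects"]) {
    double conf = obj["confidence"].getDouble();
    // "bbox" holds the bounding rectangle coordinates
}
```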

GPU Acceleration

Processing Blocks can be used with GPU acceleration (CUDA). To enable acceleration, set the "use_cuda" key to true in the Processing Block configuration container. To run processing blocks on cuda-10.1, you must additionally set the "use_legacy" key to true in the block's configuration Context (see Creating a Processing Block).
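
In terms of the configuration container from Creating a Processing Block, this amounts to the following (a configuration fragment; key names as described above):

```cpp
configCtx["use_cuda"] = true;   // enable GPU (CUDA) acceleration
configCtx["use_legacy"] = true; // additionally required only on cuda-10.1
```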

System Requirements

The requirements are given on the GPU Usage page.