Version: 3.26.0 (latest)

Getting started

The Processing Block API is a scalable interface that replaces the Legacy API and simplifies the integration of Face SDK capabilities into your application.

Key features

  • Multiple components combined into a single integration
  • Simplicity and ease of learning
  • Rapid implementation
  • Long-term support and updates

Requirements

  • Windows x86 64-bit or Linux x86 64-bit.
  • Face SDK windows_x86_64 or linux_x86_64 is installed (see Getting Started).

Context-container

Processing Block API is based on the use of Context. Context is a heterogeneous container that consists of a set of hierarchically organized data presented in the form of key–value pairs. The closest analogue of Context is a JSON object. Each Context object can contain a scalar object (integer, real, boolean, string), a memory area or pointer, a sequential array of Context objects, or an associative container of string-Context pairs, with unlimited nesting.

How to create and use a Context object

  1. Create a FacerecService.

  2. Create a Context-container:

auto array_elem0 = service->createContext();
  3. Common set of operations with a Context-container:
  • creating an associative container by calling ["key"] on an empty Context:
array_elem0["name"] = "Julius Zeleny";      // pass string
array_elem0["phone"] = 11111111111l; // pass integer (long)
array_elem0["social_score"] = 0.999; // pass double
array_elem0["verified"] = true; // pass bool
  • getters:
ASSERT_EQ( array_elem0["name"].getString(), "Julius Zeleny" );
ASSERT_EQ( array_elem0["phone"].getLong(), 11111111111l );
ASSERT_EQ( array_elem0["social_score"].getDouble(), 0.999 );
ASSERT_EQ( array_elem0["verified"].getBool(), true );
  • creating a sequence array by calling push_back on an empty Context:
auto array = service->createContext();
array.push_back(array_elem0);
  • iterating over the array:

// get by index
ASSERT_EQ( array[0]["phone"].getLong(), 11111111111l );

// iterate with index
size_t array_sz = array.size();
for(size_t i = 0; i < array_sz; ++i)
array[i]["phone"];

// or with iterators
for(auto iter = array.begin(); iter != array.end(); ++iter)
(*iter)["phone"]; // deference returns nested Context

// or with foreach
for(auto val : array)
val["phone"];
  • operations with a nested associative container:
auto full = service->createContext();
full["friends"] = std::move(array); // move assignment without copying

// access to the nested object
ASSERT_EQ( full["friends"][0]["social_score"].getDouble(), 0.999 );

// iterate over the associative container's values
for(auto iter = full.begin(); iter != full.end(); ++iter) {
iter.key(); // get the key value from iterator
(*iter)[0]["social_score"].getDouble(); // get the value
}

// with foreach
for(auto val : full)
val[0]["social_score"].getDouble();
  • other convenient Context methods (a usage sketch follows after this list):
void clear();
bool contains(const std::string& key); // for an associative container
Context operator[](size_t index); // for a sequence array, access specified element with bounds checking
Context operator[](const std::string& key); // for an associative container, access or insert
Context at(const std::string& key); // for an associative container, with bounds checking
size_t size(); // returns the number of elements in a container
bool isNone(); // is empty
bool isArray(); // is a sequence array
bool isObject(); // is an associative container
bool isLong(), isDouble(), isString(), isBool(); // check if contains a certain scalar type
  • FacerecService methods related to Context:
// get Context from image file
pbio::Context createContextFromEncodedImage(const uint8_t* data, uint64_t dataSize);
pbio::Context createContextFromEncodedImage(const std::vector<uint8_t>& data);
pbio::Context createContextFromEncodedImage(const std::string& data);
pbio::Context createContextFromEncodedImage(const std::vector<char>& data);

// get Context from image bytes
pbio::Context createContextFromFrame(uint8_t* data, int32_t width, int32_t height, pbio::Context::Format format, int32_t baseAngle);
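
For illustration, here is a brief sketch that applies the inspection methods listed above to the "full" container assembled earlier:

ASSERT_EQ( full.isObject(), true );             // "full" is an associative container
ASSERT_EQ( full.contains("friends"), true );
ASSERT_EQ( full["friends"].isArray(), true );   // "friends" is a sequence array
ASSERT_EQ( full["friends"].size(), 1u );        // one element was pushed into it
ASSERT_EQ( full.at("friends")[0]["verified"].getBool(), true ); // at() performs bounds checking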

Binary image format

Most of the processing blocks operate on a Context containing an image in binary format:

{
  "image": {
    "format": "NDARRAY",
    "blob": "data pointer",
    "dtype": "uint8_t",
    "shape": [height, width, channels]
  }
}

The "blob" key contains a smart pointer to data. The pointer is set by the function void Context::setDataPtr(void* ptr, int copy_sz), where copy_sz is the size of memory in Bytes, that will be copied, and then automatically released when Context objects lifetime ends.

Copying will not perform if 0 is passed as argument copy_sz. In this case the Context object does not control the lifetime of the object it points to. You can also allocate a raw memory, f.e. to copy data later, passing nullptr and size as arguments of setDataPtr.
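
For example, the three modes described above can be sketched as follows (frame_ptr, height, width and imageCtx are placeholders: an already decoded 8-bit RGB frame and a Context-container laid out as shown above):

const int data_size = height * width * 3; // size of an 8-bit RGB frame in bytes

// 1) copy data_size bytes; the copy is released when the Context object's lifetime ends
imageCtx["image"]["blob"].setDataPtr(frame_ptr, data_size);

// 2) pass 0 as copy_sz: no copy is made and the Context does not own the memory
imageCtx["image"]["blob"].setDataPtr(frame_ptr, 0);

// 3) pass nullptr and a size: raw memory is allocated, e.g. to copy data into later
imageCtx["image"]["blob"].setDataPtr(nullptr, data_size);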

The "dtype" can contain one of these values: "uint8_t", "int8_t", "uint16_t", "int16_t", "int32_t", "float", "double". This is according to OpenCV types: CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F.

Create a Context-container with an RGB image

  1. Create a FacerecService.
  2. Read an image from the file:
std::string inputImagePath = "{path_to_image}";
std::ifstream imageFile(inputImagePath, std::ios::binary);
std::istreambuf_iterator<char> start(imageFile);
std::vector<char> imageData(start, std::istreambuf_iterator<char>());

  3. Create a Context-container with an image using the createContextFromEncodedImage() method:

pbio::Context ioData = service->createContextFromEncodedImage(imageData);
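
Assuming the resulting container follows the binary image layout described above, its contents can then be inspected with the usual Context getters (a sketch, not an exhaustive reference):

int64_t height   = ioData["image"]["shape"][0].getLong();
int64_t width    = ioData["image"]["shape"][1].getLong();
int64_t channels = ioData["image"]["shape"][2].getLong();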

Processing Blocks

Processing Blocks types

  • FACE_DETECTOR
  • HUMAN_BODY_DETECTOR
  • HUMAN_POSE_ESTIMATOR
  • OBJECT_DETECTOR
  • FACE_FITTER
  • EMOTION_ESTIMATOR
  • AGE_ESTIMATOR
  • GENDER_ESTIMATOR
  • MASK_ESTIMATOR
  • GLASSES_ESTIMATOR
  • LIVENESS_ESTIMATOR
  • QUALITY_ASSESSMENT_ESTIMATOR
  • FACE_TEMPLATE_EXTRACTOR
  • TEMPLATE_INDEX
  • MATCHER_MODULE
  • VERIFICATION_MODULE
note

Examples of using the Processing Block API are demonstrated in:

Processing Block parameters

  • unit_type: string — main parameter of the processing block, defines the type of the created module.
  • modification: string — optional parameter, defines modification of the processing block. If not specified, the default value will be used.
  • version: int64 — optional parameter, defines the version of modification of the processing block. If not specified, the default value will be used.
  • model_path: string — optional parameter, defines the path to the processing block model. If not specified, the default value will be used.
  • use_cuda: bool — optional parameter, responsible for starting the processing block on GPU. The default value is false.
  • device_id: int64 — optional parameter, specifies the GPU to use. The default value is 0.
  • use_legacy: bool — optional parameter, needed to use an older onnxruntime library. The default value is false.
  • ONNXRuntime — key for onnxruntime configuration parameters.
    • library_path: string — path to the onnxruntime library; by default, the directory containing libfacerec.so.
    • intra_op_num_threads: int64 — number of threads used to parallelize the module. The default value is 1.

Processing Block usage

  1. Create a Context-container, specify the parameters you need and pass it to the FacerecService.createProcessingBlock() method.

    // mandatory, specify the name of processing block
    auto configCtx = service->createContext();
    configCtx["unit_type"] = "<name_of_processing_block>";

    // if omitted, the default value will be used
    configCtx["modification"] = "<modification>";

    // if not specified, the first version of the modification will be used
    configCtx["version"] = <version>;

    // the default models are located in the Face SDK distribution directory: share/processing_block/<modification>/(<version>/ or <version>.enc)
    // you can set your own path to the model
    configCtx["model_path"] = "<path_to_model_file>";

    // default location of the onnxruntime library in the Face SDK distribution: the "lib" folder for the Linux platform or the "bin" folder for the Windows platform
    // you can specify your own path to onnxruntime library
    // if value is not specified, the os-specific default search order will be used
    configCtx["ONNXRuntime"]["library_path"] = "../lib"; // for Linux
    configCtx["ONNXRuntime"]["library_path"] = "../bin"; // for Windows

    // optional, "true" if you want to use GPU acceleration (CUDA) for processing block that support it
    configCtx["use_cuda"] = false;
    pbio::ProcessingBlock processing_block = service->createProcessingBlock(configCtx);
  2. Prepare input Context and pass it to processing block

    std::string inputImagePath = "{path_to_image}";
    std::ifstream imageFile(inputImagePath, std::ios::binary);
    std::istreambuf_iterator<char> start(imageFile);
    std::vector<char> imageData(start, std::istreambuf_iterator<char>());

    // creating a Context container with a binary image
    pbio::Context ioData = service->createContextFromEncodedImage(imageData);

    // Processing Block call
    processing_block(ioData);
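
After the call, the processing block writes its results back into the same ioData container, where they can be read with the Context API shown earlier. The output format depends on the concrete block; the following is a purely illustrative sketch for a detector-like block, and the "objects", "bbox" and "confidence" keys are assumptions rather than documented names:

// hypothetical output keys: check the output reference of the concrete processing block
for (auto obj : ioData["objects"])
{
    double confidence = obj["confidence"].getDouble(); // assumed key and type
    obj["bbox"];                                       // assumed: detection rectangle
}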

GPU acceleration

Processing Blocks can be used with GPU acceleration (CUDA). To activate acceleration, set the "use_cuda" key to true in the Processing Block configuration container. To run processing blocks on cuda-10.1, also set the "use_legacy" key to true in the same configuration container. The system requirements are available here.
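
For example, a minimal configuration sketch that enables CUDA using the parameters listed above:

auto configCtx = service->createContext();
configCtx["unit_type"] = "<name_of_processing_block>";
configCtx["use_cuda"] = true;       // run this block on GPU
configCtx["device_id"] = 0l;        // optional: index of the GPU to use
// configCtx["use_legacy"] = true;  // only when running on cuda-10.1 (older onnxruntime)
pbio::ProcessingBlock processingBlock = service->createProcessingBlock(configCtx);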