Getting Started
Processing Block API is an alternative, scalable interface that replaces the existing APIs and simplifies the integration of Face SDK capabilities into your application.
Requirements
- Windows x86 64-bit or Linux x86 64-bit.
- Face SDK windows_x86_64 or linux_x86_64 is installed (see Getting Started).
Key features
- Multiple components combined into a single integration
- Simplicity and ease of learning
- Rapid implementation
- Long-term support and updates
Processing Block API is a part of the upcoming 3DiVi solutions. For more details, contact your 3DiVi Sales representative.
Block types, modifications and available versions
You can find Processing Block API usage examples here:
- Python: `examples/python/processing_blocks/`
| Block type | Description | Modification | Version |
|---|---|---|---|
| FACE_DETECTOR | Detects human faces on an image. The detection result is a bounding rectangle (a frame) around each detected face. | ssyv | [1, 3] |
| | | uld | [1] |
| | | blf_back | [1] |
| | | blf_front | [1] |
| HUMAN_BODY_DETECTOR | Detects human bodies on an image. The detection result is a bounding rectangle (a frame) around the detected body. | ssyv | [1] |
| OBJECT_DETECTOR | Detects multiple objects on an image. The detection result is a bounding rectangle (a frame) around each detected object with a classification name. | ssyx | [1] |
| HUMAN_POSE_ESTIMATOR | Estimates human body skeleton keypoints on an image. The result is a list of keypoints with their coordinates and a confidence score for the detected body. | heavy | [1] |
| EMOTION_ESTIMATOR | Estimates human emotions from a cropped face image. The estimation result is a confidence score for every estimated emotion. | heavy | [1] |
| AGE_ESTIMATOR | Estimates a human age from a cropped face image. The estimation result is the age. | heavy | [1, 2] |
| | | light | [1, 2] |
| GENDER_ESTIMATOR | Estimates a human gender from a cropped face image. The estimation result is a gender verdict. | heavy | [1, 2] |
| | | light | [1, 2] |
| MASK_ESTIMATOR | Estimates the presence of a medical mask on a cropped face image. The estimation result is a verdict about the presence of a mask. | light | [1, 2] |
| LIVENESS_ESTIMATOR | Estimates human liveness on a single color image. The result is a bounding rectangle (a frame) around the detected face with a liveness verdict and score. | | [1] |
| | Estimates human liveness on a single color image. The estimation result is a liveness verdict and score. | v4 | [1] |
| QUALITY_ASSESSMENT_ESTIMATOR | Assesses the quality of a face in a single color image for identification tasks. The estimation result is a detailed quality analysis. | assessment | [1] |
| | | estimation | [1] |
| FACE_FITTER | Calculates the key points of a human face in an image. The result is a list of key points with their coordinates. | tddfa_faster | [1] |
| | | tddfa | [1] |
| | | mesh | [1] |
| FACE_RECOGNIZER | Calculates patterns of a human face in an image. The result is a human face pattern. | 1000 | [12] |
| MATCHER_MODULE | Compares patterns of human faces. The result is a verdict of similarity and the distance between templates. | | [1] |
- The first modification listed for a block is its default modification.
- The minimum version listed for a modification is its default version.
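These two defaulting rules can be sketched in plain Python. The table excerpt below is a hypothetical, hard-coded subset of the table above, used only for illustration; the SDK resolves defaults internally:

```python
# Hypothetical excerpt of the table above: block type -> {modification: [versions]}.
# Dicts preserve insertion order, so the first key is the first listed modification.
BLOCKS = {
    "FACE_DETECTOR": {"ssyv": [1, 3], "uld": [1], "blf_back": [1], "blf_front": [1]},
    "AGE_ESTIMATOR": {"heavy": [1, 2], "light": [1, 2]},
}

def defaults(block_type):
    """Return (default modification, default version) for a block type."""
    mods = BLOCKS[block_type]
    first_mod = next(iter(mods))             # first listed modification is the default
    return first_mod, min(mods[first_mod])   # minimum version is the default

assert defaults("FACE_DETECTOR") == ("ssyv", 1)
assert defaults("AGE_ESTIMATOR") == ("heavy", 1)
```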
Context
Processing Block API is based on the use of Context.
Context is a heterogeneous container that consists of a set of hierarchically organized data presented in the form of key–value pairs. The closest analogue of Context is a JSON object. Each Context object can contain a scalar object (integer, real, boolean, string), a memory area or pointer, a sequential array of Context objects, or an associative container of string-Context pairs, with unlimited nesting.
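As an illustration of this structure, the hierarchy a Context holds maps naturally onto a nested Python dictionary. This is plain Python, not SDK code; the sample values match the examples used below:

```python
# Plain-Python analogue of a Context hierarchy: an associative container
# whose values are scalars, a sequence array, or nested containers.
person = {
    "name": "Julius Zeleny",   # string scalar
    "phone": 11111111111,      # integer scalar
    "social_score": 0.999,     # real scalar
    "verified": True,          # boolean scalar
}
root = {"friends": [person]}   # a sequence array nested under a key

# Nested access mirrors Context indexing: root["friends"][0]["name"]
assert root["friends"][0]["social_score"] == 0.999
```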
How to create and use a Context object
Create a Context container:
- C++
- Python
- Flutter
auto array_elem0 = service->createContext();
array_elem0 = service.create_context({})
Context array_elem0 = service.createContext({});
- Common set of operations with a Context container:
- creating an associative container by calling `["key"]` on an empty Context:
- C++
- Python
- Flutter
array_elem0["name"] = "Julius Zeleny"; // pass string
array_elem0["phone"] = 11111111111l; // pass integer (long)
array_elem0["social_score"] = 0.999; // pass double
array_elem0["verified"] = true; // pass bool
array_elem0["name"] = "Julius Zeleny" # pass string
array_elem0["phone"] = 11111111111 # pass integer
array_elem0["social_score"] = 0.999 # pass double
array_elem0["verified"] = True # pass bool
array_elem0["name"] = "Julius Zeleny"; // pass string
array_elem0["phone"] = 11111111111; // pass integer
array_elem0["social_score"] = 0.999; // pass double
array_elem0["verified"] = true; // pass bool
- getters:
- C++
- Python
- Flutter
ASSERT_EQ( array_elem0["name"].getString(), "Julius Zeleny" );
ASSERT_EQ( array_elem0["phone"].getLong(), 11111111111l );
ASSERT_EQ( array_elem0["social_score"].getDouble(), 0.999 );
ASSERT_EQ( array_elem0["verified"].getBool(), true );
assert array_elem0["name"].get_value() == "Julius Zeleny"
assert array_elem0["phone"].get_value() == 11111111111
assert array_elem0["social_score"].get_value() == 0.999
assert array_elem0["verified"].get_value() == True
assert (array_elem0["name"].get_value() == "Julius Zeleny");
assert (array_elem0["phone"].get_value() == 11111111111);
assert (array_elem0["social_score"].get_value() == 0.999);
assert (array_elem0["verified"].get_value() == true);
- creating a sequence array by calling `push_back` on an empty Context:
- C++
- Python
- Flutter
auto array = service->createContext();
array.push_back(array_elem0);
array = service.create_context([])
array.push_back(array_elem0)
Context array = service.createContext([]);
array.pushBack(array_elem0);
- iterating over array:
- C++
- Python
- Flutter
// get by index
ASSERT_EQ( array[0]["phone"].getLong(), 11111111111l );
// iterate with index
size_t array_sz = array.size();
for(size_t i = 0; i < array_sz; ++i)
array[i]["phone"];
// or with iterators
for(auto iter = array.begin(); iter != array.end(); ++iter)
(*iter)["phone"]; // dereference returns nested Context
// or with foreach
for(auto val : array)
val["phone"];
# get by index
assert array[0]["phone"].get_value() == 11111111111
# iterate with index
for i in range(len(array)):
array[i]["phone"]
# or with iterators
for elem in array:
elem["phone"]
// get by index
assert (array[0]["phone"].get_value() == 11111111111);
// iterate with index
for (int i = 0; i < array.len(); i++)
array[i]["phone"];
- operations with a nested associative container:
- C++
- Python
- Flutter
auto full = service->createContext();
full["friends"] = std::move(array); // move assignment without copying
// access to the nested object
ASSERT_EQ( full["friends"][0]["social_score"].getDouble(), 0.999 );
// iterate over associative containers values
for(auto iter = full.begin(); iter != full.end(); ++iter) {
iter.key(); // get the key value from iterator
(*iter)[0]["social_score"].getDouble(); // get the value
}
// with foreach
for(auto val : full)
val[0]["social_score"].getDouble();
full = service.create_context()
full["friends"] = array.to_dict()
# access to the nested object
assert full["friends"][0]["social_score"].get_value() == 0.999
# iterate over associative containers values
for key in full.keys():
full[key][0]["social_score"].get_value()
Context full = service.createContext({});
full["friends"] = array.toMap();
// access to the nested object
assert (full["friends"][0]["social_score"].get_value() == 0.999);
// iterate over associative containers values
for (var key in full.getKeys())
full[key][0]["social_score"].get_value();
- other convenient Context methods:
- C++
- Python
- Flutter
void clear()
bool contains(const std::string& key) // for an associative container
Context operator[](size_t index) // for a sequence array, access specified element with bounds checking
Context operator[](const std::string& key) // for an associative container, access or insert
Context at(const std::string& key) // for an associative container, with bounds checking
size_t size() // returns the number of elements in a container
bool isNone() // is empty
bool isArray() // is a sequence array
bool isObject() // is an associative container
bool isLong(), isDouble(), isString(), isBool() // check if contains a certain scalar type
def to_dict(self) -> dict # converts context to dictionary
def is_none(self) -> bool # is empty
def is_array(self) -> bool # check for sequential array
def is_object(self) -> bool # check for associative container
def is_long(self), is_double(self), is_string(self), is_bool(self) -> bool # check for a certain scalar data type
Context operator[](int index) // for a sequence array, access specified element with bounds checking
Context operator[](String key) // for an associative container, access or insert
int len() // returns the number of elements in a container
bool is_none() // is empty
bool is_array() // is a sequence array
bool is_object() // is an associative container
bool is_long(), is_double(), is_string(), is_bool() // check if contains a certain scalar type
Binary Image Format
Most processing blocks operate on a Context containing an image in binary format:
/*
{
"image" : { "format": "NDARRAY",
"blob": <data pointer>,
"dtype": "uint8_t",
"shape": [height, width, channels] }
}
*/
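For illustration, the same structure can be assembled in plain Python for a tiny 2×2 RGB image. Here raw `bytes` stand in for the `<data pointer>`; this is a sketch of the format, not SDK code:

```python
height, width, channels = 2, 2, 3
# Raw interleaved RGB bytes, one uint8_t per channel value.
pixels = bytes(range(height * width * channels))

image_ctx = {
    "image": {
        "format": "NDARRAY",
        "blob": pixels,                       # stands in for the data pointer
        "dtype": "uint8_t",
        "shape": [height, width, channels],
    }
}

# For uint8_t data, the blob length must equal the product of the shape.
h, w, c = image_ctx["image"]["shape"]
assert len(image_ctx["image"]["blob"]) == h * w * c
```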
The `"blob"` key contains a smart pointer to data. The pointer is set by the function `void Context::setDataPtr(void* ptr, int copy_sz)`, where `copy_sz` is the size of the memory in bytes that will be copied and then automatically released when the Context object's lifetime ends. No copying is performed if `0` is passed as `copy_sz`; in this case the Context object does not control the lifetime of the data it points to. You can also allocate raw memory, e.g. to copy data into later, by passing `nullptr` and a size as the arguments of `setDataPtr`.
The `"dtype"` key can contain one of the following values: `"uint8_t"`, `"int8_t"`, `"uint16_t"`, `"int16_t"`, `"int32_t"`, `"float"`, `"double"`. These correspond to the OpenCV types `CV_8U`, `CV_8S`, `CV_16U`, `CV_16S`, `CV_32S`, `CV_32F`, `CV_64F`.
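The correspondence above can be written down as a lookup table. The element sizes follow from the C type names; the mapping is a reference sketch, not an SDK API:

```python
# "dtype" string -> (OpenCV type name, size of one channel value in bytes)
DTYPE_TO_CV = {
    "uint8_t":  ("CV_8U", 1),
    "int8_t":   ("CV_8S", 1),
    "uint16_t": ("CV_16U", 2),
    "int16_t":  ("CV_16S", 2),
    "int32_t":  ("CV_32S", 4),
    "float":    ("CV_32F", 4),
    "double":   ("CV_64F", 8),
}

def blob_size(dtype, shape):
    """Expected blob size in bytes for a given dtype and [h, w, c] shape."""
    size = DTYPE_TO_CV[dtype][1]
    for dim in shape:
        size *= dim
    return size

assert DTYPE_TO_CV["uint8_t"][0] == "CV_8U"
assert blob_size("float", [4, 4, 3]) == 192
```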
Creating a Context container with RGB-image
- C++
- Python
- Flutter
- Create a Context container for the image using the `createContext()` method:
auto imgCtx = service->createContext();
- Read an RGB image from a file:
// read the image from file
std::string input_image_path = "<path_to_image>";
cv::Mat image = cv::imread(input_image_path, cv::IMREAD_COLOR);
cv::Mat input_image;
cv::cvtColor(image, input_image, cv::COLOR_BGR2RGB);
- Option A: put the image data directly into the container:
// using pbio::context_utils::putImage(Context& ctx, unsigned char* data, size_t height, size_t width, pbio::IRawImage::Format format, bool copy)
pbio::context_utils::putImage(imgCtx, input_image.data, input_image.rows, input_image.cols, pbio::IRawImage::FORMAT_RGB, true);
- Option B: copy an image from `pbio::RawImage`, `pbio::CVRawImage`, or `pbio::InternalImageBuffer` to binary format and put it into the Context container:
// constructing pbio::RawImage
pbio::RawImage input_rawimg(input_image.cols, input_image.rows, pbio::RawImage::Format::FORMAT_RGB, input_image.data);
// using void putImage(Context& ctx, const RawImage& raw_image)
pbio::context_utils::putImage(imgCtx, input_rawimg);
- Read an RGB image from the file:
input_image_path = "<path_to_image>"
image = cv2.imread(input_image_path, cv2.IMREAD_COLOR)
input_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- Create a dictionary in binary image format:
imgDict = {
"blob": input_image.tobytes(),
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in input_image.shape]
}
- Create a Context container for the image using the `create_context()` method:
imgCtx = service.create_context(imgDict)
- Read an RGB image from the file:
File file = File("<imagePath>");
final Uint8List bytes = await file.readAsBytes();
final ImageDescriptor descriptor = await ImageDescriptor.encoded(await ImmutableBuffer.fromUint8List(bytes));
- Create a dictionary in binary image format:
Map<String, dynamic> imageContext = {
"blob": bytes,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [descriptor.height, descriptor.width, 3]
};
- Create a Context container for the image using the `createContext()` method:
imgCtx = service.createContext(imageContext);
Creating a Processing Block
This template can be used to create any processing block. The `unit_type` and `model_path` keys must be specified according to the block you want to use (see the description of the specific processing block).
- C++
- Python
- Flutter
- Create a Context container:
auto configCtx = service->createContext();
- Define fields in the created context container for creating a Processing Block:
// mandatory, specify the name of processing block
configCtx["unit_type"] = "<name_of_processing_block>";
// if omitted, the default value will be used
configCtx["modification"] = "<modification>";
// if not specified, the first version of the modification will be used
configCtx["version"] = "<version>";
// the default models are located in the Face SDK distribution directory: share/processing_block/<modification>/(<version>/ or <version>.enc)
// you can set your own path to the model
configCtx["model_path"] = "<path_to_model_file>";
// default location of the onnxruntime library in the Face SDK distribution: the "lib" folder for the Linux platform or the "bin" folder for the Windows platform
// you can specify your own path to onnxruntime library
// if value is not specified, the os-specific default search order will be used
configCtx["ONNXRuntime"]["library_path"] = "../lib"; // for Linux
configCtx["ONNXRuntime"]["library_path"] = "../bin"; // for Windows
// optional, "true" if you want to use GPU acceleration (CUDA) for processing blocks that support it
configCtx["use_cuda"] = false;
- Create a Processing Block:
pbio::ProcessingBlock processing_block = service->createProcessingBlock(configCtx);
- Create a dict container:
configDict = {}
- Define the values of the Context container keys specific to the selected processing block:
# mandatory, specify the name of processing block
configDict["unit_type"] = "<name_of_processing_block>"
# if omitted, the default value will be used
configDict["modification"] = "<modification>"
# if not specified, the first version of the modification will be used
configDict["version"] = "<version>"
# the default models are located in the Face SDK distribution directory: share/processing_block/<modification>/(<version>/ or <version>.enc)
# you can set your own path to the model
configDict["model_path"] = "<path_to_model_file>"
# default location of the onnxruntime library in the Face SDK distribution: the "lib" folder for the Linux platform or the "bin" folder for the Windows platform
# you can specify your own path to onnxruntime library
# if value is not specified, the os-specific default search order will be used
configDict["ONNXRuntime"] = {"library_path": "../lib"} # for Linux
configDict["ONNXRuntime"] = {"library_path": "../bin"} # for Windows
# optional, True if you want to use GPU acceleration (CUDA) for processing blocks that support it
configDict["use_cuda"] = False
- Create a Processing Block:
processing_block = service.create_processing_block(configDict)
- Create a Map container:
Map<String, dynamic> configMap = {};
- Define the values of the Context container keys specific to the selected processing block:
// mandatory, specify the name of processing block
configMap["unit_type"] = "<name_of_processing_block>";
// if omitted, the default value will be used
configMap["modification"] = "<modification>";
// if not specified, the first version of the modification will be used
configMap["version"] = "<version>";
// the default models are located in the Face SDK distribution directory: share/processing_block/<modification>/(<version>/ or <version>.enc)
// you can set your own path to the model
configMap["model_path"] = "<path_to_model_file>";
- Create a Processing Block:
processing_block = service.createProcessingBlock(configMap);
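Putting the keys above together, a complete configuration might look like the dictionary below. The values are placeholders for illustration; the consistency checks are a sketch, not part of the SDK:

```python
# Illustrative processing block configuration (placeholder values).
config = {
    "unit_type": "FACE_DETECTOR",      # mandatory: name of the processing block
    "modification": "ssyv",            # optional: defaults to the first modification
    "version": 1,                      # optional: defaults to the minimum version
    "model_path": "<path_to_model_file>",        # optional: override model location
    "ONNXRuntime": {"library_path": "../lib"},   # optional: onnxruntime location
    "use_cuda": False,                 # optional: GPU acceleration (CUDA)
}

# Only "unit_type" is mandatory; every other key has a default.
assert "unit_type" in config
optional = {"modification", "version", "model_path", "ONNXRuntime", "use_cuda"}
assert optional.issubset(config)
```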
GPU Acceleration
Processing Blocks can be used with GPU acceleration (CUDA). To activate acceleration, set the `use_cuda` key to `true` in the Processing Block configuration container. To run processing blocks on cuda-10.1, you also need to set the `use_legacy` key to `true` in the configuration container of the processing block (see Creating a Processing Block).
The system requirements are available here.
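In terms of the configuration dictionary from the previous section, enabling acceleration is a matter of two keys. A minimal sketch (placeholder block name, not SDK code):

```python
# Enable CUDA acceleration for a processing block configuration.
config = {"unit_type": "FACE_DETECTOR", "use_cuda": True}

# For cuda-10.1 only, additionally enable the legacy path:
config["use_legacy"] = True

assert config["use_cuda"] and config["use_legacy"]
```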