Version: 3.22.2

Facial recognition

Introduction

Face SDK allows you to perform the following operations to compare biometric face templates:

  • Verification (1:1) — comparison of two biometric face templates to determine whether they belong to the same person.
  • Identification (1:N) — comparison of one biometric face template against a database of face templates (face search). The result of the comparison is a similarity score between the compared templates.

Processing Blocks for facial recognition

  • FACE_TEMPLATE_EXTRACTOR — used to build biometric face templates.
  • VERIFICATION_MODULE — used for comparison of two faces.
  • MATCHER_MODULE — used for searching faces in the database.
  • TEMPLATE_INDEX — used to create a database of biometric face templates for searching with the MATCHER_MODULE block.

Modifications and versions of facial recognition blocks

The modification of the FACE_TEMPLATE_EXTRACTOR processing block determines the template generation speed and the recognition accuracy: the slower the modification, the higher its recognition accuracy. Currently, the following modifications exist:

| Version | Template creation (ms) | Template size (bytes) |
|---------|------------------------|-----------------------|
| 1       | 647                    | 296                   |
Note: the default modification is "1000".

For the VERIFICATION_MODULE, MATCHER_MODULE, and TEMPLATE_INDEX blocks, the modification and version determine the template type.

Facial recognition blocks specification

Processing Block configurable parameters

Face template extractor

  1. The input Context must contain an image in binary format and an "objects" array produced by the Face Detector and Face Fitter blocks:

Input Context specification:
{
  "image": {
    "format": "NDARRAY",
    "blob": "data pointer",
    "dtype": "uint8_t",
    "shape": [height, width, channels]
  },
  "objects": [{
    "id": {"type": "long", "minimum": 0},
    "class": "face",
    "confidence": {"type": "double", "minimum": 0, "maximum": 1},
    "bbox": [x1, y1, x2, y2],
    "keypoints": {
      "left_eye_brow_left": {"proj": [x, y]},
      "left_eye_brow_up": {"proj": [x, y]},
      "left_eye_brow_right": {"proj": [x, y]},
      "right_eye_brow_left": {"proj": [x, y]},
      "right_eye_brow_up": {"proj": [x, y]},
      "right_eye_brow_right": {"proj": [x, y]},
      "left_eye_left": {"proj": [x, y]},
      "left_eye": {"proj": [x, y]},
      "left_eye_right": {"proj": [x, y]},
      "right_eye_left": {"proj": [x, y]},
      "right_eye": {"proj": [x, y]},
      "right_eye_right": {"proj": [x, y]},
      "left_ear_bottom": {"proj": [x, y]},
      "nose_left": {"proj": [x, y]},
      "nose": {"proj": [x, y]},
      "nose_right": {"proj": [x, y]},
      "right_ear_bottom": {"proj": [x, y]},
      "mouth_left": {"proj": [x, y]},
      "mouth": {"proj": [x, y]},
      "mouth_right": {"proj": [x, y]},
      "chin": {"proj": [x, y]},
      "points": [{"proj": [x, y]}]
    }
  }]
}
  2. After the face template extractor is called, attributes corresponding to this block are added to each object in the "objects" array. The biometric template is stored as a binary blob.

Output Context specification:
[{
  "keypoints": {},
  "template": {
    "face_template_extractor_{modification}_{version}": {
      "format": "NDARRAY",
      "blob": "data pointer",
      "dtype": "uint8_t",
      "shape": [size]
    }
  }
}]

Verification module

  1. The input Context container must contain two biometric templates in the "template1" and "template2" fields. The template type must correspond to the modification of the Processing Block.

Input Context specification:
{
  "template1": {
    "face_template_extractor_{modification}_{version}": {
      "format": "NDARRAY",
      "blob": "data pointer",
      "dtype": "uint8_t",
      "shape": [size]
    }
  },
  "template2": {
    "face_template_extractor_{modification}_{version}": {
      "format": "NDARRAY",
      "blob": "data pointer",
      "dtype": "uint8_t",
      "shape": [size]
    }
  }
}
  2. After the verification processing block is called, the result is placed in the "result" field.

Output Context specification:
[{
  "template2": {},
  "result": {
    "distance": {"type": "long", "minimum": 0},
    "score": {"type": "double", "minimum": 0, "maximum": 1},
    "far": {"type": "double", "minimum": 0, "maximum": 1},
    "frr": {"type": "double", "minimum": 0, "maximum": 1}
  }
}]

Template index module

  1. The input Context must contain an array of biometric templates in the "templates" field.

Input Context specification:
{
  "templates": [{
    "face_template_extractor_{modification}_{version}": {
      "format": "NDARRAY",
      "blob": "data pointer",
      "dtype": "uint8_t",
      "shape": [size]
    }
  }]
}
  2. After the Template Index block is called, the resulting index is placed in the "template_index" field.

Output Context specification:
[{
  "templates": {},
  "template_index": "Non-serializable"
}]

Matcher module

  1. The input Context must contain a "template_index" obtained from the TEMPLATE_INDEX block and an array of query biometric templates in the "queries" field.

Input Context specification:
{
  "queries": [{
    "template": {
      "face_template_extractor_{modification}_{version}": {
        "format": "NDARRAY",
        "blob": "data pointer",
        "dtype": "uint8_t",
        "shape": [size]
      }
    }
  }],
  "template_index": "Non-serializable"
}
  2. After the Matcher module is called, the results are placed in the "result" array.

Output Context specification:
[{
  "template_index": "Non-serializable",
  "result": [{
    "distance": {"type": "long", "minimum": 0},
    "score": {"type": "double", "minimum": 0, "maximum": 1},
    "far": {"type": "double", "minimum": 0, "maximum": 1},
    "frr": {"type": "double", "minimum": 0, "maximum": 1}
  }]
}]

Facial recognition results

  • distance — distance between the compared template vectors. The smaller the value, the higher the confidence in correct recognition.
  • far — false acceptance rate: the probability that the system mistakes images of two different people for images of the same person.
  • frr — false rejection rate: the probability that the system mistakes two images of the same person for images of different people.
  • score — degree of similarity of the faces, from 0 (0%) to 1 (100%). A high score means the two biometric templates likely belong to the same person.

Examples of working with facial recognition blocks

Face template extraction

To obtain a face template from an image, follow these steps:

  1. Create a configuration Context container: set "unit_type" to "FACE_TEMPLATE_EXTRACTOR" and specify the "modification" and "version" you are interested in.

    Example of creating a processing block can be found here

    Processing Block configurable parameters

  2. Pass the Context container obtained after the operation of face detection and fitter processing blocks.

  3. Call the face template extractor.

    auto configCtx = service->createContext();
    configCtx["unit_type"] = "FACE_TEMPLATE_EXTRACTOR";
    configCtx["modification"] = "{modification}";
    pbio::ProcessingBlock blockFaceExtractor = service->createProcessingBlock(configCtx);

    //------------------
    // ioData is a Context container with a binary image, created as shown
    // in "Processing Block configurable parameters"
    //------------------

    faceDetector(ioData);
    faceFitter(ioData);
    blockFaceExtractor(ioData);

Saving a face template

To save a face template to a file, follow these steps:

  1. Get the binary data of the face template and the size of the template.

  2. Create a file to write the binary data.

  3. Write the binary data of the face template to the file.

    auto template_ctx = ioData["objects"][0]["template"];
    size_t template_size = template_ctx["face_template_extractor_{modification}_{version}"]["shape"][0].getLong();
    uint8_t* template_ptr = template_ctx["face_template_extractor_{modification}_{version}"]["blob"].getDataPtr();

    std::ofstream out("template.bin", std::ios::binary);

    out.write(reinterpret_cast<char*>(template_ptr), template_size);

Loading a face template

To load a face template from a file, follow these steps:

  1. Open the file to read the binary data.

  2. Create a byte array and read the data from the file.

  3. Create a Context container for the face template.

    std::ifstream input("template.bin", std::ios::binary);

    // Determine the file size to know how many bytes to read
    input.seekg(0, std::ios::end);
    size_t template_size = input.tellg();
    input.seekg(0, std::ios::beg);

    uint8_t* template_ptr = new uint8_t[template_size];
    input.read(reinterpret_cast<char*>(template_ptr), template_size);

    auto template_ctx = service.createContext();
    template_ctx["face_template_extractor_{modification}_{version}"]["blob"].setDataPtr(template_ptr, template_size);
    template_ctx["face_template_extractor_{modification}_{version}"]["shape"].push_back(template_size);
    template_ctx["face_template_extractor_{modification}_{version}"]["dtype"] = "uint8_t";
    template_ctx["face_template_extractor_{modification}_{version}"]["format"] = "NDARRAY";

Face verification

  1. Create a configuration Context container and specify the values for "unit_type" as "VERIFICATION_MODULE", and specify the "modification" and "version" for the modification you are interested in.

  2. Generate a Context container according to the specification. Pass the contents of the "template" field obtained when calling "FACE_TEMPLATE_EXTRACTOR" to keys "template1" and "template2".

  3. Call the verification module.

    Context verificationConfig = service.createContext();
    verificationConfig["unit_type"] = "VERIFICATION_MODULE";
    verificationConfig["modification"] = "{modification}";

    api::ProcessingBlock verificationModule = service.createProcessingBlock(verificationConfig);

    Context verificationData = service.createContext();
    verificationData["template1"] = ioData["objects"][0]["template"];
    verificationData["template2"] = ioData2["objects"][0]["template"];

    verificationModule(verificationData);

Search for faces in the database

  1. Create a configuration Context container: set "unit_type" to "TEMPLATE_INDEX" and specify the "modification" and "version" you are interested in.

  2. Generate an input Context container according to the specification for "TEMPLATE_INDEX".

  3. Call the processing block to create the template database.

  4. Create a configuration Context container: set "unit_type" to "MATCHER_MODULE" and specify the "modification" and "version" you are interested in.

  5. Generate an input Context container according to the specification for "MATCHER_MODULE".

  6. Call the matcher module.


    Context templateIndexConfig = service.createContext();
    templateIndexConfig["unit_type"] = "TEMPLATE_INDEX";
    templateIndexConfig["modification"] = "{modification}";

    api::ProcessingBlock templateIndex = service.createProcessingBlock(templateIndexConfig);

    // Creating the input Context container for templateIndex
    Context templatesData = service.createContext();
    for (const Context& object : ioData["objects"])
    {
        templatesData["templates"].push_back(object["template"]);
    }

    templateIndex(templatesData);

    Context matcherConfig = service.createContext();
    matcherConfig["unit_type"] = "MATCHER_MODULE";
    matcherConfig["modification"] = "{modification}";

    api::ProcessingBlock matcherModule = service.createProcessingBlock(matcherConfig);

    // Forming the input Context container for matcherModule
    Context matcherData = service.createContext();
    matcherData["template_index"] = templatesData["template_index"];
    matcherData["queries"].push_back(ioData2["objects"][0]);

    matcherModule(matcherData);