Version: 3.22.0 (latest)

Facial recognition

Overview

Face SDK supports the following comparison operations on face biometric templates:

  • Verification (1:1): comparison of two face templates to determine whether they belong to the same person (comparison of two faces);
  • Identification (1:N): comparison of one face template against other face templates (face search in a face database).

The recognition result is an estimate of similarity between the compared templates.

Facial recognition methods

Face SDK includes several recognition methods (algorithms) that differ in recognition accuracy and speed.

Method numbering system

The first number in the method name indicates the method version. The higher the version, the higher the method accuracy.

The second number indicates the approximate template creation time in milliseconds on a modern x86 processor with a frequency of ~3GHz. The slower the method, the higher its accuracy.

  • x.1000 and x.300: These methods provide the highest accuracy, but they are also the slowest ones. Use case: face recognition in expert systems on large databases (more than 1 million faces).
  • x.100: These methods are faster compared to the x.1000 and x.300 methods. Use case: real-time face recognition in a video stream on desktop/server platforms (x86). These methods can also be used to recognize faces in a video stream on modern mobile platforms (arm) with at least 4 cores and a core frequency of at least 1.6GHz.
  • x.50 and x.30: The fastest methods. Use case: face recognition in a video stream on mobile platforms (arm).

Deprecated method numbering system (for Face SDK up to version 3.3)

The first number (6.x/7.x/8.x) indicates the method accuracy/speed:

  • 7.x: These methods provide the highest accuracy, but they are also the slowest ones. Use case: face recognition in expert systems on large databases (more than 1 million faces).
  • 6.x: These methods are faster compared to the 7.x methods. Use case: real-time face recognition in a video stream on desktop/server platforms (x86). These methods can also be used to recognize faces in a video stream on modern mobile platforms (arm) with at least 4 cores and a core frequency of at least 1.6GHz.
  • 8.x: The fastest methods. Use case: face recognition in a video stream on mobile platforms (arm).

The second number indicates the method version. Later versions provide better quality, and you can switch to them when you update Face SDK from the corresponding earlier version. For example, if the 6.6 method was used, we recommend switching to its newer version 6.7 when upgrading Face SDK. The template size and template creation speed remain the same, but the recognition accuracy is higher.

In some cases, the latest Face SDK version may not contain updates for all groups of recognition methods. In that case, you can either keep using the old recognition method, or compare its accuracy with the new methods from a faster group and switch to them if you see an accuracy improvement. In addition, we provide recommendations on using specific recognition methods when upgrading to a newer Face SDK version.

Methods by use cases depending on Face SDK version

Use case / SDK version | Expert systems | Desktop/server platforms | Mobile platforms
SDK-2.0  | method7 | method6v2 | -
SDK-2.1  | method7 | method6v3 | -
SDK-2.2  | method7v2 | method6v4 | -
SDK-2.3  | method7v3 | method6v5 | -
SDK-2.5  | method7v6 | method6v6 | -
SDK-3.0  | method7v6 | method6v6 | method8v6
SDK-3.1  | method7v7 | method6v7 | method8v7
SDK-3.3  | method9v300, method9v1000 | method6v7 | method9v30
SDK-3.4  | method9v300, method9v1000, method9v300mask, method9v1000mask | method6v7 | method9v30, method9v30mask
SDK-3.11 | method9v300, method10v1000, method9v300mask, method9v1000mask | method10v100 | method10v30, method9v30mask
SDK-3.13 | method9v300, method11v1000, method9v300mask, method9v1000mask | method10v100 | method10v30, method9v30mask
SDK-3.17 | method9v300, method11v1000, method9v300mask, method9v1000mask, method12v1000 | method10v100, method12v100 | method10v30, method9v30mask, method12v30, method12v50

A dash means no method of that group is provided in the corresponding version.

Method acceleration

Methods can be accelerated by using AVX2 (available on Linux x86 64-bit only). To use the AVX2 instruction set, move the contents of the lib/tensorflow_avx2 directory to the lib directory and set the use_avx2 parameter in the configuration file. You can check which instruction sets your CPU supports by running the grep flags /proc/cpuinfo command.

note

AVX2 can be enabled for all methods except 12.x; for the 12.x methods it is enabled automatically.

Facial recognition

Creating Recognizer object

To recognize faces, create a Recognizer object by calling the FacerecService.createRecognizer method with a recognizer configuration file as an argument.

All available configuration files are stored in the conf folder of the Face SDK distribution. The recognition method used is specified in the name of the configuration file, for example: method12v30_recognizer.xml corresponds to method 12.30. The configuration files of the latest method versions can be identified by the word "latest" in their names, for example: recognizer_latest_v30.xml.

For recognition of masked faces, use the recognizer configuration files whose names include the word "mask", for example: method9v300mask_recognizer.xml. These methods provide higher recognition accuracy for masked faces. For example, the standard 9v1000 method provides TAR=0.72 for masked faces, and the optimized 9v1000mask method provides TAR=0.85 at FAR=1E-6.

Before creating a Recognizer, you can set values for the parameters below in the recognizer configuration file or via the Config.overrideParameter method of the FacerecService.Config object:

  • num_threads (integer): the number of threads the model runs on (for all methods).
  • inter_op_num_threads (integer): the number of threads used to parallelize model nodes (only for 12.x methods). The parameter works only with execution_mode (integer, 1 - enabled, 0 - disabled).
  • use_avx2: enables AVX2 on the CPU to speed up template creation (all methods except 12.x, linux-x86-64 only).
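
The sketch below shows one way to create a Recognizer with overridden parameters. It assumes a FacerecService object named service already exists and that createRecognizer accepts a FacerecService.Config object; the configuration file name and parameter values are illustrative only.

// build a configuration object for the chosen recognition method
pbio::FacerecService::Config recognizer_config("method12v30_recognizer.xml");

// override parameters before the Recognizer is created (illustrative values)
recognizer_config.overrideParameter("num_threads", 4);    // run the model on 4 threads
// recognizer_config.overrideParameter("use_avx2", 1);    // non-12.x methods, linux-x86-64 only

// create the Recognizer (service is an existing pbio::FacerecService::Ptr)
const pbio::Recognizer::Ptr recognizer = service->createRecognizer(recognizer_config);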

Note: You can learn how to detect and recognize masked faces in our tutorial.

Extracting a template

To extract a biometric template, use the Recognizer.processing method, passing a RawSample object as an argument. The result is a Template, an interface object that stores a face template.

With the Template object you can do the following:

  • Get a method name (Template.getMethodName).
  • Save (serialize) the template to a binary stream (Template.save), for example, to store it on disk.

The biometric template is created in two variations: short (64 bytes) and long (512 bytes). For identification, you can enable the search for short templates, which reduces search time. In this case, the system first finds candidate matches with the short templates and then confirms them with a match against the long template.
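
As a minimal sketch of template serialization under the API described above (Template.save writing to a binary stream and Recognizer.loadTemplate reading one back), assuming templ is a Template obtained from Recognizer.processing and template.bin is a hypothetical file name:

#include <fstream>

// save the extracted template to disk as a binary stream
std::ofstream out_stream("template.bin", std::ios_base::binary);
templ->save(out_stream);
out_stream.close();

// later: load the saved template back for verification or identification
std::ifstream in_stream("template.bin", std::ios_base::binary);
const pbio::Template::Ptr loaded_template = recognizer->loadTemplate(in_stream);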

Face verification (1:1)

Note! Verification is only possible if templates are extracted using Recognizers created with the same method (specifying the same configuration file).

To compare two face templates between each other, do the following:

  1. Extract the templates from samples using the Recognizer.processing method, or load previously saved templates with Recognizer.loadTemplate.
  2. Call the Recognizer.verifyMatch method, passing the two Template objects as arguments.

Verification example (1:1)

// create face samples
const pbio::RawSample::Ptr sample1 = capturer->capture(image1);
const pbio::RawSample::Ptr sample2 = capturer->capture(image2);

// create templates
const pbio::Template::Ptr template1 = recognizer->processing(*sample1);
const pbio::Template::Ptr template2 = recognizer->processing(*sample2);

// verification
const pbio::Recognizer::MatchResult match = recognizer->verifyMatch(*template1, *template2);

Face identification (1:N)

Note! Identification is only possible if templates are extracted using Recognizers created with the same method (specifying the same configuration file).

To search for a template in the template database, follow the steps below:

  1. Load (Recognizer.loadTemplate) or extract (Recognizer.processing) the template you want to compare with other templates.

  2. Load the templates to search for matches (Recognizer.loadTemplate).

  3. To quickly search for short templates, create a TemplatesIndex object using the Recognizer.createIndex method.

    With the TemplatesIndex object you can:
    • Get a method name (TemplatesIndex.getMethodName).
    • Get a number of templates in the index (TemplatesIndex.size).
    • Get a specified template from the index by its number (TemplatesIndex.at).
    • Reserve memory for search (TemplatesIndex.reserveSearchMemory).

    Note: The methods Recognizer.createIndex and TemplatesIndex.at don't copy the template data but copy only pointers to the data.

  4. Search for templates in the index using the Recognizer.search method. Pass values for the following parameters as arguments:

  • query_template: the template to be compared with the others
  • templates_index: the TemplatesIndex to search in
  • search_k_nearest: the number of nearest templates to return
  • search acceleration type:
    • SEARCH_ACCELERATION_1: accelerated linear search
    • NO_SEARCH_ACCELERATION: standard linear search without acceleration

Identification example (1:N)

The example below shows how a TemplatesIndex is formed from face detections in one image. Recognizer.search returns the search_k_nearest closest matches for the queried template in the index.

// capture the faces
const std::vector<pbio::RawSample::Ptr> samples = capturer->capture(...);

// make templates
std::vector<pbio::Template::Ptr> templates;
for(size_t i = 0; i < samples.size(); ++i)
{
    const pbio::Template::Ptr templ = recognizer->processing(*samples[i]);
    templates.push_back(templ);
}

// identification
const int search_k_nearest = 1;
const pbio::TemplatesIndex::Ptr index = recognizer->createIndex(templates);
const std::vector<pbio::Recognizer::SearchResult> search_results = recognizer->search(*templates[0], *index, search_k_nearest, pbio::Recognizer::SEARCH_ACCELERATION_1);

Match result

1:1 and 1:N match results are returned in the MatchResult structure, which contains the following parameters:

  • Distance: the distance between the compared template vectors. The smaller the value, the higher the confidence in correct recognition.
  • FAR (False Acceptance Rate): the rate at which the system mistakes images of different people for images of the same person.
  • FRR (False Rejection Rate): the rate at which the system mistakes two images of the same person for images of different people.
  • Score: the degree of similarity of the faces from 0 (0%) to 1 (100%). A high degree of similarity means that the two biometric templates belong to the same person.
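
The sketch below reads these values after a 1:1 comparison. The MatchResult field names (distance, fa_r, fr_r, score) are an assumption for illustration and may differ in your API wrapper.

#include <iostream>

// compare two previously extracted templates and inspect the result
const pbio::Recognizer::MatchResult match = recognizer->verifyMatch(*template1, *template2);

std::cout << "distance: " << match.distance << "\n"  // smaller distance = more similar faces
          << "far:      " << match.fa_r     << "\n"  // false acceptance rate at this distance
          << "frr:      " << match.fr_r     << "\n"  // false rejection rate at this distance
          << "score:    " << match.score    << "\n"; // similarity from 0 (0%) to 1 (100%)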

You can set a target FAR or FRR value (FAR takes precedence) and the desired Score value in the method configuration file. Such a setting is required, for example, if you switch to a new method but need to keep the Score scale of the previous method.

// Specify "1" to use new values, "0" to use default values
<fit_score>1</fit_score>

// Then specify new values for Score, FAR and FRR
<target_score>85</target_score>
<target_far>0.00000052</target_far>
<target_frr>0.1</target_frr>

You can also provide your own Score formula in terms of far, tar (1 - far) and distance value:

<fit_score>1</fit_score>
<use_custom_score_formula>1</use_custom_score_formula>
// This formula is an example and differs from the default one
<score_formula>(1-(log10(far)+8)/8)</score_formula>
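
For example, with this formula a comparison at far = 1e-8 yields a Score of 1, far = 1 yields a Score of 0, and the Score decreases linearly with log10(far) in between (e.g., far = 1e-4 gives 0.5).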