Version: 1.18.0

Face recognition settings

Change score threshold

The score parameter indicates the degree of similarity between faces, from 0 (0%) to 1 (100%). A high score means that two biometric templates are likely to belong to the same person. The default threshold value is 0.85.

You can change the score threshold for OMNI Platform through the updateWorkspaceConfig API request, which accepts two threshold values as arguments: activityScoreThreshold (the score required to link an activity to a profile) and notificationScoreThreshold (the score required to create notifications for a profile). See Integrations for more details.
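The exact request shape depends on your Platform version and should be checked against the Integrations section; the sketch below assumes a GraphQL mutation named updateWorkspaceConfig whose argument layout, workspaceId parameter, and ok response field are illustrative. Only the two threshold field names come from this page.

```graphql
# Hypothetical request shape — verify against the actual schema in Integrations.
mutation {
  updateWorkspaceConfig(
    workspaceId: "<your-workspace-id>"
    config: {
      activityScoreThreshold: 0.85
      notificationScoreThreshold: 0.85
    }
  ) {
    ok
  }
}
```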

tip

Make sure the score values specified for OMNI Agent and OMNI Platform match. Otherwise, some of the activities generated from the transferred agent processes will not be linked to the corresponding profile, which means that notifications for such activities won't be received.

For example:

  • score specified for OMNI Agent = 0.7
  • score specified for OMNI Platform = 0.85

In this case, activities generated from Agent processes with a score value in the range [0.7, 0.85) will not be attached to the corresponding profile, and notifications for them will also not appear.
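The effect of the mismatch can be sketched as a minimal model (this is not Platform code; the function and variable names are illustrative):

```python
# Minimal model of how mismatched score thresholds lose notifications.
# Names are illustrative, not taken from OMNI Platform code.

AGENT_SCORE_THRESHOLD = 0.7      # score configured on OMNI Agent
PLATFORM_SCORE_THRESHOLD = 0.85  # activityScoreThreshold on OMNI Platform

def agent_reports_match(score: float) -> bool:
    """The agent considers the faces matched and transfers the activity."""
    return score >= AGENT_SCORE_THRESHOLD

def platform_links_activity(score: float) -> bool:
    """The platform links the activity to a profile and can create a notification."""
    return score >= PLATFORM_SCORE_THRESHOLD

# A score of 0.8 falls in [0.7, 0.85): the agent reports a match,
# but the platform neither links the activity nor creates a notification.
score = 0.8
assert agent_reports_match(score)
assert not platform_links_activity(score)
```

With equal thresholds on both sides, every transferred activity that the agent matched is also linked by the platform, so this gap disappears.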

Change face recognition method

OMNI Platform uses face recognition methods from Face SDK.

The method refers to the version of the face recognition model. The default method in OMNI Platform is 12v1000. For faster processing, you can switch to the 12v100 method, at the cost of reduced recognition accuracy.

During platform installation

At the stage of filling out the configuration files, open the ./cfg/platform.values.yaml file and replace the method in the generic.default_template_version field. Next, open the ./cfg/image-api.values.yaml file and replace the method in the fields face-detector-template-extractor.configs.recognizer.name, template-extractor.configs.recognizer.name and verify-matcher.configs.recognizer.name.
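The edits above could look like the following fragments. Only the field paths are taken from this page; the surrounding structure of the values files, and whether the method is given as a bare version string or in another format, should be checked against the defaults shipped with your distribution.

```yaml
# ./cfg/platform.values.yaml — switch the recognition method at install time.
generic:
  default_template_version: 12v100   # default is 12v1000

# ./cfg/image-api.values.yaml — use the same method in all three services.
face-detector-template-extractor:
  configs:
    recognizer:
      name: 12v100
template-extractor:
  configs:
    recognizer:
      name: 12v100
verify-matcher:
  configs:
    recognizer:
      name: 12v100
```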

On deployed platform

  1. Stop the platform:
./cli.sh platform uninstall
  2. Stop image-api services:
./cli.sh image-api uninstall
  3. Delete the platform database:
./cli.sh platform db-reset
  4. Open the ./cfg/platform.values.yaml file and replace the method in the generic.recognizer, backend.default_template_version and processing.recognizer_methods fields. Next, open the ./cfg/image-api.values.yaml file and replace the method in the fields face-detector-template-extractor.configs.recognizer.name and verify-matcher.configs.recognizer.name.
  5. Launch image-api services:
./cli.sh image-api install
  6. Launch the platform:
./cli.sh platform install
note

There is currently no option to switch the method on an OMNI Platform instance with an already created database. If you still decide to change the recognition method, you will need to recreate the database.

Face detection settings

ATTENTION

Customizing for specific use cases is available only for services with face detection: face-detector-face-fitter, face-detector-liveness-estimator, face-detector-template-extractor.

Image API provides face detection configurations for the following use cases:

  • Safe city
  • ACS (biometric terminal)
  • ACS (camera)
  • Remote identification

For each use case, we've created a set of specific configuration files. The filename indicates the use case and the level of detection accuracy.

FOR EXAMPLE

The access_control_system_several_faces_q1.xml configuration file is tuned for non-cooperative recognition in ACS and delivers q1-level (highest) detection accuracy.

note

Configuration files marked with q1 provide the highest detection accuracy, while q2-marked files ensure the fastest face detection. You can explore benchmark results for all available configuration files on the separate use case pages.

Safe city

Application

Missing person search, tracking down criminals, and collecting statistics. Applied on city streets, public spaces, entertainment and shopping centers. Top priority is to never miss a wanted individual, even if it may lead to false identifications.

Use case requirements

  • Detecting the maximum number of faces in the frame
  • Dense flow of people in the frame (~1 human/m²)
  • People in the frame not facing the camera or slowing down for identification
  • Human speed in the captured frame up to 5 km/h (individuals moving at a standard walking pace)
  • Frames captured under changing lighting and weather conditions, with camera lenses subject to dirt or obstruction
  • Head rotation angle in relation to the camera lens not exceeding 40° horizontally and 20° vertically
  • Image type for detection and identification is "WILD" (according to NIST), which corresponds to QAA totalscore >= 40%

Configuration files:

  • safety_city_q1.xml
  • safety_city_q2.xml

How to configure

  1. Open the ./cfg/image-api.values.yaml file in the Image API distribution, find the capturer configuration object (path to the object: processing.services.<service name>.configs.capturer) and enter the same values for the fields of the capturer object in each detection service: face-detector-face-fitter, face-detector-liveness-estimator, face-detector-template-extractor.

    Example

    configs:
      capturer:
        name: safety_city_q2.xml # name of the Face SDK configuration file
  2. Open the ./cfg/platform.values.yaml file in OMNI Platform distribution, find the capturer configuration object (path to the object: generic.capturer) and enter the values for the fields of the capturer object.

    Example

    generic:
      capturer:
        name: safety_city_q2.xml # name of the Face SDK configuration file
  3. After editing the files, save them and update OMNI Platform in the cluster using the commands:

    ./cli.sh image-api install
    ./cli.sh platform install

Benchmark results

Capturer configuration file    Time to detect one frame (ms)    Detection accuracy (0 to 1)
safety_city_q1.xml             1350                             0.74
safety_city_q2.xml             370                              0.685
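The per-frame times above translate directly into per-worker throughput, which can help when sizing a deployment. A minimal sketch of that arithmetic, using the "Safe city" numbers from the table:

```python
# Convert per-frame detection times from the benchmark table into throughput.
# Benchmark values are taken from the "Safe city" table above.

def throughput_fps(detect_ms: float) -> float:
    """Frames per second a single worker can process at the given per-frame time."""
    return 1000.0 / detect_ms

benchmarks = {
    "safety_city_q1.xml": (1350, 0.74),   # (time per frame in ms, accuracy)
    "safety_city_q2.xml": (370, 0.685),
}

for name, (ms, accuracy) in benchmarks.items():
    print(f"{name}: {throughput_fps(ms):.2f} fps, accuracy {accuracy}")
```

The same calculation applies to the other use-case tables below; it illustrates the q1/q2 trade-off, where q2 processes roughly 3-4x more frames per second at somewhat lower accuracy.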

ACS (biometric terminal)

Application

Time and attendance systems and corporate access control systems utilizing biometric terminals or cameras, primarily deployed in well-lit environments. Top priority is ensuring accurate identification without any errors.

Use case requirements

  • Frames taken by a camera installed in a room with stable lighting
  • One face in the frame, ensuring direct eye contact with the camera
  • Image type for detection and identification is "BORDER" (according to NIST), which corresponds to QAA totalscore >= 51%

Configuration files:

  • access_control_system_one_face_q1.xml
  • access_control_system_one_face_q2.xml
  • access_control_system_one_face_q3.xml

How to configure

  1. Open the ./cfg/image-api.values.yaml file in the Image API distribution, find the capturer configuration object (path to the object: processing.services.<service name>.configs.capturer) and enter the same values for the fields of the capturer object in each detection service: face-detector-face-fitter, face-detector-liveness-estimator, face-detector-template-extractor.

    Example

    configs:
      capturer:
        name: access_control_system_one_face_q2.xml # name of the Face SDK configuration file
  2. Open the ./cfg/platform.values.yaml file in OMNI Platform distribution, find the capturer configuration object (path to the object: generic.capturer) and enter the values for the fields of the capturer object.

    Example

    generic:
      capturer:
        name: access_control_system_one_face_q1.xml # name of the Face SDK configuration file
  3. After editing the files, save them and update OMNI Platform in the cluster using the commands:

    ./cli.sh image-api install
    ./cli.sh platform install

Benchmark results

Capturer configuration file              Time to detect one frame (ms)    Detection accuracy (0 to 1)
access_control_system_one_face_q1.xml    70                               0.996
access_control_system_one_face_q2.xml    69                               0.986
access_control_system_one_face_q3.xml    95                               0.98

ACS (camera)

Application

Non-corporate access control systems (such as facial recognition payment in transportation and visitor tracking in gyms) implemented using cameras without the need for specialized biometric terminals. Our main focus is on fast and accurate detection and identification, ensuring smooth and efficient entry without queues or delays. Top priority is to eliminate identification errors and provide a seamless experience for all users.

Use case requirements

  • Face detection from frames taken within a crowd, even from side angles
  • Frames captured in an indoor environment with consistent lighting
  • Up to 5-8 faces in the frame. Only the face closest to the camera is identified
  • Image type for detection and identification is "WILD" (according to NIST), which corresponds to QAA totalscore >= 40%

Configuration files:

  • access_control_system_several_faces_q1.xml
  • access_control_system_several_faces_q2.xml

How to configure

  1. Open the ./cfg/image-api.values.yaml file in the Image API distribution, find the capturer configuration object (path to the object: processing.services.<service name>.configs.capturer) and enter the same values for the fields of the capturer object in each detection service: face-detector-face-fitter, face-detector-liveness-estimator, face-detector-template-extractor.

    Example

    configs:
      capturer:
        name: access_control_system_several_faces_q1.xml # name of the Face SDK configuration file
  2. Open the ./cfg/platform.values.yaml file in OMNI Platform distribution, find the capturer configuration object (path to the object: generic.capturer) and enter the values for the fields of the capturer object.

    Example

    generic:
      capturer:
        name: access_control_system_several_faces_q1.xml # name of the Face SDK configuration file
  3. After editing the files, save them and update OMNI Platform in the cluster using the commands:

    ./cli.sh image-api install
    ./cli.sh platform install

Benchmark results

Capturer configuration file                   Time to detect one frame (ms)    Detection accuracy (0 to 1)
access_control_system_several_faces_q1.xml    969                              0.946
access_control_system_several_faces_q2.xml    95                               0.936

Remote identification

Application

Remote selfie-based identification and authentication using the front camera of a mobile phone in banking systems, exchanges, municipal portals, etc. Top priority is to ensure accurate identification and prevent any errors.

Use case requirements

  • One face in the frame, minimal likelihood of side views
  • Images of faces are of significant size (up to 80% of the frame area)
  • High noise tolerance in photos due to poor lighting or low-quality cameras
  • Image type for detection and identification is "BORDER" or "MUGSHOT" (according to NIST), which corresponds to QAA totalscore >= 51%

Configuration files:

  • remote_identification_q1.xml
  • remote_identification_q2.xml

How to configure

  1. Open the ./cfg/image-api.values.yaml file in the Image API distribution, find the capturer configuration object (path to the object: processing.services.<service name>.configs.capturer) and enter the same values for the fields of the capturer object in each detection service: face-detector-face-fitter, face-detector-liveness-estimator, face-detector-template-extractor.

    Example

    configs:
      capturer:
        name: remote_identification_q1.xml # name of the Face SDK configuration file
  2. Open the ./cfg/platform.values.yaml file in OMNI Platform distribution, find the capturer configuration object (path to the object: generic.capturer) and enter the values for the fields of the capturer object.

    Example

    generic:
      capturer:
        name: remote_identification_q1.xml # name of the Face SDK configuration file
  3. After editing the files, save them and update OMNI Platform in the cluster using the commands:

    ./cli.sh image-api install
    ./cli.sh platform install

Benchmark results

Capturer configuration file     Time to detect one frame (ms)    Detection accuracy (0 to 1)
remote_identification_q1.xml    1040                             0.977
remote_identification_q2.xml    75                               0.97