2. Deployment Instructions
2.1 Preparation: Upload Images and Create a Kubernetes Cluster
2.1.1 Preparation
Download and extract the OMNI Platform distribution kit to the machine used for installation. To do this, use the command below:
curl --output platform.zip <link to the distribution kit>
In the curl request, specify the link to the platform distribution kit (zip file). A link to the folder that contains the distribution kit and the accompanying documentation in PDF format will be provided in the email.
Next, move the face_sdk.lic license file (attached to the email) to the setup folder.
Contents of the distribution kit:
- ./cli.sh: entry point to run the commands.
- ./cfg: folder with OMNI Platform configuration files.
Further commands are to be executed in the system console from the setup directory.
2.1.2 Upload Images
First, upload OMNI Platform images from the archive to the local registry:
./cli.sh generic load-images
Then, upload the infrastructure images from the archive to the local registry:
./cli.sh smc load-images
Uploading can take about 5 minutes.
2.1.3 Configuration
2.1.3.1 Basic configuration
Enter Environment Variables
Open the configuration files listed below in a text editor, set values for the variables, and save the changes.
In the YAML files, dots in variable names denote nesting.
Configuration File | Variables
./cfg/smc.settings.cfg |
./cfg/license-server.settings.cfg |
./cfg/platform.secrets.json |
./cfg/platform.values.yaml |
Docker Settings for GPU Usage (optional)
Docker Configuration
To set nvidia-container-runtime as the default low-level runtime, add the following lines to the configuration file located at /etc/docker/daemon.json:
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}Applying Configuration
To apply the configuration, restart the Docker service using the command below:
sudo systemctl restart docker
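For reference, if /etc/docker/daemon.json contained no other settings before this change, the complete file should look as follows (a sketch; if your file already contains other keys, merge the runtime lines into it instead):
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}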
2.1.3.2 Extended Configuration
Changing the configuration of OMNI Platform elements
The parameters below have default values. It is recommended to change them only if necessary.
Configuration File | Variables
./cfg/platform.values.yaml |
Changing facial recognition method
OMNI Platform uses facial recognition methods from Face SDK. A method refers to a version of the facial recognition model. The default method used in OMNI Platform is 12v1000. For faster operation, you can switch to the 12v100 method, but this will reduce recognition accuracy.
Changing the method during platform installation
At the stage of filling out the configuration files, open the ./cfg/platform.values.yaml file and replace the method in the generic.recognizer, backend.default_template_version, and processing.recognizer_methods fields. Next, open the ./cfg/image-api.values.yaml file and replace the method in the face-detector-template-extractor.configs.recognizer.name and verify-matcher.configs.recognizer.name fields.
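A sketch of the affected fields is shown below, assuming the 12v100 method is chosen. Surrounding keys are omitted, and the exact nesting and value format (for example, whether recognizer_methods takes a single value or a list) may differ in your files.
In ./cfg/platform.values.yaml:
generic:
  recognizer: 12v100
backend:
  default_template_version: 12v100
processing:
  recognizer_methods: 12v100
In ./cfg/image-api.values.yaml:
face-detector-template-extractor:
  configs:
    recognizer:
      name: 12v100
verify-matcher:
  configs:
    recognizer:
      name: 12v100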
Changing the method on a running platform
- Stop the platform
./cli.sh platform uninstall
- Stop image-api services
./cli.sh image-api uninstall
- Delete the platform database
./cli.sh platform db-reset
- Open the ./cfg/platform.values.yaml file and replace the method in the generic.recognizer, backend.default_template_version, and processing.recognizer_methods fields. Next, open the ./cfg/image-api.values.yaml file and replace the method in the face-detector-template-extractor.configs.recognizer.name and verify-matcher.configs.recognizer.name fields.
- Launch image-api services
./cli.sh image-api install
- Launch the platform
./cli.sh platform install
There is currently no option to switch the method for an OMNI Platform instance with an already created database. If you still decide to change the recognition method, you will need to recreate the database.
Setting the score threshold
The score parameter shows the degree of similarity of faces from 0 (0%) to 1 (100%). A high degree of similarity means that two biometric templates belong to the same person. The default threshold value is 0.85.
You can change the score threshold for OMNI Platform through the updateWorkspaceConfig API request, where two threshold values are specified as arguments: activityScoreThreshold (the score required to link an activity to the profile) and notificationScoreThreshold (the score required to create notifications for the profile). See Integrations for more details.
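The exact request format is described in the Integrations section. As a purely illustrative sketch, assuming a GraphQL-style API (the returned field here is hypothetical):
mutation {
  updateWorkspaceConfig(
    activityScoreThreshold: 0.85
    notificationScoreThreshold: 0.85
  ) {
    ok
  }
}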
Make sure the score values specified for OMNI Agent and OMNI Platform match. Otherwise, some of the activities generated from the transferred agent processes will not be linked to the corresponding profile, which means that notifications for such activities won't be received.
For example:
- score specified for OMNI Agent = 0.7
- score specified for OMNI Platform = 0.85
In this case, activities generated from agent processes with a score value in the range [0.7, 0.85) will not be attached to the corresponding profile, and notifications for them will also not appear.
To configure the score value for OMNI Agent, see section 5.5 in OMNI Agent User Guide.
Setting liveness
OMNI Platform contains three liveness modules:
- liveness-anti-spoofing
- quality-liveness-anti-spoofing
- face-detector-liveness-estimator
The liveness-anti-spoofing and face-detector-liveness-estimator modules use different liveness algorithms. In the quality-liveness-anti-spoofing module, the image quality is additionally estimated before calculating liveness; the quality threshold (default value 30) allows images of insufficient quality to be excluded from processing.
To change the module, specify the module name in the processing.services.face-detector-liveness-estimator.modul field of the ./cfg/image-api.values.yaml configuration file.
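A sketch of the corresponding fragment of ./cfg/image-api.values.yaml, with the nesting inferred from the field path above (the key names are reproduced as given; surrounding keys are omitted):
processing:
  services:
    face-detector-liveness-estimator:
      modul: quality-liveness-anti-spoofing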
2.1.4 Install and Configure a Cluster
If you already have a deployed cluster, move to section 2.3.1.
To create and configure the cluster, run the following commands:
./cli.sh platform db-create-mountpoint
./cli.sh smc install
./cli.sh platform install-secrets
These commands:
- create the database mount point;
- initialize the cluster;
- install the secrets.
To use GPU in a cluster, install NVIDIA device plugin:
./cli.sh smc nvidia install
2.1.5 Cluster Health Check
After initializing the master node, make sure that all nodes are ready for operation and have the Ready status. You can check this by running the command below:
kubectl get nodes
As a result, output similar to the following will be displayed in the terminal:
NAME STATUS ROLES AGE VERSION
master-node Ready control-plane,master 11d v1.23.8
To check all cluster components, run the following command:
kubectl get all --all-namespaces
2.2 Configure Licensing
2.2.1 Trial Activation
The trial period is activated the first time you launch OMNI Platform. Please note that:
- an Internet connection is required;
- running OMNI Platform on a virtual machine is not allowed.
2.2.2 Install a License Server
Before installation, open the license-server.settings.cfg file and set the IP address of the machine on which the license server will be installed in the license_server_address field. If license_server_address differs from the host address of the machine where the deployment is taking place, the installation will be performed via sshpass. Run the command below to install the license server:
./cli.sh license-server install
Run the command to start the license server:
./cli.sh license-server start
Check that the license server is in the running status:
./cli.sh license-server status
Console output example:
floatingserver.service - Floating License Server
Loaded: loaded (/etc/systemd/system/floatingserver.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-12-20 12:25:54 +05; 1min 48s ago
To check that the license server is available, open http://<license_server_address>:8090 in your web browser. As a result, you should be redirected to the login form.
2.2.3 Online License Activation
Before activation, make sure that the license_key field (in the ./cfg/license-server.settings.cfg file) contains the license key.
Run the license activation command:
./cli.sh license-server activate
When the license is successfully activated, the console will return the following result:
[2022-12-20 12:25:53+05:00] INF Activating license key...
[2022-12-20 12:25:54+05:00] INF License activated successfully!
2.2.4 Offline License Activation
Before activation, make sure that the license_key field (in the ./cfg/license-server.settings.cfg file) contains the license key.
For offline activation, set "1" in the enable_offline_activation field in the license-server.settings.cfg file.
Run the command below to generate an offline license request:
./cli.sh license-server generate-offline
As a result, the request-offline.dat file should appear in the setup directory.
Send the generated request-offline.dat request file to support-platform@3divi.com. The license file will be submitted in the response email.
Copy the received license file to the setup folder.
Open the license-server.settings.cfg configuration file in a text editor and fill in the license_offline_activation_file variable with the license file name (including its extension, if present, separated by a dot).
Run the license activation command:
./cli.sh license-server activate
When the license is successfully activated, the console will return the following result:
[2022-09-08 01:30:36+05:00] INF Offline activating license key...
[2022-09-08 01:30:36+05:00] INF License activated successfully!
2.2.5 Check the License Status After Activation
Run the command to get the license status:
./cli.sh license-server status-license
Example console output:
Activation Status: OK
License key: YOUR-LICENSE-KEY
Days Left To Expiration: 100
Compare the command output with your license information. Days Left To Expiration=14 means that the trial license was automatically activated. In this case, repeat the activation steps.
2.3 Deployment
2.3.1 Launch Deployment
For facial recognition, OMNI Platform uses the Image API services, which must be installed before OMNI Platform itself. Install them using the command below:
./cli.sh image-api install
To install OMNI Platform, run the following script:
./cli.sh platform install
To monitor the deployment progress, open another terminal tab and enter the following command:
watch 'kubectl get pods'
After startup, check the license status. If the returned data is incorrect, insert the license key into the license_key field (in ./cfg/license-server.settings.cfg) and the license-secret.key field (in ./cfg/platform.secrets.json), then perform all the steps from para. 7.5.
2.3.2 Configure DNS Server
To provide access to OMNI Platform, the DNS server on your network should contain a record stating that the domain is available at <external_ip_address>.
For testing, you can specify the IP address and domain in the /etc/hosts file on Linux or C:\Windows\System32\drivers\etc\hosts on Windows.
To do this, add a new line like <external_ip_address> <host> at the end of this file, substitute the corresponding values, and save the file. Note that you need administrator privileges to edit the hosts file.
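For example, with hypothetical values (substitute your own external IP address and domain):
192.168.1.10 platform.example.com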
To use OMNI Platform on the machine where it was deployed, you can use the script below. It will automatically add the necessary entry to the /etc/hosts file.
./cli.sh generic add-dns - <external_ip_address> <domain>
2.3.3 Description of Deployed System
To get the status of OMNI Platform services, run the command below:
kubectl get pods
As a result, the console will display a list of services, their statuses, the number of restarts, and the service age.
An example of console output:
NAME READY STATUS RESTARTS AGE
image-api-age-estimator-dep-c66c8f575-zb84s 1/1 Running 0 24h
image-api-body-detector-dep-5588d96ddc-lnrrx 1/1 Running 0 24h
image-api-emotion-estimator-dep-b6c947ff-mz7cw 1/1 Running 0 24h
image-api-face-detector-face-fitter-dep- 1/1 Running 0 24h
image-api-face-detector-liveness-estimator-dep- 2/2 Running 0 24h
image-api-face-detector-template-extractor-dep- 1/1 Running 0 24h
image-api-gender-estimator-dep-7948d8cf85-4hk2q 1/1 Running 0 24h
image-api-mask-estimator-dep-5ccfcc8cb9-sjnqz 1/1 Running 0 24h
image-api-quality-assessment-estimator-dep- 1/1 Running 0 24h
image-api-verify-matcher-dep-6b8f5b4d6f-jjqr4 1/1 Running 0 24h
platform-activity-matcher-dep-8697574c9b-xvh5z 2/2 Running 0 24h
platform-agent-sync-dep-899c9ddc8-7qcvj 1/1 Running 0 24h
platform-backend-dep-685675d648-ngcrj 1/1 Running 0 24h
platform-event-service-dep-5fcd66f999-24jjg 1/1 Running 0 24h
platform-gateway-dep-f4cd7db78-h86jz 1/1 Running 0 24h
platform-licensing-dep-84d78745d6-2flk2 1/1 Running 0 24h
platform-matcher-dep-6495447f45-5hdsp 1/1 Running 0 24h
platform-memcached-dep-64d9d7f6f7-b5bpd 1/1 Running 0 24h
platform-postgres-dep-67c5b75b84-fm8wr 1/1 Running 0 23h
platform-processing-dep-7fdf8699b5-682sn 1/1 Running 0 24h
platform-quality-dep-76995ff9c7-kwsz7 1/1 Running 0 24h
platform-rabbit-dep-69cd659f8c-sbnk2 1/1 Running 0 24h
platform-redis-dep-694df659f-9vhrh 1/1 Running 0 24h
An overview of the services is given below:
- platform-activity-matcher-dep is the service used to search for people by activities;
- image-api-age-estimator-dep is the service used to estimate a person’s age from a face image;
- platform-agent-sync-dep is the service responsible for synchronization of profile data with OMNI Agents;
- platform-backend-dep is the main container of OMNI Platform, responsible for most of the API operation;
- image-api-body-detector-dep is the service designed to detect bodies in an image;
- platform-rabbit-dep is the RabbitMQ service used to manage the asynchronous task queue;
- platform-memcached-dep is the Memcached service used for data caching;
- platform-postgres-dep is an instance of PostgreSQL database that stores all information of OMNI Platform;
- image-api-emotion-estimator-dep is the service that estimates a person's emotions from a face image;
- image-api-face-detector-face-fitter-dep is the service used to determine the anthropometric points of the face and the head rotation angles;
- image-api-face-detector-liveness-estimator-dep is the service that detects a face and determines whether the face in the image is real or fake;
- image-api-face-detector-template-extractor-dep is the service used to detect faces and extract biometric templates;
- platform-gateway-dep is the nginx service, responsible for access to OMNI Platform and for the operation of OMNI Platform web interface;
- image-api-gender-estimator-dep is the service used to estimate a person’s gender from a face image;
- platform-licensing-dep is the service that limits the Platform capabilities according to the license parameters;
- image-api-mask-estimator-dep is the service that detects if a person in the image is wearing a medical mask;
- platform-matcher-dep is the service responsible for searching a person in the database;
- platform-processing-dep is the service used to accumulate the results of the work of handler services (age-estimator-dep, emotion-estimator-dep, gender-estimator-dep, face-detector-face-fitter-dep, mask-estimator-dep, face-detector-liveness-estimator-dep);
- image-api-quality-assessment-estimator-dep is the service designed to assess the quality of the face image;
- platform-quality-dep is the service responsible for calculating the image quality (deprecated, used for backward compatibility);
- platform-redis-dep is Redis service used to work with WebSockets;
- image-api-verify-matcher-dep is the service that compares two face images to determine if they belong to the same person.
- platform-event-service-dep is the service used to handle events coming from OMNI Agent.
2.3.4 Scalability
When the load increases, the following services can be scaled manually to stabilize OMNI Platform operation:
- platform-processing-dep
- image-api-face-detector-liveness-estimator-dep
- image-api-age-estimator
- image-api-gender-estimator
- image-api-mask-estimator
- image-api-emotion-estimator
- image-api-face-detector-template-extractor-dep
To scale the service, run the command below:
kubectl scale deployment <SERVICE_NAME> --replicas <COUNT>
where SERVICE_NAME is a service name (for example, gateway-dep), and COUNT is a number of service instances.
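For example, to run four instances of the processing service (the replica count here is illustrative; choose it based on your load):
kubectl scale deployment platform-processing-dep --replicas 4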
When using GPU acceleration, the processing-dep service supports only one instance per GPU, which means that to scale it to N instances, you need N available video accelerators in the cluster. If you don't have multiple GPUs, you can increase utilization by changing the processing.workers setting in the ./cfg/platform.values.yaml file.
To sustain a load of A requests/sec for image processing on a server with B physical CPU cores, set the number of replicas of each of the specified services according to the formula min(A, B).
To save the scaling settings, open the ./cfg/platform.values.yaml and ./cfg/image-api.values.yaml files, find the replicas field in the corresponding service module, and set the new values.
At subsequent installations of the platform and image-api, the services will be automatically scaled to the specified values.
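As a rough sketch, assuming four replicas are needed (the exact location and nesting of the replicas field under each service module may differ in your files):
In ./cfg/platform.values.yaml:
processing:
  replicas: 4
In ./cfg/image-api.values.yaml:
face-detector-liveness-estimator:
  replicas: 4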
When the number of image search requests increases, you'll need to increase the number of replicas of the image-api-face-detector-template-extractor-dep service. You can also increase the number of threads that are used to extract the biometric template (For more details, see paragraph 7.6).
When the number of requests for detection, creating profiles, and creating samples increases, you'll need to increase the number of service replicas:
- platform-processing-dep
- image-api-face-detector-liveness-estimator-dep
- image-api-age-estimator
- image-api-gender-estimator
- image-api-mask-estimator
- image-api-emotion-estimator