Version: 1.13.0

2. Deployment

2.1 OMNI Platform Update

If you already use OMNI Platform on-premise and have received a new version, follow the instructions and commands below to update OMNI Platform; otherwise, skip this section.

Attention: To maintain access to the data in the database when updating the Platform from version 1.11.0 (or lower) to version 1.12.0 (or higher), follow the steps below:

  1. Download and unpack the new Platform version.
  2. Move your existing DBMS access credentials to the configuration file of the new Platform version in the on_premise directory. To do this, open the ./setup/settings.env configuration file of the old Platform version and transfer the values of the POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB parameters to the storage-engine-db block of the ./deploy/services_db_config.json configuration file of the new Platform version.
  3. Using the values from this file, create users and databases for each block (except storage-engine-db) in the DBMS of the old Platform version. You can use the ./setup/db-create.sh script supplied with the new version of the Platform.

An example of using the script is given below:

$ db-create.sh <POSTGRES_USER> <POSTGRES_USER_BLOCK> <POSTGRES_PASSWORD_BLOCK> <POSTGRES_DB_BLOCK>

where <POSTGRES_USER> is the database user created in the old version of the Platform (specified in the ./setup/settings.env file), and <POSTGRES_USER_BLOCK>, <POSTGRES_PASSWORD_BLOCK> and <POSTGRES_DB_BLOCK> are the user name, password and database name for the corresponding service block.
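For illustration only, an invocation for a hypothetical matcher block could look like this (the user, password and database names below are placeholders, not values from your installation):

$ ./setup/db-create.sh platform_admin matcher_user matcher_password matcher_db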

Deleting the old Platform version

  1. Move to the on_premise folder and stop OMNI Platform:
$ ./setup/uninstall-platform.sh
  2. Make sure that all service containers are stopped:
$ watch 'kubectl get pods'

Service statuses should change from Running to Terminating. As a result, all services disappear from the console output.

  3. Delete the deployed Kubernetes cluster:
$ sudo kubeadm reset
  4. Delete the auxiliary files of the Kubernetes cluster:
$ sudo rm -rf ~/.kube/
  5. Reset the IPVS tables of your system:
$ sudo ipvsadm --clear
note

When the Platform is deleted, the entire database is saved in the /kv/pgdata directory. For further use of this database you need to specify the same authorization data and database name when installing the new version of the Platform. Otherwise, delete the /kv/pgdata folder (sudo rm -rf /kv/pgdata) to create a new database during Platform deployment.

note

After updating OMNI Platform, you will need to set up the platform settings again, unless you saved them before updating. If you did save them, it is enough to replace the files containing default settings with the files containing the saved values (provided that the fields in the old and new files match completely). The settings are the values of the environment variables specified in the ./setup/settings.env file (see section 2.2.3 for details), as well as the scalability settings specified in the ./deploy/values.yaml file (see section 2.4.4 for details).
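One simple way to save the settings before updating is to copy both files to a backup location (the destination path below is arbitrary):

$ mkdir -p ~/platform-settings-backup
$ cp ./setup/settings.env ./deploy/values.yaml ~/platform-settings-backup/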

2.2 Upload Images and Create a Kubernetes Cluster

2.2.1 Preparation

Download and extract the OMNI Platform distribution kit to the machine used for installation. You will receive a link to the distribution kit by email.

Next, move the face_sdk.lic license file (attached to the email) to the on_premise folder.

Open a system console, move to the on_premise directory of the distribution kit and check the folder contents by running the command:

$ find -maxdepth 1

As a result, files and folders from the distribution kit will be shown in the console:

./deploy
./face_sdk.lic
./ingress-nginx-4.2.0.tgz
./OMNIAgent_Linux_x64.run
./OMNIAgent_Windows_x64.exe
./integration_tests
./kube-flannel.yml
./license_server
./nvidia-device-plugin-0.12.2.tgz
./pdf_docs
./platform_images.tar.gz
./setup
./upload_script

Contents of the distribution kit:

  • ./pdf_docs/administrator_guide.pdf is an administrator guide to deploy, test and debug OMNI Platform;
  • ./pdf_docs/user_guide.pdf is a user guide to start up and use OMNI Platform;
  • ./pdf_docs/integration_api.pdf is OMNI Platform API reference;
  • ./pdf_docs/release-notes.pdf is release notes with information about new features, bug fixes and improvements;
  • ./pdf_docs/agent_user_guide.pdf is a separate user guide for OMNI Agent;
  • ./OMNIAgent_Linux_x64.run and ./OMNIAgent_Windows_x64.exe are OMNI Agent installation files for Linux and Windows, respectively;
  • ./license_server are files required for launching a license server for OMNI Platform licensing;
  • ./integration_tests are scripts for OMNI Platform automatic testing after deployment;
  • ./setup/settings.env is a configuration file of OMNI Platform instance;
  • ./upload_script is a folder that contains an image upload script for creating profiles on the Platform from the dataset images.

Further commands are to be executed in the system console from the on_premise directory.

2.2.2 Upload Images

Upload images from the archive to the local registry:

$ sudo docker load -i platform_images.tar.gz

Uploading can take about 5 minutes.
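To verify that the images have been loaded into the local registry, you can list the available images:

$ sudo docker images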

2.2.3 Enter Environment Variables

Open a configuration file ./setup/settings.env in a text editor and set values for the following variables:

  • MASTER_NODE_IP_ADDRESS: IP address of the machine which OMNI Platform is deployed to. You can get IP address from your system administrator;
  • DOMAIN: root domain name. After deployment, access to OMNI Platform web interface and API is provided via http://platform.$DOMAIN. The IP address for the domain name platform.$DOMAIN should be configured on the DNS server (see section 2.4.2 of this guide for details);
  • RABBIT_USER, RABBIT_PASSWORD: user name and password to get access to a message broker, used for internal interaction of OMNI Platform services. Set an arbitrary name that consists of Latin letters without spaces and a password that consists of Latin letters and numbers without spaces;
  • POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB: database connection settings. At first deployment set an arbitrary user name and database name that consist of Latin letters without spaces, and generate a password that consists of Latin letters and numbers. The database will be created automatically;
  • SERVICE_KEY: a private key used for internal interaction of OMNI Platform services. Generate an arbitrary string of Latin letters and numbers without spaces;
  • LIC_KEY: a license key. This key is usually sent in the letter accompanying the distribution kit. To get the key, contact your sales manager;
  • PLATFORM_ADMIN_EMAIL, PLATFORM_ADMIN_PASSWORD: credentials used to get access to OMNI Platform administrator web interface. At first deployment the system automatically creates a user with administrator privileges. Enter a valid email and generate a password that consists of Latin letters and numbers, at least 8 characters long;
  • PLATFORM_DEFAULT_EMAIL, PLATFORM_DEFAULT_PASSWORD: user credentials for accessing OMNI Platform web interface. The user will be created automatically at first deployment. Enter a valid email and generate a password consisting of Latin letters and numbers, at least 8 characters long;
  • EMAIL_HOST, EMAIL_PORT, EMAIL_HOST_USER, EMAIL_HOST_PASSWORD: SMTP server access settings. SMTP server is used to send emails for password reset, notifications, etc. To disable email sending, leave these fields blank. To obtain SMTP server access parameters, contact your network administrator;
  • EMAIL_USE_SSL: The value enables/disables the SSL protocol, set it to true/false, respectively. If you are not using an SMTP server, set the value to false;
  • EMAIL_FROM: the value sent in FROM header and displayed as an email sender. The format requirements for this field may vary depending on SMTP server. An example of a FROM field value - "Bob Example" <bob@example.org>;
  • QUERY_LIMIT: limits the number of elements returned by API requests that fetch system objects. Increasing this limit is not recommended: the API request run time may increase several times and the system may degrade;
  • INDEX_UPDATE_PERIOD: the time, in seconds, within which a newly added profile appears in the search index. The default value is 60 seconds. To speed up index updating, decrease this value;
  • ENABLE_PROFILE_AUTOGENERATION: enables auto-generation of profiles for incoming activities from the agent. Please note that enabling this option increases the consumption of license resources (database size). If this function is not required, leave the field empty; otherwise set the value to 1;
  • USE_CUDA: enables the use of CUDA cores in the image processing services. 0 - GPU disabled, 1 - GPU enabled for the processing service.

Ensure that you saved the file after editing.
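For reference, a filled-in settings.env might look like the following fragment. All values below are illustrative placeholders, not defaults from the distribution kit, and the remaining variables are omitted:

MASTER_NODE_IP_ADDRESS=192.0.2.10
DOMAIN=example.com
RABBIT_USER=rabbituser
RABBIT_PASSWORD=rabbitpass123
POSTGRES_USER=platform
POSTGRES_PASSWORD=postgrespass123
POSTGRES_DB=platform_db
SERVICE_KEY=a1b2c3d4e5key
PLATFORM_ADMIN_EMAIL=admin@example.com
PLATFORM_ADMIN_PASSWORD=adminPass123
PLATFORM_DEFAULT_EMAIL=user@example.com
PLATFORM_DEFAULT_PASSWORD=userPass123
EMAIL_USE_SSL=false
USE_CUDA=0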

2.2.4 Extended Configuration

The distribution kit contains the configuration file ./deploy/values.yaml. In the env section of this file you can change the configuration of system elements.

note

The parameters have default values. It is recommended to change them only if necessary.

Parameters of values.yaml configuration file

  • ACTIVITY_TTL: the time activities are stored in the database, in seconds. For example, 2592000 seconds (30 days);
  • SAMPLE_TTL: the time samples are stored in the database, in seconds. For example, 2592000 seconds (30 days);
  • FACE_SDK_PARAMETERS: detector parameters:
    • score_threshold: detection confidence threshold, from 0 to 1. The higher the threshold, the more confident the detector must be to report a face;
    • min_size, max_size: minimum and maximum face size for detection, in pixels. Note: add these parameters to the configuration file manually (an illustrative fragment is shown below the list).
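As an illustration only, such a configuration could look like the fragment below. The exact layout of the env section may differ in your distribution, and the values are placeholders, not recommended settings:

env:
  ACTIVITY_TTL: "2592000"
  SAMPLE_TTL: "2592000"
  FACE_SDK_PARAMETERS:
    score_threshold: 0.5
    min_size: 60
    max_size: 1000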

2.2.5 Install and Configure a Cluster

note

If you already have a deployed cluster, go to section 2.2.7.

To create and configure the cluster, run the following command:

$ ./setup/init-cluster.sh

This command initializes a node for cluster deployment, creates secrets and the necessary folders, and installs the ingress controller and nvidia-device-plugin if a graphics card is enabled.

2.2.6 Cluster Health Check

After initializing the master node, make sure that all nodes are ready for operation and have the Ready status. You can check this by running the command below:

$ kubectl get nodes

As a result, the following output will be displayed in the terminal:

NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   11d   v1.23.8

To check all cluster components, run the following command:

$ kubectl get all --all-namespaces

2.2.7 Using Local Database

note

Local database is used by default.

To start using the local database, make sure that the server where the database will be deployed contains the /kv/pgdata directory; otherwise, create this directory using the commands below:

$ sudo mkdir -p /kv/pgdata
$ sudo chmod -R 777 /kv/pgdata
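To verify that the directory has been created with the expected permissions, you can run:

$ ls -ld /kv/pgdata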

2.3 Configure Licensing

note

If you have updated OMNI Platform using the commands from section 2.1, proceed directly to section 2.4.

2.3.1 Trial activation

Please note that:

  • an Internet connection is required;
  • running OMNI Platform on a virtual machine is not allowed.

The trial period is activated the first time you launch OMNI Platform. To activate the trial period, go to section 2.4.

2.3.2 Install a License Server

Before installation, open the ./setup/settings.env file and set the IP address of the machine, on which the license server will be installed, in the LICENSE_SERVER_IP_ADDRESS variable.

If the license server is running on the same machine where the cluster is deployed, LICENSE_SERVER_IP_ADDRESS will be the same as MASTER_NODE_IP_ADDRESS.

Run the command to install and start the license server:

$ ./setup/install-lic-server.sh

Check that the license server is in the running status:

$ ./setup/status-license-server.sh

Console output example:

floatingserver.service - Floating License Server
Loaded: loaded (/etc/systemd/system/floatingserver.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-12-20 12:25:54 +05; 1min 48s ago

To check that the license server is available, open http://<LICENSE_SERVER_IP_ADDRESS>:8090 in your web browser. You should be redirected to the login form.
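You can also check availability from the console, for example with curl (substitute the actual license server address):

$ curl -I http://<LICENSE_SERVER_IP_ADDRESS>:8090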

2.3.3 Online License Activation

If the machine, on which the license server is installed, does not have an Internet connection, skip this section and go to section 2.3.4.

Before activation, make sure that the LIC_KEY variable (from file ./setup/settings.env) contains the license key.

Run the license activation command:

$ ./setup/activate-lic-server.sh

When the license is successfully activated, the console will return the following result:

[2022-12-20 12:25:53+05:00] INF Activating license key...
[2022-12-20 12:25:54+05:00] INF License activated successfully!

After activation is successfully completed, go to section 2.4.

2.3.4 Offline License Activation

Before activation, make sure that the LIC_KEY variable (from file ./setup/settings.env) contains the license key.

Run the command below to generate an offline license request:

$ ./setup/activate-lic-server.sh --generate-offline

As a result, the request-offline.license file should appear in the on_premise directory.

Send the generated request-offline.license request file to support-platform@3divi.com. The license file will be submitted in the response email.

Copy the received license file to the on_premise folder.

Open the configuration file ./setup/settings.env in a text editor and set the OFFLINE_LICENSE_FILE variable to the name of the license file, including its extension if it has one.
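For example, if the received license file is named 3divi_offline.lic (a hypothetical name), the variable would look like this:

OFFLINE_LICENSE_FILE=3divi_offline.lic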

Run the license activation command:

$ ./setup/activate-lic-server.sh --activate-offline

When the license is successfully activated, the console will return the following result:

[2022-09-08 01:30:36+05:00] INF Offline activating license key...
[2022-09-08 01:30:36+05:00] INF License activated successfully!

2.4. OMNI Platform Deployment

2.4.1 Launch Deployment

  1. To deploy OMNI Platform in the cluster, run the following script:
$ ./setup/deploy.sh
  2. To monitor the deployment progress, open another terminal tab and enter the following command:
$ watch 'kubectl get pods'

OMNI Platform is running when all pods have the Running status.

2.4.2 Configure DNS Server

To provide access to OMNI Platform, the DNS server in your network should contain a record stating that the <DOMAIN> domain is available at <MASTER_NODE_IP_ADDRESS>. The variable values can be obtained from the ./setup/settings.env file configured in section 2.2.3. To complete this configuration, contact your network administrator.

For testing, you can map the IP address to the domain in the /etc/hosts file on Linux or C:\Windows\System32\drivers\etc\hosts on Windows.

To do this, add a new line of the form <MASTER_NODE_IP_ADDRESS> <DOMAIN> at the end of this file, substitute the values of the corresponding variables, and save the file. Note that you need administrator privileges to edit the hosts file.
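For example, with MASTER_NODE_IP_ADDRESS set to 192.0.2.10 and DOMAIN set to example.com (both values are placeholders), the added line would be:

192.0.2.10 example.com

Depending on your setup, you may also need a similar entry for platform.example.com so that the web interface address from section 2.2.3 resolves.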

To use OMNI Platform on the machine where it was deployed, you can use the script below. It automatically adds the necessary entry to the /etc/hosts file.

$ ./setup/add-dns.sh

2.4.3 Description of Deployed System

To get the status of OMNI Platform services, run the following command:

$ kubectl get pods

As a result, the console will display a list of services, their statuses, the number of restarts, and the service age.

The example of console output:

NAME                                                        READY          STATUS         RESTARTS         AGE
activity-matcher-dep-6fdc8bfbd5-pl8ql 1/1 Running 2 (24h ago) 24h
age-estimator-dep-544cdfd7c-4khhh 1/1 Running 0 24h
agent-sync-dep-854474ddc9-ntd2w 1/1 Running 2 (24h ago) 24h
backend-dep-74c5d86d77-wldcn 1/1 Running 2 (24h ago) 24h
body-detector-dep-788dc59547-5jqtf 1/1 Running 0 24h
broker-dep-f6dfdf55b-kl76k 1/1 Running 0 24h
cache-dep-7dbc644bcf-6qpzb 1/1 Running 0 24h
db-dep-cf96d8d4c-btmbf 1/1 Running 0 24h
emotion-estimator-dep-764c8d8669-wzssn 1/1 Running 0 24h
face-detector-face-fitter-dep-8585d54d67-j2k88 1/1 Running 0 24h
face-detector-liveness-estimator-dep-66c8789ddb-x4h95 2/2 Running 0 24h
face-detector-template-extractor-dep-6b844fdfd9-tjprf 1/1 Running 0 24h
gateway-dep-67c7d6f4c7-5lpsb 1/1 Running 0 24h
gender-estimator-dep-7b7d859c6f-n9f76 1/1 Running 0 24h
image-api-dep-6dc7f868f6-gz56v 1/1 Running 0 24h
licensing-dep-967cc7b65-wg6jq 1/1 Running 0 24h
mask-estimator-dep-7db6779bc5-nnwt5 1/1 Running 0 24h
matcher-dep-696d66b65b-fqn9z 1/1 Running 2 (24h ago) 24h
processing-dep-f7d7867f6-25tjl 1/1 Running 0 24h
quality-assessment-estimator-dep-76fcfdf6cf-zjldq 1/1 Running 0 24h
quality-dep-86cc5488d9-22tkn 1/1 Running 0 24h
redis-dep-5d8cd4d657-7vw8c 1/1 Running 0 24h
securos-integration-service-dep-77f98b497d-66q 1/1 Running 0 24h
verify-matcher-dep-85ddfdfd4f-7t7br 1/1 Running 0 24h

Overview of the services is given below:

  • activity-matcher-dep is the service used to search for people by activities;
  • age-estimator-dep is the service used to estimate a person’s age from a face image;
  • agent-sync-dep is the service responsible for synchronization of profile data with OMNI Agents;
  • backend-dep is the main container of OMNI Platform, responsible for most of the API operation;
  • body-detector-dep is the service designed to detect bodies in an image;
  • broker-dep is RabbitMQ service used to work with asynchronous task queue;
  • cache-dep is Memcached service used for data caching;
  • db-dep is an instance of PostgreSQL database that stores all information of OMNI Platform;
  • emotion-estimator-dep is the service that estimates a person's emotions from a face image;
  • face-detector-face-fitter-dep is the service used to determine the anthropometric points of the face and the head rotation angles;
  • face-detector-liveness-estimator-dep is the service used to detect a face and determine whether the face in the image is real or fake;
  • face-detector-template-extractor-dep is the service used to detect faces and extract biometric templates;
  • gateway-dep is the nginx service, responsible for access to OMNI Platform and for the operation of OMNI Platform web interface;
  • gender-estimator-dep is the service used to estimate a person’s gender from a face image;
  • image-api-dep is ImageAPI service available at URL /image-api/ (deprecated, used for backward compatibility);
  • licensing-dep is the service that limits the Platform capabilities according to the license parameters;
  • mask-estimator-dep is the service that detects if a person in the image is wearing a medical mask;
  • matcher-dep is the service responsible for searching a person in the database;
  • processing-dep is the service used to accumulate the results of the work of handler services (age-estimator-dep, emotion-estimator-dep, gender-estimator-dep, face-detector-face-fitter-dep, mask-estimator-dep, face-detector-liveness-estimator-dep);
  • quality-assessment-estimator-dep is the service designed to assess the quality of the face image;
  • quality-dep is the service responsible for calculating the image quality (deprecated, used for backward compatibility);
  • redis-dep is Redis service used to work with WebSockets;
  • securos-integration-service-dep is the service responsible for integration with SecurOS;
  • verify-matcher-dep is the service that compares two face images to determine if they belong to the same person.

2.4.4 Scalability

When the load increases, the following services can be scaled manually to stabilize OMNI Platform operation:

  • processing-dep
  • quality-dep
  • face-detector-liveness-estimator-dep
  • backend-dep
  • gateway-dep
  • ingress-nginx-controller

These services are described in section 2.4.3.

To scale the service, run the command below:

$ kubectl scale deployment <SERVICE_NAME> --replicas <COUNT>

where SERVICE_NAME is a service name (for example, gateway-dep), and COUNT is a number of service instances.
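For example, to run four instances of the gateway service (the replica count here is arbitrary):

$ kubectl scale deployment gateway-dep --replicas 4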

note

To scale the ingress-nginx-controller service, add the "-n ingress-nginx" argument to the end of the command.
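For example (the replica count is arbitrary):

$ kubectl scale deployment ingress-nginx-controller --replicas 4 -n ingress-nginx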

note

When using GPU acceleration, the processing-dep service maintains only one instance, which means it cannot be scaled.

For successful scaling follow the information below:

  1. To process a greater number of concurrent requests, you should scale backend-dep, ingress-nginx-controller and gateway-dep. The number of service instances is specified according to the formula: <REQUESTS>/<CPU_COUNT>, where <REQUESTS> is the desired number of concurrent requests, and <CPU_COUNT> is the number of logical CPU cores.

  2. If most of the requests are related to image processing, scale processing-dep and quality-dep services. The number of service instances should not exceed the number of physical CPU cores.

For example:

To sustain a load of A requests per second for image processing on a server with B physical CPU cores and C logical CPU cores, scale the services as follows (a worked example with concrete numbers is given after the list):

  • processing-dep - min(A, B) instances
  • quality-dep - min(A, B) instances
  • face-detector-liveness-estimator-dep - min(A, B) instances
  • backend-dep - A/C instances
  • gateway-dep - A/C instances
  • ingress-nginx-controller - A/C instances
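For instance, assuming A = 16 requests per second, B = 8 physical cores and C = 16 logical cores (illustrative numbers only), the image processing services get min(16, 8) = 8 instances each, while backend-dep, gateway-dep and ingress-nginx-controller get 16/16 = 1 instance each. Applied with the scaling command shown above (assuming CPU-only processing; see the note about GPU):

$ kubectl scale deployment processing-dep --replicas 8
$ kubectl scale deployment quality-dep --replicas 8
$ kubectl scale deployment face-detector-liveness-estimator-dep --replicas 8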

To save the scaling settings, open the ./deploy/values.yaml file, find the service_replicas module, and set the selected replica counts for the corresponding services.

On subsequent installations of the Platform, the services will be automatically scaled to the specified values.
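A hypothetical sketch of such a fragment is shown below; the exact key names under service_replicas may differ in your copy of values.yaml, and the counts are placeholders:

service_replicas:
  backend-dep: 2
  gateway-dep: 2
  processing-dep: 4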

note

To save the scaling parameters of the face-detector-liveness-estimator-dep service, use the configuration file ./deploy/services.json.

2.4.5 Database Backup and Restore

note

Not supported for external databases.

To back up the database, run the command below:

$ ./setup/db-backup.sh <dump_path>

To restore the database, run the following command:

$ ./setup/db-restore.sh <dump_path>

where <dump_path> is the path to the database dump file.
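For example, to back up the database to a file in the home directory and later restore from the same file (the path is arbitrary):

$ ./setup/db-backup.sh ~/backups/omni-platform.dump
$ ./setup/db-restore.sh ~/backups/omni-platform.dump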

info

To restore data from a backup, you need a deployed Platform instance of the same version as the one that was backed up, and you must use the ./setup/db-restore.sh script supplied with that version.