Version: 1.5.0

Installation

Deployment preparation

Downloading

Download and unpack the BAF distribution kit onto the machine where you plan to install it. To do this, use the command below:

$ curl --output baf.zip <distribution_link>

In the curl request, specify the link to the BAF distribution kit (a zip file). A link to the folder that contains the distribution kit and the PDF documentation is sent to you by email.
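
If you need a sketch of the unpacking step, a minimal example assuming the downloaded archive is named baf.zip and is extracted into the setup directory referenced below (the directory name is an assumption):

$ unzip baf.zip -d setup
$ cd setup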

Next, move the license file face_sdk.lic (the file is attached to the email) to the setup folder.

Contents of the distribution kit:

  • ./cli.sh: entry point to run the commands
  • ./cfg: folder with configuration files

Run all further commands in the system console from the setup directory.

Software installation

To install Docker, Kubernetes and Helm on Ubuntu, use the script supplied with the distribution kit (an Internet connection is required).

$ ./cli.sh smc package install

Uploading images

First, upload the product images from the archive to the local registry:

$ ./cli.sh generic load-images

Then, upload the infrastructure images from the archive to the local registry:

$ ./cli.sh smc load-images

Configuration

Basic configuration

Enter environment variables

Open the configuration files below in a text editor, set values for the variables, and save your changes. An illustrative snippet is given after the list.

Configuration files and their variables:
./cfg/smc.settings.cfg
  • apiserver_advertise_address: address of the kube-apiserver. In most cases this is the machine's internal IP address.
  • external_ip_address: address for the ingress-controller. Here you need to specify the machine's external IP address.
./cfg/license-server.settings.cfg
  • license_key: this key is usually sent in the email accompanying the distribution kit. To get the key, contact your sales manager. This key is not required for the trial period.
  • license_server_address: address of the license server.
./cfg/platform.secrets.json
  • license-secret: set the license key here and replace license_server_address with the address of the license server.
  • docker-registry: values to specify if an external docker registry is used.
  • rabbit-secret: user name and password for access to the message broker used for internal interaction of OMNI Platform services. Set an arbitrary name that consists of Latin letters without spaces and a password that consists of Latin letters and numbers without spaces.
  • postgres-root-credentials: user name and password for the root user in the database.
  • platform-service-key: secret key required for internal communication between OMNI Platform services.
  • platform-user-secret: user credentials (email address and password) used for access to OMNI Platform. Users will be created automatically on the first deployment. The specified email address must include numbers, Latin letters and symbols.
  • platform-email-secret: SMTP server settings. To disable sending emails, leave these fields blank.
./cfg/platform.values.yaml
  • backend.query_limit: this value limits the number of elements returned in API requests that get system objects. Increasing this limit is not recommended, as API request runtime may increase several times and the system may degrade.
  • backend.index_update_period: the time, in seconds, after which an added profile appears in the search index. The default value is 60 seconds. To speed up index updating, decrease this value.
  • backend.enable_profile_autogeneration: auto-generation of profiles for incoming activities from the agent. Please note that enabling this option will increase the consumption of license resources (database size). To enable this function, set the value to 1.
  • ingress.rules.gateway.host: domain name or ip address used in ingress to route requests to Kubernetes services.
  • postgres.enable: disable if using your own database server. You need to change the values for postgres-root-credentials in the ./cfg/platform.secrets.json file and the values for postgres.host and postgres.port in the current file.
./cfg/baf.secrets.json
  • baf-user-secret: credentials (email address and password) that will be used to access BAF. Please note that the password must be at least 6 characters long and must consist of Latin letters and numbers, including at least one capital letter and the special character “!”.
  • dvs-token: public and secret tokens for DVS.
  • platform-token: token for connecting to OMNI Platform.
  • baf-postgres: data for connecting to the database.
  • ATTENTION! Fields dvs-token and platform-token are filled in during installation.
./cfg/baf.values.yaml
  • baf.dvs.url: URL of the deployed DVS. Attention! If DVS is not installed, leave this value empty.
  • lrs.enabled: the switch for working with LRS. Set 0 if you plan to work with documents.
  • ingress.rules.gateway.host: domain used in ingress to route requests to Kubernetes services for BAF.
./cfg/platform-ui.values.yaml
  • ingress.host: domain name, used in ingress to route requests to Kubernetes services. The value must match the value of ingress.rules.gateway.host in the ./cfg/platform.values.yaml file.
./cfg/lrs.secrets.json
  • lrs-tokens: access token and encryption token.
  • lrs-postgres: data for connecting to the database.
  • lrs-minio: data for connecting to object storage.
  • ATTENTION! The lrs-tokens fields are filled in during installation.
./cfg/lrs.values.yaml
  • minio.data_retention_days: data retention period in days in object storage (Video, reference frames, templates).
  • minio.enable: disable if using your own minio. You need to change the values for lrs-minio in the ./cfg/lrs.secrets.json file and the values for minio.host, minio.port and minio.secure (1 for https, 0 for http) in the current file.
  • lrs.obtain_ref_frame: disable if you don't need to compare frames from the video.
./cfg/stunner.secrets.json
  • stunner-auth-secret.type: credentials type. Should always be ephemeral.
  • stunner-auth-secret.secret: secret that will be used to generate TURN connection credentials.
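
For illustration, a hypothetical fragment of ./cfg/smc.settings.cfg with both variables filled in (the key=value syntax and the addresses below are assumptions; substitute your own values):

apiserver_advertise_address=192.168.1.10
external_ip_address=203.0.113.10
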
Docker settings for GPU usage (optional)

Docker configuration

To set nvidia-container-runtime as the default low-level runtime, add the following lines to the configuration file located at /etc/docker/daemon.json:

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

Applying configuration

To apply the configuration, restart the Docker service using the command below:

$ sudo systemctl restart docker

Check that the docker service is running successfully:

$ sudo systemctl status docker

Extended configuration

GPU settings

To enable GPU in BAF, edit the ./cfg/image-api.values.yaml file. Set the variable processing.services.face-detector-template-extractor.configs.recognizer.params.use_cuda to 1.


Then set processing.services.face-detector-template-extractor.resources.limits.gpu to 1. An illustrative YAML sketch is shown below.

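The exact layout of ./cfg/image-api.values.yaml is not reproduced in this guide; a hypothetical YAML fragment matching the two parameter paths above (the nesting shown is an assumption):

processing:
  services:
    face-detector-template-extractor:
      configs:
        recognizer:
          params:
            use_cuda: 1   # enable CUDA for the template extractor
      resources:
        limits:
          gpu: 1          # allocate one GPU to the service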

Install and configure a cluster

note

If you already have a deployed cluster, proceed to the Launch deployment section.

To create and configure the cluster, run the following commands:

$ ./cli.sh smc install 
$ ./cli.sh platform db-create-mountpoint
$ ./cli.sh platform install-secrets

These commands:

  • Initialize a cluster.
  • Create the database mount point.
  • Install the secrets.

To use GPU in a cluster, install NVIDIA device plugin:

$ ./cli.sh smc nvidia install

Cluster health check

After initializing the master node, make sure that all nodes are ready for operation and have the Ready status. You can check this by running the command below:

$ kubectl get nodes

As a result, the following output will be displayed in the terminal:

NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   11d   v1.23.8

To check all cluster components, run the following command:

$ kubectl get all --all-namespaces

Configure licensing

There are three license activation options: trial period activation, online license activation, and offline license activation.

Install and run a license server

Before installation, open the license-server.settings.cfg file and set the IP address of the machine on which the license server will be installed in the license_server_address field.

Run the command below to install the license server. If license_server_address differs from the host address of the machine where the deployment is taking place, the license server will be installed on that remote host via sshpass.

$ ./cli.sh license-server install

Check that the license server is in the Running status:

$ ./cli.sh license-server status

Console output example:

floatingserver.service - Floating License Server
Loaded: loaded (/etc/systemd/system/floatingserver.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-12-20 12:25:54 +05; 1min 48s ago

To check that the license server is available, open http://<license_server_address>:8090 in your web browser. You should be redirected to the login form.

Trial period activation

Please note that:

  • An Internet connection is required.
  • Running on a virtual machine is not allowed.

The trial period is activated the first time you launch BAF.

Online license activation

Before activation, make sure that the license_key field (in the ./cfg/license-server.settings.cfg file) contains the license key.

Run the license activation command:

$ ./cli.sh license-server activate

When the license is successfully activated, the console will return the following result:

[2022-12-20 12:25:53+05:00] INF Activating license key...
[2022-12-20 12:25:54+05:00] INF License activated successfully!

Offline license activation

Before activation, make sure that the license_key field (from file ./cfg/license-server.settings.cfg) contains the license key.

note

For offline activation, set "1" in the enable_offline_activation field in the license-server.settings.cfg file.

Run the command below to generate an offline license request:

$ ./cli.sh license-server generate-offline

As a result, the request-offline.dat file should appear in the setup directory.

Send the generated request-offline.dat request file to baf-request@3divi.com. The license file will be sent in the response email.

Copy the received license file to the setup folder.

Open the license-server.settings.cfg configuration file in a text editor and set the license_offline_activation_file variable to the name of the license file, including its extension (if any).
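
As an illustration, the two offline-activation variables in license-server.settings.cfg might look like this (the key=value syntax and the file name license-offline.lic are assumptions):

enable_offline_activation=1
license_offline_activation_file=license-offline.lic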

Run the license activation command:

$ ./cli.sh license-server activate

When the license is successfully activated, the console will return the following result:

[2022-09-08 01:30:36+05:00] INF Offline activating license key...
[2022-09-08 01:30:36+05:00] INF License activated successfully!

Activation of the document recognition subsystem (DVS)

If you don't plan to use BAF for document processing, skip this step.

To activate the subsystem, move the license file License.json (the file is attached to the email) to the setup/modules/dvs/ folder.

Deployment

Launch deployment

Install facial recognition subsystem (OMNI Platform)

Run the installation of the first OMNI Platform module:

$ ./cli.sh image-api install 

Run the installation of the second OMNI Platform module:

$ ./cli.sh platform install 

If necessary, run the installation of OMNI Platform web interface:

$ ./cli.sh platform-ui install

To continue the installation, open the /etc/hosts file and add the following lines at the end of the file:

<external_ip_address> <platform_domain>
<external_ip_address> <baf_domain>

Install document recognition subsystem (DVS) (optional)

If you do NOT plan to work with documents, skip this step.

Run DVS installation:

$ ./cli.sh dvs install 

Initialize the database in DVS (it is recommended to wait at least 10 seconds before using the command below):

$ ./cli.sh dvs init-db 

When initialization is successfully completed, the console will return the following message:

INSERT 0 2
INSERT 0 2

Next, you need to get tokens from the deployed subsystem:

$ ./cli.sh dvs get-token

As a result, you will receive two tokens that must be written to the dvs-token section of the ./cfg/baf.secrets.json configuration file. Also, in the ./cfg/baf.values.yaml file, set baf.dvs.url in the form http://<external_ip_address>:5100. An illustrative sketch is given below.
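
For illustration only, the resulting entries might look like the sketches below; the exact key names inside dvs-token are assumptions, and the values are placeholders:

./cfg/baf.secrets.json (fragment):

  "dvs-token": {
    "public": "<public token from dvs get-token>",
    "secret": "<secret token from dvs get-token>"
  }

./cfg/baf.values.yaml (fragment):

baf:
  dvs:
    url: http://<external_ip_address>:5100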

Install the subsystem for estimation of Liveness Reflection (LRS) (optional)

If you plan to work with the “Registration by Selfie and Document” scenario (document recognition subsystem), skip this step.

In the Registration by Selfie scenario LRS provides the ability to save video attempts (beta) and detect a video stream injection attack.

Create a directory to store the object store data using the command:

$ ./cli.sh lrs minio-create-mountpoint

Run the command to generate LRS tokens:

$ ./cli.sh lrs generate-token

Example of output to the console:

sha256:2473ba0ebf5ef66cd68b252bba7b46ae9f7cc3657b5acd3979beb7fbc5d8807f
Fernet key: ......
Access token: ......

As a result, you will get two tokens that must be written to the lrs-tokens section of the ./cfg/lrs.secrets.json configuration file, as illustrated below.
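
As an illustration, after pasting the generated values the lrs-tokens section might look like the fragment below (the key names are assumptions; the values are placeholders):

  "lrs-tokens": {
    "access-token": "<Access token from lrs generate-token>",
    "encryption-token": "<Fernet key from lrs generate-token>"
  }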

Run the command to install the LRS secrets:

$ ./cli.sh lrs install-secrets

Run the command to install the LRS:

$ ./cli.sh lrs install

Install the stunner subsystem for proxying requests through TURN server to LRS (optional)

The stunner subsystem is required to successfully establish a connection between client browsers and the LRS subsystem server to record video. Skip this step if you plan to work with documents.

Run the command to install stunner secrets:

$ ./cli.sh stunner install-secrets

Run the command to install the stunner:

$ ./cli.sh stunner install

Use the following commands to verify that the stunner is running:

$ kubectl get pods | grep stunner
$ kubectl get svc | grep -P 'tcp|udp|stunner'

If all pods have a Running status and 5 services are running, then stunner is running successfully.

Run the command to obtain the ports on which the TURN server is accessible from the outside:

$ ./cli.sh stunner get-ports

Example output of the command:

tcp-gateway: 31021
udp-gateway: 30796

Install BAF

Get a token from OMNI Platform:

$ ./cli.sh platform get-token - http://<platform_domain> <platform_user_email>

As a result, you will receive a token that must be written to the configuration file ./cfg/baf.secrets.json in the platform-token section.

Next, initialize the BAF secrets for the cluster:

$ ./cli.sh baf install-secrets 

Run BAF installation:

$ ./cli.sh baf install 

To monitor the deployment process, open another terminal tab and enter the following command:

$ watch 'kubectl get pods'

BAF is running if all pods have the Running status.

Configure DNS server

To provide access to BAF, the DNS server on your network should contain a record stating that the domain is available at <external_ip_address>.

For testing, specify the IP address and domain in the /etc/hosts file on Linux or in C:\Windows\System32\drivers\etc\hosts on Windows.

To do this, add a new line like <external_ip_address> <host> at the end of this file, set the values for the corresponding variables and save the file. Note that you need to have administrator privileges to edit the hosts file.
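
For example, assuming the external address 203.0.113.10 and the domain baf.example.com (both are placeholder values), the added line would look like this:

203.0.113.10 baf.example.com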

Scalability

When the load increases, the following services can be scaled manually to stabilize BAF operation:

  • platform-processing-dep is the service used to accumulate the results of the work of handler services (age-estimator-dep, emotion-estimator-dep, gender-estimator-dep, face-detector-face-fitter-dep, mask-estimator-dep, face-detector-liveness-estimator-dep).
  • image-api-face-detector-liveness-estimator-dep is the service used to detect a face and determine if the face in the image is real or fake.
  • image-api-age-estimator is the service used to estimate a person’s age from a face image.
  • image-api-gender-estimator is the service used to estimate a person’s gender from a face image.
  • image-api-mask-estimator is the service that detects if a person in the image is wearing a medical mask.
  • image-api-emotion-estimator is the service that estimates a person's emotions from a face image.

To scale the service, run the command below:

$ kubectl scale deployment <SERVICE_NAME> --replicas <COUNT>

where SERVICE_NAME is the service name (for example, gateway-dep) and COUNT is the number of service instances.
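
For example, a call to run two instances of the gateway-dep service mentioned above (the replica count here is arbitrary):

$ kubectl scale deployment gateway-dep --replicas 2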

note

When using GPU acceleration, the image-api-face-detector-template-extractor-dep service supports only one instance per video accelerator, so to scale it to N instances you need N available video accelerators in the cluster. If you don't have multiple GPUs, you can increase utilization by changing the processing.services.face-detector-template-extractor.workers parameter in the ./cfg/image-api.values.yaml file.

To sustain a load of A requests/sec for image processing on a server with B physical CPU cores, set the number of replicas of each of the specified services to min(A, B). For example, to handle 4 requests/sec on an 8-core server, set min(4, 8) = 4 replicas.

To save the scaling settings, open the ./cfg/platform.values.yaml and ./cfg/image-api.values.yaml files, find the replicas field in the corresponding service section, and set the new values.

During subsequent installations, the services will be automatically scaled to the specified values.