Version: 2.7.1 (latest)

Configuration

You can configure OMNI Agent through the dashboard or by editing files in the OMNIAgent folder located at:

  • Linux: ~/.local/share/OMNIAgent
  • Windows: %LocalAppData%/OMNI Agent

Configuration files for editing are located in the config folder. For your convenience, configuration files with default settings are available for review in the config.default folder. Note that any value changed in config files takes precedence over the default configuration.

The following basic settings can be modified through the dashboard or by editing config/run_params.json and config/log_params.json configuration files.

info

After adding or changing values in the configuration files, restart OMNI Agent.

Face detection and recognition

OMNI Agent uses detectors and recognition methods from 3DiVi Face SDK, a set of libraries for developing facial recognition solutions. Recognition includes the following operations on biometric facial templates:
  • Verification 1:1 – comparing two biometric templates (faces) with each other and estimating their degree of similarity.
  • Identification 1:N – comparing one biometric template (face) against a set of other templates (faces), searching for and estimating matches.

When comparing face templates, the matcher calculates the difference between the biometric features of the faces. The result is a measure of correspondence between the face images and the probability that they belong to the same person.

Setting through configuration file

You can configure all parameters specified in this section by editing config/run_params.json.

How to enable or disable face detection and recognition

Set the pipelines.face.enabled (Boolean) field to true (enabled) or false (disabled).

IMPORTANT

Disabling this module also stops Agent synchronization with the local database.
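
For example, a minimal fragment of config/run_params.json with face detection and recognition enabled might look like this (the surrounding fields of the file are omitted, and the exact layout may differ in your version – compare with config.default):

{
  "pipelines": {
    "face": {
      "enabled": true
    }
  }
}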

Face detection and recognition parameters to configure
  • score: the probability of correct recognition. The score value is a float in the range [0..1]. A high score means a high degree of similarity, i.e. that two biometric templates belong to the same person. You can change the score value in the desired_score (float) field. The recommended value is 0.876.

    When the score is specified, faR and frR are not taken into account in recognition.

    tip

    Make sure the score values specified for OMNI Agent and OMNI Platform match. Otherwise, some of the activities generated from the transferred agent processes will not be linked to the corresponding profile, which means that notifications for such activities won't be received.

    For example:

    • score specified for OMNI Agent = 0.7
    • score specified for OMNI Platform = 0.85

    In this case, activities generated from Agent processes with a score value in the range [0.7, 0.85) will not be attached to the corresponding profile, and notifications for them will also not appear.

  • faR: False acceptance rate (FAR) shows the system's resistance to false acceptance errors. Such an error occurs when the biometric system recognizes a new face as a previously detected one. The rate is calculated as the number of false acceptances divided by the total number of recognition attempts. You can change the faR value in the desired_far (float) parameter. By default, the faR value equals 1e-5.

  • frR: False rejection rate (FRR). A false rejection occurs when the system fails to recognize a previously detected face. The rate shows the percentage of recognition attempts that result in a false rejection. You can change the frR value in the desired_frr (float) parameter. By default, the frR value equals 0.

  • Number of identification candidates. A candidate is a profile from the database with which identification occurred (a face from the submitted frame and a face from the database have a high degree of similarity). The number of allowable candidates can be specified in the pipelines.face.video_worker_override_parameters.search_k (int) field. The default value is 3. An example configuration fragment covering these parameters is shown after this list.
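
As an illustration, the recognition parameters described above might be set in config/run_params.json roughly as follows. The search_k path is given in this section; placing desired_score, desired_far and desired_frr directly under pipelines.face is an assumption, so check config.default for their exact location in your version:

{
  "pipelines": {
    "face": {
      "desired_score": 0.876,
      "desired_far": 1e-5,
      "desired_frr": 0,
      "video_worker_override_parameters": {
        "search_k": 3
      }
    }
  }
}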

How to enable or disable age and gender estimation

Set the pipelines.age_gender.enabled (Boolean) field to true (enabled) or false (disabled).

How to enable or disable estimation of emotions

Set the pipelines.emotions.enabled (Boolean) field to true (enabled) or false (disabled).
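
For example, to enable age and gender estimation and disable emotion estimation, the corresponding fragment of config/run_params.json might look like this (other fields of the file are omitted):

{
  "pipelines": {
    "age_gender": {
      "enabled": true
    },
    "emotions": {
      "enabled": false
    }
  }
}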

Setting via dashboard

You can configure the parameters mentioned above in Settings of OMNI Agent dashboard.


Body detection and comparison

Setting through configuration file

How to enable or disable body detection and comparison

Set the pipelines.body_detector.enabled (Boolean) field to true (enabled) or false (disabled).

To exclude low-confidence matches, you can change confidence thresholds for body detection and comparison.

  • The body detection confidence threshold is set in the detector_confidence (float) parameter and equals 0.9 by default, with a range of values from 0 to 1.

  • The body comparison confidence threshold is set in the cos_reident_confidence (float) parameter and equals 0.6 by default, with a range of values from -1 to 1. An example configuration fragment covering these thresholds is shown after this list.
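
A possible config/run_params.json fragment with body detection enabled and both thresholds at their defaults is shown below. This section does not fix the exact nesting of detector_confidence and cos_reident_confidence; placing them under pipelines.body_detector here is an assumption, so check config.default:

{
  "pipelines": {
    "body_detector": {
      "enabled": true,
      "detector_confidence": 0.9,
      "cos_reident_confidence": 0.6
    }
  }
}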

Setting via dashboard

You can enable or disable body detection in Settings of OMNI Agent dashboard.

Biometric templates

Template generation settings

When a person enters the frame, OMNI Agent builds a biometric template based on the first image of the face that passes the image quality threshold. You can enable/disable template generation in the Settings of the OMNI Agent web interface (enabled by default).

Number of template generation threads

By default, biometric template generation allocates 1 CPU core (1 thread) per camera, which may be insufficient when running OMNI Agent in multi-threaded (multi-camera) mode. To increase the number of template generation threads, open the configuration file config/run_params.json and change the value of the processing scale factor:

  • pipelines.face.processing_scale_factor: float – default value is 1.5
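
For example, when running several cameras you might raise the factor roughly as follows (the value 3.0 is only an illustration; tune it to your CPU and camera count):

{
  "pipelines": {
    "face": {
      "processing_scale_factor": 3.0
    }
  }
}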

Best shots

A best shot is the highest-quality image of a face captured within a set time interval while the person is in the frame.

By default, OMNI Agent continuously searches for the best face shot at a 3000 ms interval. This means that throughout a person's presence in the camera's field of view, a new best shot is sought every 3000 ms. This setup is ideal for scenarios not requiring instant identification (e.g., safe cities or access control systems with cameras detecting a face seconds before reaching the turnstile), where more time can be spent finding the best shot, potentially yielding a more accurate identification result.

However, for remote identification / access control using biometric terminals where swift entry into an app / turnstile is needed, a 3000 ms interval significantly slows down scenario execution. For such cases, put 0 ms in the "Best shot search timeout" field in the OMNI Agent web interface settings. Consequently, when a person enters the camera's field of view, the first frame passing the quality threshold will automatically be recognized as the best shot and sent for identification.


Send best shots via webhooks

To send best shots via webhooks, enable the Send bestshots in webhook option in the Settings of the OMNI Agent web interface, or set the enable_webhook_image2jpg_conversion field to true. The default value is false. The best shot is encoded in base64 as a JPEG image.
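
For example, when enabling the option through the configuration file, the fragment could look like this (the field is shown at the top level of config/run_params.json as an assumption; check config.default for its exact position):

{
  "enable_webhook_image2jpg_conversion": true
}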

Proxy server

Setting via dashboard

If the user accesses the Internet through a proxy server, this connection should also be enabled to install and configure OMNI Agent. To use a proxy server, click the Settings icon on the OMNI Agent dashboard and select a system or custom proxy server in the opened tab. When choosing a custom proxy server, enter its address in the dedicated field. After completing the changes, click the Save button.


Port of web configurator

Setting through configuration file

You can configure all parameters specified in this section by editing config/run_params.json.

By default, the web configurator runs on port 8080. To change the port, add the http_server_port: (int) field in the configuration file and specify the desired port for opening the web configurator.
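
For example, to open the web configurator on port 9090 instead of 8080, add the following (assuming the field sits at the top level of config/run_params.json):

{
  "http_server_port": 9090
}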

Data retransmission

If the connection fails, the data sent by the Agent is saved in a dedicated storage. When the connection is restored, this data is resent to the server.

Open the configuration file TDV/tdv_connection_params.json and provide values for the following variables:

  • resend_on_success_count: int – number of packets to resend. The default value is 15.
  • data_keeper_max_bytes: int64 – the maximum number of bytes in the storage. The default value is 68719476736 (64 GB). An example fragment with these values is shown after this list.
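
For example, a fragment of TDV/tdv_connection_params.json with the default values could look like this (the surrounding fields of the file are omitted):

{
  "resend_on_success_count": 15,
  "data_keeper_max_bytes": 68719476736
}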

Cropping of original frames

To optimize the processing of video streams and video files, you can pass cropped frames (crops) to OMNI Agent instead of the original ones. Crop parameters (coordinates of the upper left corner, width and height of the crop) are specified in normalized coordinates.

tip

Cropping is recommended so that OMNI Agent does not waste resources processing static objects on the sides of the original frame (for example, a wall, a cabinet, etc.).

Setting through configuration file

To configure cropping, open the configuration file config/run_params.json and add a new object to the web_cams field:

  • frame_crop:
    • x: float – X coordinate of the upper left corner in normalized coordinates of the original frame
    • y: float – Y coordinate of the upper left corner in normalized coordinates of the original frame
    • width: float – Crop width in normalized coordinates
    • height: float – Crop height in normalized coordinates

For example, the web_cams field with the image crop fields filled in will look like this:

{
  "web_cams": [
    {
      "color_camera": {
        "creationDate": "2023-10-12T09:41:42.789486+00:00",
        "id": "7da98714-ae35-4832-834b-83cf82e4fe7a",
        "lastModified": "2023-10-12T09:47:38.190009+00:00",
        "real_name": "",
        "stream": "rtsp://guest:q2w3e4r5t@192.168.122.154/stream",
        "title": "rtsp://192.168.122.97:554",
        "type": "IP",
        "frame_crop": {
          "x": 0.2,
          "y": 0.3,
          "width": 0.3,
          "height": 0.25
        }
      }
    }
  ]
}

Setting via dashboard

To crop the original frame via the dashboard, click the Gear icon in the upper right corner of the camera card. In the opened tab, click the Edit icon in the Frame cropping area section.


This will take you to a page where you can resize the original camera preview by dragging the red edges of the image. After editing, click Save.


As a result, a preview with the new dimensions will be displayed in the dashboard.


GPU usage

Using CUDA for acceleration

To boost the performance of OMNI Agent using GPU, open the configuration file config/run_params.json and enable the use of CUDA:

Enable CUDA for face detector:

  • pipelines.face.video_worker_override_parameters.use_cuda: bool – true (enabled), false (disabled)

Enable CUDA 10 support for face detector (use_cuda must be true):

  • pipelines.face.video_worker_override_parameters.use_legacy: bool – true (enabled), false (disabled)

Enable CUDA for facial recognition:

  • pipelines.face.recognizer_override_parameters.use_cuda: bool – true (enabled), false (disabled)

Enable CUDA 10 support for facial recognition (use_cuda must be true):

  • pipelines.face.recognizer_override_parameters.use_legacy: bool – true (enabled), false (disabled)

Enable CUDA for body detector:

  • pipelines.body_detector.use_cuda: bool – true (enabled), false (disabled). You need to add this field to config/run_params.json manually.

Enable CUDA for human action recognition (HAR):

  • pipelines.action_recognition.use_cuda: bool – true (enabled), false (disabled). You need to add this field to config/run_params.json manually.

tip

To use CUDA in all tasks except working with faces, you can enable the use_cuda_onnx field in config/run_params.json.
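
As an illustration, a config/run_params.json fragment that enables CUDA for the face detector, facial recognition, body detection and HAR (without the legacy CUDA 10 mode) might look like this; showing use_cuda_onnx at the top level is an assumption, so check config.default for its exact position:

{
  "use_cuda_onnx": true,
  "pipelines": {
    "face": {
      "video_worker_override_parameters": {
        "use_cuda": true,
        "use_legacy": false
      },
      "recognizer_override_parameters": {
        "use_cuda": true,
        "use_legacy": false
      }
    },
    "body_detector": {
      "use_cuda": true
    },
    "action_recognition": {
      "use_cuda": true
    }
  }
}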

Setting via dashboard

You can enable CUDA for face detection, facial recognition and simultaneously for body detection and HAR in OMNI Agent dashboard settings.


Logging and traces

Logging

You can view the logs in the logfile_*.log file in the log folder. To change the logging level, go to the log_params.json configuration file and set a value for the sev_level (string) parameter. Available logging levels: TRACE, DEBUG, INFO, WARNING, ERROR, FATAL (listed in ascending order of importance).
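
For example, to raise the logging level to ERROR, the relevant fragment of log_params.json might look like this (other fields of the file are omitted):

{
  "sev_level": "ERROR"
}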

Traces

To analyze processing results and identify problems that arise, you can save a highly detailed processing log. We call this log a trace.

To enable or disable trace collection, open the log_params.json configuration file, find the traces object, and set the values for enable (enables tracing, true/false) and trace_interval_in_msec (trace collection interval in milliseconds, integer).
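
For example, a traces object in log_params.json that enables trace collection every 1000 ms might look like this (the interval value is only an illustration):

{
  "traces": {
    "enable": true,
    "trace_interval_in_msec": 1000
  }
}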

When trace recording is enabled, OMNI Agent writes messages to the console such as

"Starting to write the trace...", "Trace written: <UUID>"

After the last message, a new file is written with a name equal to the written UUID.

Received traces are saved in the Traces subdirectory located at:

  • Linux: ~/.local/share/OMNIAgent/Traces
  • Windows: %LocalAppData%/OMNI Agent/Traces

The trace files conform to the BSON format when read starting from the second byte.

Multiple cameras

Connecting the additional video stream (camera)

OMNI Agent supports connection to multiple video streams (cameras).
  • To connect an additional video stream via the OMNI Agent dashboard, click Add a camera on the cameras page and specify the IP/USB camera.
  • To add an additional video stream to OMNI Agent by editing the configuration file, follow the steps below:
    • In the config/run_params.json configuration file, create a new camera object in the web_cams array. To do this, simply copy the already existing camera object from the web_cams array.
    • Specify the IP address or ID of the new connected camera in the stream parameter of the new camera object. For USB cameras, you additionally need to specify the frame size: width and height.
    • Run and activate the OMNI Agent.

As a result, the OMNI Agent will transfer data to OMNI Platform from two video streams (cameras).
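
For illustration, a web_cams array with a second IP camera added by copying the first object might look roughly like this; the stream URLs, titles and IDs are placeholders taken from the examples in this document, and your own camera objects may contain additional fields:

"web_cams": [
  {
    "color_camera": {
      "id": "7da98714-ae35-4832-834b-83cf82e4fe7a",
      "real_name": "",
      "stream": "rtsp://guest:q2w3e4r5t@192.168.122.154/stream",
      "title": "camera 1",
      "type": "IP"
    }
  },
  {
    "color_camera": {
      "id": "c59fe4cb-5e9a-4bcb-a34b-e88879d1d692",
      "real_name": "",
      "stream": "rtsp://guest:q2w3e4r5t@192.168.122.155/stream",
      "title": "camera 2",
      "type": "IP"
    }
  }
]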

To connect and operate additional video streams, you will need a machine with the following design characteristics:

  • CPU: number of cores = 1 core + 3 x number of video streams @3GHz with AVX support. When one of the modules (face/body) is disabled, 1 CPU core per stream is removed; in this case, the number of cores = 1 + 2 x number of video streams.
  • RAM: 1 GB + 1 GB x number of video streams (the value refers to free RAM). If one of the modules (face/body) is disabled, the per-stream multiplier decreases by 25%; in this case, the amount of RAM = 1 GB + 0.75 GB x number of video streams.
  • HDD: 3GB free space.

Video files

To run OMNI Agent for processing video files, follow the steps below:

  1. Connect OMNI Agent to OMNI Platform (launch the Agent, enter the server URL, and the email and password of your OMNI Platform account).

  2. On the Add a Camera page, enter 123 in the URL field of the IP camera.

  3. Close OMNI Agent by closing the terminal.

  4. Open config/run_params.json configuration file:

  • Windows: %LocalAppData%\OMNIAgent\config\run_params.json
  • Linux: ~/.local/share/OMNIAgent/config/run_params.json
  1. Enable the "lock_cam_on_module_creation" field:

    "lock_cam_on_module_creation": true
  2. In the "web_cams" section, in the "stream" field, specify the path to the video file.

  3. Add the fields below:

    • "is_benchmark_camera": true
    • "is_nonlocking_camera": true
  4. Specify values for the "title" and "real_name" fields (optional). An example of the filled-in "web_cams" section for running OMNI Agent to process a video file:

    "web_cams": [
    {
    "color_camera": {
    "creationDate": "2023-12-12T08:32:30.879147+00:00",
    "frame_crop": {
    "height": 1,
    "width": 1,
    "x": 0,
    "y": 0
    },
    "id": "c59fe4cb-5e9a-4bcb-a34b-e88879d1d692",
    "lastModified": "2023-12-12T08:32:30.879109+00:00",
    "real_name": "",
    "stream": "/home/stranger/Downloads/test_video_office.mp4",
    "title": "test_vid",
    "type": "IP",
    "is_benchmark_camera": true,
    "is_nonlocking_camera": true
    }
    }
    ]
    tip

    The path in the "stream" field must contain only Latin letters and numbers. When filling out the field on Windows, use backslashes: "C:\\Users\\vikki\\Desktop\\test_video_office.mp4".

  5. Run OMNI Agent.

    OMNI Agent will stop after video file processing is completed. Processing results in the form of processes and/or events will be transferred to OMNI Platform or to a third-party service via webhooks.

    note

    For stable operation of OMNI Agent, it is not recommended to process multiple video files at the same time.