Version: 2.6.0

Processes and events

OMNI Agent converts recognition and tracking data into processes and events, which are then sent to OMNI Platform or to third-party services.

Processes

A data format that represents recognition and tracking results within time intervals.

Each process contains the following set of data:

  • Process ID
  • Process time interval (start and end of the process)
  • Process type (track, human, face, body, emotion, etc.)
  • Detection object with face attributes (an object is a human in the camera’s field of view)
  • Best shot

Processes are sent to OMNI Platform via the HTTP protocol or to a third-party service via webhook / MQTT.

How to enable or disable process submission

Setting via configuration file

Set the events.enable_activities parameter to true (process submission enabled) or false (disabled).
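
A minimal sketch of the corresponding fragment of config/run_params.json (assuming the parameter sits inside an events object, as the dotted name events.enable_activities suggests; other fields are omitted):

"events": {
    "enable_activities": true // true – processes are sent, false – sending is disabled
}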

Setting up via dashboard

Click the Settings icon on the OMNI Agent dashboard and enable or disable the Send processes option in the tab that opens.

Anonymous mode

OMNI Agent has anonymous mode enabled by default. In anonymous mode, face images are not transferred to OMNI Platform, which ensures personal data protection and prevents person recognition outside the system.

You can enable or disable anonymous mode on the OMNI Agent dashboard or by editing config/run_params.json.

Setting through configuration file

In config/run_params.json, set the anonymous_mode (Boolean) parameter to true (anonymous mode enabled) or false (disabled).
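
A minimal sketch of the corresponding fragment of config/run_params.json (assuming anonymous_mode is a top-level field, as its undotted name suggests; other fields are omitted):

"anonymous_mode": true // true – face images are not transferred to OMNI Platform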

Setting via dashboard

Click the Settings icon on the OMNI Agent web page and enable or disable the Anonymous mode option.

Send processes via webhooks

You can add a webhook in the Settings of the OMNI Agent dashboard, or open the config/run_params.json configuration file and specify values for the webhook_tracking_subscribers parameter. This parameter is a list of objects that define the address of the server receiving data from OMNI Agent; each object can be written in one of two versions:

Version 1

{
"url": "http://127.0.0.1:5000/trigger" // address with the endpoint
}

Version 2 (default)

{ 
"host": "127.0.0.1", // address
"port": "5000", // port
"is_secured": false, // http/https flag
"path": "/trigger" // endpoint
}

An array of processes is sent every 5 seconds (by default), as well as at the beginning and end of tracking, when a person is matched against the database, when a person enters the ROI, or when a person crosses a marked line. To change the process transfer interval, edit the ongoings_interval_in_msec field in the config/run_params.json file (measured in milliseconds).
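
A combined sketch of these settings in config/run_params.json (illustrative values; it is assumed that webhook_tracking_subscribers and ongoings_interval_in_msec are sibling top-level fields, other fields are omitted):

"webhook_tracking_subscribers": [
    {
        "host": "127.0.0.1",  // address of the receiving server
        "port": "5000",       // port
        "is_secured": false,  // http/https flag
        "path": "/trigger"    // endpoint
    }
],
"ongoings_interval_in_msec": 5000 // send the array of processes every 5 seconds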

How to send best shots via webhooks

The best shot is the face crop with the highest image quality in the track. To send best shots via webhooks, enable the Send bestshots in webhook option in the Settings of the OMNI Agent dashboard, or set the enable_webhook_image2jpg_conversion field to true (the default value is false). The best shot is encoded in base64 as a JPEG image.
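
A minimal sketch of the corresponding fragment of config/run_params.json (assuming the field is a top-level Boolean; other fields are omitted):

"enable_webhook_image2jpg_conversion": true // include the base64-encoded JPEG best shot in webhook payloads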

Send processes via MQTT

  1. Download and install the MQTT broker from here

  2. Open config/run_params.json file and enter the values for the following fields:

    • mqtt_settings – parameters for connecting to the MQTT broker
      • enable – bool
      • brocker_address – string, address of the MQTT broker that receives processes. The current implementation will only work with localhost
      • mqtt_port – int, the port on which the MQTT broker listens for connections
      • client_identifier – string, unique MQTT client identifier that will be used when connecting to the broker
      • mqtt_topic – string, the topic in the MQTT broker to which processes will be sent

    Example of filled fields:

    "mqtt_settings": {
    "brocker_address": "localhost",
    "client_id": "default_mqtt_client",
    "mqtt_port": 1883,
    "mqtt_topic": "events",
    "enable": true
    }
  3. Start the Mosquitto MQTT service (on a Linux system):

    sudo /etc/init.d/mosquitto start

    Basic commands:

    • mosquitto_sub: a command-line client of the MQTT broker that is used to subscribe to topics ("mqtt_topic") and receive messages published to those topics.
    • -h <hostname>: specifies the hostname (or IP address) of the MQTT broker to connect to.
    • -p <port>: specifies the port on which the MQTT broker listens (default 1883).
    • -t <topic>: specifies the topic you want to subscribe to.

    Usage example:

    mosquitto_sub -h localhost -p 1883 -t events

    This command will connect to the MQTT broker on localhost, subscribe to the "events" topic, and print received messages to the screen.

Events

A data format that represents recognition and tracking results as facts. OMNI Agent can generate the following types of events:

  • Identification/non-identification events (a person in the camera's field of view is identified/non-identified)
  • ROI events (a person is in/out of ROI)
  • Line crossing events (a person crossed the line marked on the video stream in the forward/reverse direction)
  • HAR events (a person has fallen, is sitting, is lying down, or is involved in a fight)

Event structures

Structure of Identification Event:

  • type – identification
  • date – event time
  • id – event identifier
  • object (the object that generated the event):
    • class – object class (for example, human)
    • id – object identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, face)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • original_image – full frame
  • identification_data – array of identification candidates (profiles):
    • profile_id – profile identifier
    • group_ids – identifiers of the groups to which the profile was added
    • score – degree of similarity of the face from the frame with the face from the profile from 0 (0%) to 1 (100%)
    • far – False Acceptance Rate: the rate at which the system mistakes images of different people for images of the same person
    • frr – False Rejection Rate: the rate at which the system mistakes two images of the same person for images of different people
    • distance – distance between compared template vectors. The smaller the value, the higher the confidence in correct recognition
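
A hypothetical identification event payload assembled from the fields above (all values are illustrative; the exact nesting and value formats may differ from the actual output):

{
    "type": "identification",
    "date": "2024-01-01T12:00:00Z",              // event time (illustrative format)
    "id": "event-1",                             // event identifier
    "object": { "class": "human", "id": "42" },  // the object that generated the event
    "parents": [ { "id": "process-1", "type": "face" } ], // event parent, usually a process
    "source": "camera-1",                        // video stream identifier
    "trigger_source": { "id": "agent-1" },
    "image": "<base64 face crop>",
    "original_image": "<base64 full frame>",
    "identification_data": [
        {
            "profile_id": "profile-1",
            "group_ids": ["group-1"],
            "score": 0.93,      // similarity from 0 (0%) to 1 (100%)
            "far": 0.0001,
            "frr": 0.01,
            "distance": 0.35    // smaller value – higher confidence
        }
    ]
}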

Structure of Non-Identification Event:

  • type – non_identification
  • date – event time
  • id – event identifier
  • object (the object that generated the event):
    • class – object class (for example, human)
    • id – object identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, face)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • original_image – full frame

Structure of ROI Event:

  • type – roi
  • date – event time
  • id – event identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, body)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • original_image – full frame
  • roi_data:
    • direction (a person is in/out of ROI) – in/out

Structure of Line Crossing Event:

  • type – crossing
  • date – event time
  • id – event identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, body)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • original_image – full frame
  • crossing_data:
    • direction (a person crossed the line in the forward/reverse direction) – in/out

Structure of HAR Event:

  • type – har
  • date – event time
  • id – event identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, body)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • original_image – full frame
  • har_data:
    • action – fight, fall, sit, lie

Events are sent to OMNI Platform via the HTTP protocol. OMNI Platform can transfer the received events to a third-party service via WebSockets. For more information, see the OMNI Platform documentation.

Important

Events are available only for OMNI Agent Online.

How to enable or disable event submission

Setting through configuration file

In config/run_params.json, set enable_activities, enable_identification, enable_non_identification, enable_har, enable_line_crossing, and enable_roi_crossing to true (event type enabled) or false (disabled).

Identification events include data on candidates: profiles from the database with which the identification occurred (the face from the submitted frame and the face from the database have a high degree of similarity). The maximum number of candidates can be specified in the pipelines.face.video_worker_override_parameters.search_k field (int). The default value is 3.
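
A combined sketch of these settings in config/run_params.json (it is assumed that the event flags live under the events object next to enable_activities, and that the dotted path for search_k maps to nested objects; other fields are omitted):

"events": {
    "enable_activities": true,
    "enable_identification": true,
    "enable_non_identification": false,
    "enable_har": true,
    "enable_line_crossing": true,
    "enable_roi_crossing": true
},
"pipelines": {
    "face": {
        "video_worker_override_parameters": {
            "search_k": 3 // maximum number of identification candidates
        }
    }
}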

Setting via dashboard

Click the Settings icon on the OMNI Agent dashboard and enable the types of events to be submitted to OMNI Platform.


Event cooldown

Event cooldown is a time interval counted from the first detection of an event, during which repeated HAR, identification, and non-identification events (duplicates) are not sent. Such duplicates can be caused by changes in perspective, head rotations, people crossing tracks, or brief obstructions.

To configure cooldowns, open the config/run_params.json file and change the values in the following fields:

  • events.har_cooldown_interval: int – default value 5000 – Cooldown of HAR events (in ms).
  • events.same_human_identification_cooldown_interval: int – default value 5000 – Cooldown of identification events (in ms).
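
A sketch of the cooldown fields in config/run_params.json (default values shown; assuming both fields sit under the events object, as the dotted names suggest):

"events": {
    "har_cooldown_interval": 5000,                       // cooldown of HAR events, in ms
    "same_human_identification_cooldown_interval": 5000  // cooldown of identification events, in ms
}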