Version: 2.9.1 (latest)

Processes and events

OMNI Agent converts recognition and tracking data into processes and events, which it then sends to OMNI Platform or to third-party services.

Processes

A data format that represents recognition and tracking results within time intervals.

Each process contains the following set of data:

  • Process ID
  • Process time interval (start and end of the process)
  • Process type (track, human, face, body, emotion, etc.)
  • Detection object with face attributes (object is a human in the camera’s field of view)
  • Best shot

Process specification

Process structure

Enable / disable process submission

Setting through configuration file

Set the events.enable_activities parameter to true (module enabled) or false (module disabled).
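Assuming the dotted parameter name maps to nested objects in config/run_params.json (the file's exact layout may differ), the relevant fragment would look like:

```json
{
  "events": {
    "enable_activities": true
  }
}
```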

Setting via web interface

Click the Settings icon in the OMNI Agent web interface and enable or disable the Send processes option in the tab that opens.

Anonymous mode

OMNI Agent has anonymous mode enabled by default. In anonymous mode, face images are not transferred to OMNI Platform, which ensures personal data protection and rules out person recognition outside the system.

You can enable or disable anonymous mode in the OMNI Agent web interface or by editing config/run_params.json.

Setting through configuration file

For configuration via config/run_params.json, set the anonymous_mode parameter (Boolean) to true (mode enabled) or false (mode disabled).
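As a minimal illustration, the relevant fragment of config/run_params.json:

```json
{
  "anonymous_mode": true
}
```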

Setting via web interface

Click the Settings icon in the OMNI Agent web interface and enable or disable the Anonymous mode option.

Send processes via webhooks

Setting via web interface

(Screenshot: webhook settings in the OMNI Agent web interface.)

Setting through configuration file

Open the file config/run_params.json and specify values for the webhook_tracking_subscribers parameter. This parameter is a list of objects, each describing a server address that receives processes or events from OMNI Agent:

{
  "type": "processes", // type (or leave this field blank)
  "url": "http://127.0.0.1:5000/trigger" // address with endpoint
}

An array of processes is sent every 5 seconds (by default), as well as at the start and end of tracking, when a person is matched against the database, and when a person enters an ROI or crosses a marked line. To change the transfer interval, edit the ongoings_interval_in_msec field in the config/run_params.json file (measured in milliseconds).
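A minimal sketch of a server that could receive these webhooks, using only the Python standard library. The /trigger endpoint matches the example above; the payload handling is an assumption, since the exact process schema is product-specific:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class TriggerHandler(BaseHTTPRequestHandler):
    """Accepts POSTed process arrays on /trigger and prints a summary."""

    def do_POST(self):
        if self.path != "/trigger":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # The payload schema is product-specific; here we only count items.
        count = len(payload) if isinstance(payload, list) else 1
        print(f"received {count} process(es)")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        # Silence the default per-request logging.
        pass


# To run standalone (port taken from the webhook example above):
#   HTTPServer(("127.0.0.1", 5000), TriggerHandler).serve_forever()
```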

Send processes via MQTT

  1. Download and install the Mosquitto MQTT broker

  2. Open config/run_params.json file and enter the values for the following fields:

    • mqtt_settings – parameters for connecting to the MQTT broker:
      • enable – bool
      • brocker_address – string, address of the MQTT broker that receives processes. The current implementation works only with localhost
      • mqtt_port – int, the port on which the MQTT broker listens for connections
      • client_identifier – string, unique MQTT client identifier used when connecting to the broker
      • default_mqtt_topic – string, the topic in the MQTT broker to which processes are sent

    Example of filled fields:

    "mqtt_settings": {
      "brocker_address": "localhost",
      "client_id": "default_mqtt_client",
      "mqtt_port": 1883,
      "default_mqtt_topic": "events",
      "enable": true
    }
  3. Start the Mosquitto MQTT service (on Linux systems):

    sudo /etc/init.d/mosquitto start

    Basic commands:

    • mosquitto_sub: a command-line MQTT client used to subscribe to topics ("default_mqtt_topic") and receive messages published to those topics.
    • -h <hostname>: specifies the hostname (or IP address) of the MQTT broker to connect to.
    • -p <port>: specifies the port on which the MQTT broker listens (default 1883).
    • -t <topic>: specifies the topic you want to subscribe to.

    Usage example:

    mosquitto_sub -h localhost -p 1883 -t events

    This command will connect to the MQTT broker on localhost, subscribe to the "events" topic, and print received messages to the screen.
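On the consuming side, the body of each MQTT message can be decoded like this. The function below is a sketch: the field names ("id", "type") are assumptions about the payload schema, and in a real deployment it would be registered as the message callback of an MQTT client library such as paho-mqtt:

```python
import json


def handle_payload(payload: bytes):
    """Decode one MQTT message carrying a process or an array of processes.

    The fields "id" and "type" are illustrative assumptions about the
    payload schema, not a documented contract.
    """
    data = json.loads(payload.decode("utf-8"))
    processes = data if isinstance(data, list) else [data]
    return [(p.get("id"), p.get("type")) for p in processes]


# With paho-mqtt (not shown here), this would be wired roughly as:
#   client.on_message = lambda cl, ud, msg: handle_payload(msg.payload)
```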

Events

A data format that represents recognition and tracking results as facts. OMNI Agent can generate the following types of events:

  • Face events:
    • Identification events (a person from the camera is found in the database).
    • Non-identification events (a person from the camera is not found in the database).
    • Face hide control events (a person is within a Region of Interest (ROI) or crosses a marked virtual line, while hiding their face to avoid recognition).
  • HAR (Human Action Recognition) events:
    • Falling
    • Fighting
    • Lying
    • Sitting
  • Regions events:
    • ROI events (a person is within or exits a Region of Interest).
    • Line crossing events (a person crosses a marked virtual line in the video stream in either direction).

Events are sent to OMNI Platform via HTTP. You can then configure the transmission of these events from OMNI Platform to external services via WebSockets.

Important

Events are available only for OMNI Agent Online.

Event specification

Event structure

Description of event structure fields

Structure of Identification Event:

  • type – identification
  • date – event time
  • id – event identifier
  • object (the object that generated the event):
    • class – object class (for example, human)
    • id – object identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, face)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • image_bbox — normalized detection coordinates:
    • [0] - x1 coordinate
    • [1] - y1 coordinate
    • [2] - x2 coordinate
    • [3] - y2 coordinate
  • original_image – full frame (optional)
  • identification_data – array of identification candidates (profiles):
    • profile_id – profile identifier
    • group_ids – identifier of the group to which the profile was added
    • score – degree of similarity of the face from the frame with the face from the profile from 0 (0%) to 1 (100%)
    • far – False Acceptance Rate when the system mistakes images of different people as images of the same person
    • frr – False Rejection Rate when the system mistakes two images of the same person as images of different people
    • distance – distance between compared template vectors. The smaller the value, the higher the confidence in correct recognition
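Since identification_data can hold several candidate profiles, a common consumer-side step is to pick the candidate with the highest similarity score. A minimal sketch using the field names listed above (the sample values are illustrative):

```python
def best_candidate(identification_data):
    """Return the candidate profile with the highest similarity score,
    or None if the candidate list is empty."""
    if not identification_data:
        return None
    return max(identification_data, key=lambda c: c.get("score", 0.0))


# Illustrative candidates (values are made up for the example):
candidates = [
    {"profile_id": "p1", "score": 0.62, "distance": 1.1},
    {"profile_id": "p2", "score": 0.91, "distance": 0.4},
]
```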

Structure of Non-Identification Event:

  • type – non_identification
  • date – event time
  • id – event identifier
  • object (the object that generated the event):
    • class – object class (for example, human)
    • id – object identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, face)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • image_bbox — normalized detection coordinates:
    • [0] - x1 coordinate
    • [1] - y1 coordinate
    • [2] - x2 coordinate
    • [3] - y2 coordinate
  • original_image – full frame (optional)

Structure of ROI Event:

  • type – roi
  • date – event time
  • id – event identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, body)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • image_bbox — normalized detection coordinates:
    • [0] - x1 coordinate
    • [1] - y1 coordinate
    • [2] - x2 coordinate
    • [3] - y2 coordinate
  • original_image – full frame (optional)
  • roi_data:
    • direction (a person is in/out of ROI) – in/out

Structure of Line Crossing Event:

  • type – crossing
  • date – event time
  • id – event identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, body)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • image_bbox — normalized detection coordinates:
    • [0] - x1 coordinate
    • [1] - y1 coordinate
    • [2] - x2 coordinate
    • [3] - y2 coordinate
  • original_image – full frame (optional)
  • crossing_data:
    • direction (a person crossed the line in the forward/reverse direction) – in/out

Structure of HAR Event:

  • type – har
  • date – event time
  • id – event identifier
  • parents (event parent, usually a process):
    • id – event parent identifier
    • type – type of the event parent (for example, human, body)
  • source – video stream identifier
  • trigger_source:
    • id
  • image – cropped image
  • image_bbox — normalized detection coordinates:
    • [0] - x1 coordinate
    • [1] - y1 coordinate
    • [2] - x2 coordinate
    • [3] - y2 coordinate
  • original_image – full frame (optional)
  • har_data:
    • action – fight, fall, sit, lie

Structure of Face Hide Control Event:

  • type (event type) — face_hide_control.
  • date — event time.
  • face_hide_control_data:
    • group_ids — identifier of the group to which the profile is added.
    • match_event_id — identifier of the identification event.
    • no_suitable_face — flag indicating whether a face that needs to be found in the database was detected.
    • profile_id — identifier of the profile from the database.
    • status — status of the control check.
  • id — event identifier.
  • image — detection crop.
  • image_bbox — normalized coordinates of the detection.
    • [0] - x1 coordinate
    • [1] - y1 coordinate
    • [2] - x2 coordinate
    • [3] - y2 coordinate
  • parents (parent event, usually a process):
    • id — identifier of the parent event.
    • type — type of the parent event (e.g., human or body).
  • source — video stream identifier.
  • trigger_source (trigger data that initiated the event, e.g., added line crossing or ROI):
    • id — trigger identifier.
    • name — trigger name.

Enable / disable event submission

Setting through configuration file

To enable or disable events via the configuration file, open config/run_params.json and set the following parameters to true (module enabled) or false (module disabled):

  • events.enable_activities (activity events)
  • events.face_events.identification.enable (identification events)
  • events.face_events.non_identification.enable (non-identification events)
  • events.face_events.face_hide_control.enable (face hide control events)
  • events.har_events.fall.enable (HAR events: falling)
  • events.har_events.fight.enable (HAR events: fights)
  • events.har_events.lie.enable (HAR events: lying)
  • events.har_events.sit.enable (HAR events: sitting)
  • events.region_events.line_crossing.enable (line crossing events)
  • events.region_events.roi_crossing.enable (ROI entry/exit events)
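Assuming the dotted parameter names map to nested objects in config/run_params.json (an assumption about the file layout, consistent with the mqtt_settings example earlier), a small helper can flip these flags programmatically:

```python
import json


def set_param(config, dotted_key, value):
    """Set a dotted parameter such as 'events.har_events.fall.enable',
    creating intermediate objects as needed."""
    node = config
    *parents, leaf = dotted_key.split(".")
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value


config = {}
set_param(config, "events.face_events.identification.enable", True)
set_param(config, "events.har_events.fall.enable", False)
# json.dumps(config) would then be written back to config/run_params.json
```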

Setting via web interface

Click the Settings icon in the OMNI Agent web interface and enable the types of events to be submitted to OMNI Platform.


Send events via webhooks

Setting via web interface

(Screenshot: webhook settings in the OMNI Agent web interface.)

Setting through configuration file

Open the file config/run_params.json and specify values for the webhook_tracking_subscribers parameter. This parameter is a list of objects, each describing a server address that receives processes or events from OMNI Agent:

{
  "type": "event", // type
  "url": "http://127.0.0.1:5000/trigger" // address with endpoint
}

Event cooldown (event resending delay)

For identification and non-identification events

When a person appears in the frame, an identification event (if the face is found in the database) or a non-identification event (if the face is not found in the database) is created. When the person's track is interrupted (the person exits the frame or goes behind an obstacle), a predefined time interval starts (default is 5 seconds). If the person returns to the frame within this time interval, OMNI Agent determines that the person's track continues, so there's no need to generate another event. However, if the person does not return to the frame within this time interval or returns later, OMNI Agent starts a new track and generates a new identification event for that person.

To adjust the delay, open the file config/run_params.json and modify the value in the field:

events.face_events.identification.same_human_identification_cooldown_interval: int, default value is 5000 (cooldown of identification/non-identification events in milliseconds).
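The track-continuation rule described above can be sketched as a pure function. The function name is ours, not the product's; timestamps are in milliseconds, matching the config field:

```python
def needs_new_event(track_end_ms, reappear_ms, cooldown_ms=5000):
    """Return True if a person's reappearance falls outside the cooldown
    window, meaning a new track (and hence a new identification or
    non-identification event) should be started."""
    return reappear_ms - track_end_ms > cooldown_ms
```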

For HAR events

In the case of falling, lying down, sitting, or fighting, OMNI Agent creates a HAR event. After this, a time interval starts (default is 5 seconds), during which new events for the same action are not generated. If the action (fighting, falling, lying down, sitting) is still ongoing after this interval, no additional event is created for it, which saves the system's computational resources.

To adjust the delay, open the file config/run_params.json and modify the values in the following fields:

  • Fall: events.har_events.fall.cooldown: int, default value is 0 (cooldown of fall events in milliseconds).
  • Fight: events.har_events.fight.cooldown: int, default value is 500 (cooldown of fight events in milliseconds).
  • Lie: events.har_events.lie.cooldown: int, default value is 5000 (cooldown of lie events in milliseconds).
  • Sit: events.har_events.sit.cooldown: int, default value is 5000 (cooldown of sit events in milliseconds).
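The per-action suppression can be sketched as a small filter class (the class name is ours; the default cooldowns are taken from the list above):

```python
class HarCooldownFilter:
    """Suppress repeated HAR events for the same action within a
    per-action cooldown window."""

    # Defaults from the OMNI Agent documentation, in milliseconds.
    DEFAULT_COOLDOWNS_MS = {"fall": 0, "fight": 500, "lie": 5000, "sit": 5000}

    def __init__(self, cooldowns=None):
        self.cooldowns = dict(self.DEFAULT_COOLDOWNS_MS, **(cooldowns or {}))
        self._last_sent = {}  # action -> timestamp of last emitted event

    def should_emit(self, action, now_ms):
        """Return True if an event for this action may be emitted now,
        recording the timestamp when it is."""
        last = self._last_sent.get(action)
        if last is not None and now_ms - last <= self.cooldowns.get(action, 0):
            return False
        self._last_sent[action] = now_ms
        return True
```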

Confident detection timeout

note

This parameter is configurable only for HAR events: falling, fighting, sitting, lying.

This event generation parameter specifies how long a process must last before an event can be sent for it. The confident detection timeout allows OMNI Agent to send only confident events, i.e., events generated by processes that persist continuously for at least the specified timeout.

For example, in a smart city/enterprise security scenario, the system needs to react only to people who have been sitting for a prolonged period (a person sitting by a door indicating a burglary threat, or sitting/lying near equipment indicating a threat of sabotage, intentional damage). A confident detection timeout of ~5 seconds helps to prevent sending events in cases where a person is just sitting briefly to tie shoelaces, pick up a fallen object, etc.
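The timeout check itself reduces to a single comparison; a sketch with an assumed function name (timestamps in milliseconds, ~5 seconds as in the example above):

```python
def is_confident(process_start_ms, now_ms, timeout_ms=5000):
    """Return True once a process has persisted continuously for at
    least the confident detection timeout."""
    return now_ms - process_start_ms >= timeout_ms
```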

The confident detection timeout can be configured when enabling the sending of HAR events in the Settings of the OMNI Agent web interface, where the default values are shown.