Version: 2.4.1

Output Data

OMNI Agent detects faces and silhouettes, identifies people whose faces are in the database, and tracks their positions in video streams from cameras and in video files. OMNI Agent receives video streams (via the RTSP protocol) and video files, processes them, and converts the recognition and tracking results into two output formats: processes and events.

Output Data Formats

Processes

A data format that represents recognition and tracking results over time intervals.

Each process contains the following set of data:
  • Process ID;
  • Process time interval (start and end of the process);
  • Process type (track, human, face, body, emotion, etc.);
  • Detection object with face attributes (the object is a person in the camera's field of view);
  • Best shot.

Processes are sent to OMNI Platform via the HTTP protocol or to an external service via a webhook.
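As a sketch, a process record carrying the fields listed above might be modeled like this in Python. The key names (`time_interval`, `best_shot`, etc.) are illustrative assumptions, not the exact schema emitted by OMNI Agent:

```python
from datetime import datetime

# Hypothetical process payload; the field set follows the list above,
# but the actual key names in OMNI Agent output may differ.
process = {
    "id": "4f6c9c2e-0001",
    "type": "face",
    "time_interval": {
        "begin": "2024-05-01T12:00:00+00:00",
        "end": "2024-05-01T12:00:07+00:00",
    },
    "object": {"class": "human", "face_attributes": {"emotion": "neutral"}},
    "best_shot": "<base64-encoded image>",
}

def process_duration_seconds(p: dict) -> float:
    """Return the length of the process time interval in seconds."""
    begin = datetime.fromisoformat(p["time_interval"]["begin"])
    end = datetime.fromisoformat(p["time_interval"]["end"])
    return (end - begin).total_seconds()
```

A consumer could use such a helper, for example, to filter out very short tracks before storing them.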

Events

A data format for representing identification results. Events are generated by OMNI Agent and transferred to OMNI Platform, which forwards them to a third-party service via a WebSocket.

Depending on the identification result, OMNI Agent sends two types of events: identification events (the person in the frame was identified) and non-identification events (the person in the frame was not identified).

Structure of identification event:

  • type - identification
  • date - event time
  • id - event identifier
  • object (the object that generated the event):
    • class - object class (for example, human)
    • id - object identifier
  • parents (event parent, usually a process):
    • id - event parent identifier
    • type - type of the event parent (for example, human, face)
  • source - video stream identifier
  • trigger_source:
    • id
  • image - cropped image
  • original_image - full frame
  • identification_data - array of identification candidates (profiles):
    • profile_id - profile identifier
    • group_ids - identifiers of the groups to which the profile was added
    • score - degree of similarity between the face in the frame and the face in the profile, from 0 (0%) to 1 (100%)
    • far - False Acceptance Rate: the probability that the system mistakes images of different people for images of the same person
    • frr - False Rejection Rate: the probability that the system mistakes two images of the same person for images of different people
    • distance - distance between the compared template vectors; the smaller the value, the higher the confidence in correct recognition.
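For illustration, a consumer might select the best match from `identification_data` like this. The `min_score` threshold is an assumed application-level choice, not a value prescribed by OMNI Agent:

```python
def best_candidate(identification_data: list, min_score: float = 0.85):
    """Return the candidate profile with the highest similarity score,
    or None if no candidate reaches the (assumed) min_score threshold."""
    accepted = [c for c in identification_data if c["score"] >= min_score]
    if not accepted:
        return None
    return max(accepted, key=lambda c: c["score"])

# Hypothetical candidate list shaped like the fields described above.
candidates = [
    {"profile_id": "p1", "group_ids": ["g1"], "score": 0.91, "distance": 120.0},
    {"profile_id": "p2", "group_ids": ["g1"], "score": 0.78, "distance": 260.0},
]
```

Raising `min_score` trades a higher false rejection rate (frr) for a lower false acceptance rate (far), so the threshold should be tuned to the application's tolerance for each kind of error.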

Structure of non-identification event:

  • type - non_identification
  • date - event time
  • id - event identifier
  • object (the object that generated the event):
    • class - object class (for example, human)
    • id - object identifier
  • parents (event parent, usually a process):
    • id - event parent identifier
    • type - type of the event parent (for example, human, face)
  • source - video stream identifier
  • trigger_source:
    • id
  • image - cropped image
  • original_image - full frame
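A minimal sketch of a consumer that dispatches on the `type` field of the two event shapes above. It assumes, for illustration only, that `identification_data` is ordered best match first:

```python
def handle_event(event: dict) -> str:
    """Route an event by its type field and return a short status string.

    A real handler would store the event, update a UI, raise an alert, etc.
    """
    if event["type"] == "identification":
        # Assumption: the first candidate is the best match.
        top = event["identification_data"][0]
        return f"identified profile {top['profile_id']}"
    if event["type"] == "non_identification":
        return f"unknown person in stream {event['source']}"
    raise ValueError(f"unexpected event type: {event['type']}")
```

Because both event types share the `object`, `parents`, `source`, and image fields, any common bookkeeping can run before the branch on `type`.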

Output Data Transfer


Online Mode

Processes and events are transferred to OMNI Platform via the HTTP protocol in real time. OMNI Agent's face database is synchronized with the OMNI Platform face database approximately once per minute.

Standalone Mode

Processes are transferred to third-party services via webhooks. To set up webhooks, see the Webhooks section.
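As a minimal sketch of a receiving endpoint for standalone mode, the handler below accepts JSON process payloads over HTTP POST. The field names and the port are assumptions; the actual webhook URL and payload schema are configured as described in the Webhooks section:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_process(body: bytes) -> dict:
    """Decode a JSON process payload and sanity-check an assumed 'id' field."""
    payload = json.loads(body)
    if "id" not in payload:
        raise ValueError("process payload missing 'id'")
    return payload

class ProcessWebhookHandler(BaseHTTPRequestHandler):
    """Accept process payloads POSTed by OMNI Agent."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = parse_process(self.rfile.read(length))
        print(f"received process {payload['id']}")
        self.send_response(200)
        self.end_headers()

def run(port: int = 8080):
    """Listen forever; the port must match the webhook URL configured in OMNI Agent."""
    HTTPServer(("0.0.0.0", port), ProcessWebhookHandler).serve_forever()
```

Any HTTP server that returns a 2xx status on receipt would work equally well; the standard-library server is used here only to keep the sketch dependency-free.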