Scenario settings
What is a video analytics scenario
A video analytics scenario is a preconfigured setup that can be quickly applied to a camera or video file. Each scenario already includes everything required to implement standard computer vision functionality:
Objects to detect in the frame (for example, faces, bodies, or mobile phones)
Actions to perform on those objects (tracking, identification, behavioral analytics)
Computer vision models and algorithms responsible for executing these tasks
Instead of manually connecting individual modules and setting recognition parameters, you simply select a video analytics scenario from the list, and OMNI Agent automatically applies all necessary settings and starts analyzing video according to the selected scenario — whether it’s face recognition, body tracking, fight detection, or other tasks.
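Conceptually, a scenario bundles the object classes, actions, and models into a single preset. A minimal sketch in Python (the field names below are illustrative and do not reflect OMNI Agent's actual configuration schema):

```python
# Hypothetical sketch of what a video analytics scenario bundles together.
# Field names are illustrative, not OMNI Agent's real configuration schema.
face_recognition_scenario = {
    "name": "face_recognition",
    "objects": ["face"],                         # classes to detect in the frame
    "actions": ["tracking", "identification"],   # what to do with detected objects
    "models": ["face_detector", "face_identifier"],  # models executing the tasks
}

def describe(scenario: dict) -> str:
    """Summarize a scenario roughly the way the UI might present it."""
    return (f"{scenario['name']}: detects {', '.join(scenario['objects'])}; "
            f"performs {', '.join(scenario['actions'])}")
```

Selecting a scenario in the UI corresponds to applying such a preset to a camera in one step instead of wiring each module by hand.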
Advantages
Video analytics scenarios allow you to build a flexible, multifunctional monitoring system. Each camera can be configured with its own scenario or a combination of scenarios — for example, one camera detects faces and bodies, another monitors suspicious behavior, and a third tracks mobile phone appearances.
How to add a video analytics scenario
You can connect and configure a scenario for each camera through the OMNI Agent web interface:
1. Hover over the camera row on the left side of the screen and click the ⚙️ icon.
2. In the opened window, click + Add a scenario and select the desired scenario from the dropdown list.
3. Click Save at the bottom of the page to apply the changes.
Video analytics scenario settings
To modify scenario parameters, click the ✏️ icon next to the scenario in the list.
Settings are grouped into four categories:
Processing Configuration — performance and accuracy parameters of the scenario
Output Data — data transfer settings and enabling anonymous mode
Regions of Interest (ROI) — adding ROIs and crossing lines
Events — configuring data transmission to OMNI Platform and external systems
Processing configuration
This group includes settings for performance and detection sensitivity. The available options depend on the selected scenario:
Hardware Acceleration: Enable GPU acceleration using CUDA or TensorRT for faster processing.
Detection and Tracking Confidence Thresholds: Set confidence levels for object detection and tracking. For example, perimeter control scenarios have separate thresholds for detection, tracking start, and tracking continuation.
Object Size Filtering: Specify minimum object sizes to include in analysis. For instance, in unauthorized face detection, you can set a minimum face size for optimal frame capture.
See the ready-made scenarios for detailed processing configurations.
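To illustrate how confidence thresholds and minimum object sizes typically interact, here is a sketch of a filtering step (the values and detection structure are examples, not OMNI Agent defaults):

```python
# Illustrative filtering step: keep detections above a confidence threshold
# and above a minimum pixel size. Values are examples, not OMNI Agent defaults.
def filter_detections(detections, min_confidence=0.5, min_size_px=40):
    kept = []
    for det in detections:
        x1, y1, x2, y2 = det["bbox"]
        width, height = x2 - x1, y2 - y1
        if det["confidence"] >= min_confidence and min(width, height) >= min_size_px:
            kept.append(det)
    return kept

detections = [
    {"bbox": (10, 10, 60, 70), "confidence": 0.9},      # 50x60 px, confident -> kept
    {"bbox": (0, 0, 30, 30), "confidence": 0.95},       # 30x30 px, too small -> dropped
    {"bbox": (100, 100, 200, 200), "confidence": 0.3},  # low confidence -> dropped
]
```

Raising the threshold reduces false positives at the cost of missing low-confidence objects; the minimum size filter discards objects too small to process reliably (such as faces too small for optimal frame capture).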
Output data settings
A key option is Anonymous Mode: when enabled, object images (e.g., faces or bodies) are not transmitted outside OMNI Agent, ensuring compliance with data privacy requirements.
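The effect of anonymous mode can be pictured as stripping image payloads from outgoing events before transmission. A sketch (the event field names here are hypothetical):

```python
# Sketch of the idea behind anonymous mode: drop object images from an event
# before it leaves the agent. Field names are hypothetical.
IMAGE_FIELDS = {"face_image", "body_image", "crop"}

def anonymize(event: dict) -> dict:
    """Return a copy of the event with all image payloads removed."""
    return {k: v for k, v in event.items() if k not in IMAGE_FIELDS}

event = {"type": "face_detected", "track_id": 7, "face_image": b"...jpeg bytes..."}
```

Metadata such as the event type and track identifier is still transmitted; only the visual material is withheld.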
Regions of interest (ROI)
A Region of Interest (ROI) is an area marked over the video frame where object presence triggers events. OMNI Agent detects and tracks objects of specified classes in the ROI. Detection and tracking results are sent to OMNI Platform or an external service in the form of processes and events.
Multiple object classes can be tracked in the same ROI. ROIs are drawn by marking at least three points on the frame. The marked area is shown in white; when at least one object enters, it changes to turquoise.
Object presence in an ROI is determined by the coordinates of a single point derived from the object’s bounding box (bbox).
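For illustration, assume that anchor point is the bottom-center of the bbox (the documentation does not specify which point OMNI Agent derives); a standard ray-casting point-in-polygon test then decides ROI membership:

```python
# Illustrative ROI membership check. The anchor point is assumed here to be
# the bottom-center of the bounding box; the point OMNI Agent actually
# derives may differ.
def anchor_point(bbox):
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2, y2)  # bottom-center of the bbox

def point_in_polygon(point, polygon):
    """Ray-casting test; polygon is a list of (x, y) vertices (3 or more points)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

roi = [(0, 0), (100, 0), (100, 100), (0, 100)]  # square ROI (at least 3 points)
```

This also explains why at least three points are required to define an ROI: fewer points cannot enclose an area.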
Adding ROIs
1. In the scenario settings, go to ROI/Line Settings and click Add ROI.
2. Enter the ROI name, select the Region type, choose the object classes to track, draw the area on the camera preview, and click Save.
The ROI will now appear on the camera preview.
Adding Crossing Lines
In addition to ROIs, you can add crossing lines over the frame. When an object fully crosses the line, OMNI Agent sends detection and tracking results to OMNI Platform or an external service.
Lines are light red by default and turn red when crossed by an object.
To add a line: go to ROI/Line Settings, click Add ROI, select the Line type, choose the object classes to track, mark two points on the frame, set the thickness of the virtual crossing zone around the line, and click Save.
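A full crossing can be detected by checking which side of the line the object's tracked point lies on in consecutive frames: a sign change of the cross product indicates a crossing. A minimal sketch, with the zone thickness modeled as a dead band around the line (a simplification of whatever OMNI Agent does internally):

```python
# Minimal line-crossing sketch: a crossing is registered when the tracked
# point switches sides of the line between consecutive frames. The zone
# thickness is modeled as a dead band in which no side is assigned.
def side_of_line(p, a, b, thickness=0.0):
    """Return 1 or -1 for the two sides of line a-b, or 0 inside the zone."""
    ax, ay = a; bx, by = b; px, py = p
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    length = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    distance = cross / length          # signed distance from the line
    if abs(distance) <= thickness / 2:
        return 0                        # still inside the crossing zone
    return 1 if distance > 0 else -1

def crossed(prev_point, curr_point, a, b, thickness=0.0):
    s1 = side_of_line(prev_point, a, b, thickness)
    s2 = side_of_line(curr_point, a, b, thickness)
    return s1 != 0 and s2 != 0 and s1 != s2
```

With a larger zone thickness, an object hovering near the line stays "inside the zone" and does not trigger a crossing until it fully clears the band, which suppresses jitter from small tracking fluctuations.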
Event settings
An event represents the outcome of a scenario, such as an unauthorized face being detected, a body entering a controlled zone, or a behavioral incident being recognized.
Event settings common to all scenarios include:
Enable Line Crossing Event Reporting
Repeat Event Skip Interval (ms): Default is 0. Repeat crossing events for a given line will not be sent within the specified interval.
Minimum Relative Object Width: Default is 0. The minimum width of a tracked object relative to the full frame for line crossing events.
Maximum Relative Object Width: Default is 1. The maximum width of a tracked object relative to the full frame for line crossing events.
Minimum Relative Object Height: Default is 0. The minimum height of a tracked object relative to the full frame for line crossing events.
Maximum Relative Object Height: Default is 1. The maximum height of a tracked object relative to the full frame for line crossing events.
Enable ROI Event Reporting
Entry Confirmation Time (ms): Default is 1500. Objects staying in the ROI for less than this duration will be ignored (ROI entry events will not be triggered).
Repeat Event Skip Interval (ms): Default is 0. Repeat ROI events for a given area will not be sent within the specified interval.
Minimum Relative Object Width: Default is 0. The minimum width of a tracked object relative to the full frame for ROI events.
Maximum Relative Object Width: Default is 1. The maximum width of a tracked object relative to the full frame for ROI events.
Minimum Relative Object Height: Default is 0. The minimum height of a tracked object relative to the full frame for ROI events.
Maximum Relative Object Height: Default is 1. The maximum height of a tracked object relative to the full frame for ROI events.
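The ROI event filters above can be sketched as a single predicate. The defaults mirror the documented values; the parameters an object brings to the check (its pixel size and dwell time) are illustrative:

```python
# Sketch of the ROI event filters described above. Defaults mirror the
# documented values (1500 ms confirmation, relative size bounds 0..1);
# the object/frame measurements passed in are illustrative.
def should_emit_roi_event(obj_width, obj_height, frame_width, frame_height,
                          dwell_ms,
                          entry_confirmation_ms=1500,
                          min_rel_width=0.0, max_rel_width=1.0,
                          min_rel_height=0.0, max_rel_height=1.0):
    rel_w = obj_width / frame_width     # object width relative to full frame
    rel_h = obj_height / frame_height   # object height relative to full frame
    if dwell_ms < entry_confirmation_ms:
        return False                    # object left the ROI too quickly
    if not (min_rel_width <= rel_w <= max_rel_width):
        return False
    if not (min_rel_height <= rel_h <= max_rel_height):
        return False
    return True
```

With the defaults, the relative size bounds of 0 to 1 accept objects of any size, so only the 1500 ms entry confirmation filters anything; tightening the bounds excludes objects that are implausibly small or large for the tracked class.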