The event specification has been expanded for deeper integration with external systems: ROI (Region of Interest) and line crossing events now include the UUID, name, and points of the ROI/line.
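For integrators, the enriched payload can be sketched as follows; the field names and structure here are assumptions for illustration, not the exact OMNI Agent event schema.

```python
import json

# Illustrative line-crossing event; field names are assumptions, not the
# exact OMNI Agent schema.
raw = json.dumps({
    "type": "line_crossing",
    "line": {
        "uuid": "5e0c2a1f-9b7d-4c3e-8a21-6f0d4b9e7c55",
        "name": "entrance-line",
        "points": [[0.10, 0.50], [0.90, 0.55]],  # normalized frame coordinates
    },
    "track_id": 42,
})

def line_identity(event_json):
    """Extract the UUID, name, and points of the crossed line from an event."""
    line = json.loads(event_json)["line"]
    return line["uuid"], line["name"], line["points"]
```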
Introduced a confidence threshold for keypoint detection in skeleton tracking, allowing detection accuracy to be fine-tuned for specific business tasks.
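As a sketch of how such a threshold behaves (the data structure and default value are assumptions, not OMNI Agent's API):

```python
def filter_keypoints(keypoints, threshold=0.5):
    """Drop skeleton keypoints whose detection confidence is below threshold.

    `keypoints` maps joint name -> (x, y, confidence). Raising the threshold
    trades recall for precision, which is the tuning knob this release adds.
    """
    return {joint: (x, y, conf)
            for joint, (x, y, conf) in keypoints.items()
            if conf >= threshold}
```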
Fixed incorrect triggering of face hide control events when crossing a line in the opposite direction.
Now only finalized line crossings are processed.
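Direction-aware crossing detection can be sketched with a signed cross product; this illustrates the general technique, not OMNI Agent's internal code:

```python
def side(line_a, line_b, point):
    """Signed side of `point` relative to the directed line a -> b.

    The sign is the 2D cross product of (b - a) and (point - a); positive
    and negative values correspond to the two sides of the line.
    """
    (ax, ay), (bx, by) = line_a, line_b
    px, py = point
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def finalized_crossing(line_a, line_b, prev_pt, cur_pt):
    """True only when the track point ends up strictly on the other side."""
    return side(line_a, line_b, prev_pt) * side(line_a, line_b, cur_pt) < 0
```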
Video files can now be processed so that events are sent to the server with timestamps matching the video file's recording time rather than the current date.
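The timestamping rule amounts to offsetting each event from the recording start by the frame time; a minimal sketch (the function name and parameters are illustrative):

```python
from datetime import datetime, timedelta

def event_timestamp(recording_start, frame_index, fps):
    """Timestamp an event with the video's recording time, not the wall clock."""
    return recording_start + timedelta(seconds=frame_index / fps)
```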
Added support for using regions of interest and crossing lines when detecting faces and body parts. Face and body part tracking within a marked area uses the center of the detection, while body tracking uses the center of the lower boundary of the bbox.
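The distinction between the two anchor points can be sketched as follows (the bbox format is an assumption):

```python
def roi_anchor(bbox, detection_kind):
    """Point used to decide whether a detection is inside a marked area.

    bbox = (x1, y1, x2, y2). Faces and body parts use the bbox center;
    full-body detections use the center of the bbox's lower boundary.
    """
    x1, y1, x2, y2 = bbox
    cx = (x1 + x2) / 2
    if detection_kind == "body":
        return (cx, y2)             # center of the lower edge
    return (cx, (y1 + y2) / 2)      # center of the detection
```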
Added the ability to receive raw detection data from the face and body pipeline via the MQTT event queue.
Optimized profile reading: Fixed an issue that caused slow reading of profiles from the file system during operations with the profile database.
Efficient profile database update: When the profile database is updated in OMNI Platform, only the changes (a diff) are now synchronized to the OMNI Agent profile database, rather than the entire database.
Added facial concealment detection (BETA). Now, OMNI Agent can identify instances where individuals attempt to hide their faces (by turning away or using scarves, hats, glasses, or masks) while passing through controlled areas. The new detector enables swift responses to evasion attempts from facial recognition systems and aids in compiling statistics on such disciplinary breaches in workforce time management systems.
Added the ability to send events via webhooks directly from OMNI Agent. No longer limited to OMNI Platform database, this enhancement offers flexibility for seamless integration with client infrastructures.
Added the ability to enable/disable specific HAR events, such as falls, fights, sitting, and lying, and to tailor the confident-detection time for each event. Customize your tracking preferences and define the duration threshold for significant events to ensure only precise and relevant alerts for your operational needs.
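The confident-detection time amounts to requiring an action to persist before an event fires; a minimal sketch of the idea (class and method names are hypothetical):

```python
class ActionConfirmer:
    """Report a HAR action only after it has persisted for min_duration_s."""

    def __init__(self, min_duration_s):
        self.min_duration_s = min_duration_s
        self._started = {}  # (track_id, action) -> time the action first appeared

    def update(self, track_id, action, active, now):
        """Feed one observation per frame; returns True once the action is confident."""
        key = (track_id, action)
        if not active:
            self._started.pop(key, None)  # action interrupted: reset the timer
            return False
        started = self._started.setdefault(key, now)
        return now - started >= self.min_duration_s
```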
Removed the ability to enable event sending for disabled pipelines in OMNI Agent. For example, previously, when the HAR toggle was disabled in Settings, you could still enable the "Send HAR events" toggle. Now, this is no longer possible.
Fixed rare connection errors of OMNI Agent to OMNI Platform via secure connection (HTTPS). Very rarely, the Agent could not connect to the Platform because it could not find the Platform's certificate in its list of trusted certificates.
Fixed a bug where Hikvision emulation settings would reset upon reactivating OMNI Agent on OMNI Platform, unless OMNI Agent was restarted after modifying them.
Updated HAR model for enhanced human action recognition accuracy.
Added the ability to process video files directly from the web interface.
New option to save storage space in OMNI Platform by disabling the transmission of original frames in events.
Now object coordinates from the original frame are included in events, enabling seamless person movement tracking without transmitting original frames.
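Consumers can reproject the transmitted coordinates themselves; for example, assuming coordinates arrive normalized to [0, 1] (an assumption about the format), scaling to the original frame is:

```python
def to_original_frame(norm_bbox, frame_w, frame_h):
    """Scale a bbox normalized to [0, 1] into pixel coordinates of the frame."""
    x1, y1, x2, y2 = norm_bbox
    return (round(x1 * frame_w), round(y1 * frame_h),
            round(x2 * frame_w), round(y2 * frame_h))
```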
OMNI Agent can now send facial data in the Hikvision smart camera format, ensuring smooth integration of standard and "smart" cameras for streamlined image processing.
Optimized template generation setting via the web interface. You can now work with the initial biometric template (built from the first best shot of the face) or initiate a search for the best shots across the entire track to generate higher-quality templates.
HAR (Human Activity Recognition) and ROI (Region of Interest) detections are now transmitted to OMNI Platform as event-service events. This provides a unified integration approach for data transmission to external systems for both face identification/non-identification and action recognition.
Added an event cooldown that eliminates duplicate identification and human action recognition events caused by changes in perspective, head rotations, people crossing tracks, or brief obstructions.
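The cooldown idea can be sketched as suppressing repeat events for the same track and event type within a time window (names and the default window are illustrative, not OMNI Agent's actual implementation):

```python
class EventCooldown:
    """Suppress duplicate events for the same (track, event type) pair."""

    def __init__(self, cooldown_s=5.0):
        self.cooldown_s = cooldown_s
        self._last_emit = {}  # (track_id, event_type) -> last emission time

    def should_emit(self, track_id, event_type, now):
        key = (track_id, event_type)
        last = self._last_emit.get(key)
        if last is not None and now - last < self.cooldown_s:
            return False  # duplicate within the cooldown window
        self._last_emit[key] = now
        return True
```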
Added the ability to limit the camera's field of view through configuration files and the dashboard. This helps exclude duplicate detections/identifications from cameras whose fields of view intersect and overlap, as well as to eliminate false detections/identifications at the edges of the lens where optical distortion occurs.
Added full support for body tracking on ARM64 (Jetson), along with MQTT protocol support for output data and the ability to configure process transmission via MQTT. This makes it easier to integrate OMNI Agent with end devices in solutions where OMNI Platform server is not required.
It's now possible to change videoworker settings on OMNI Agent through configuration files to ensure optimal performance for the required quality of face detection and identification.
Previews are now only rendered when viewing a camera in the web configurator, saving processor resources.
Improved body tracking and HAR (Human Activity Recognition) detection quality.
The web configurator interface has been enhanced based on user feedback to expedite learning and simplify OMNI Agent setup. For example: enabling CUDA, configuring camera visibility zones, binding licenses for Standalone mode, setting face identification thresholds, and more.
Fixed incorrect naming of identification fields far and frr in identification events and activities.
Added human activity recognition (HAR): falls, fights, lying and sitting. Now you can build business solutions to identify situations in which people need help.
Added OMNI Agent installation without a GUI, which allows you to install it on server machines as well as remotely via SSH.
Added people tracking on a floor map. This feature allows you to build high-level business intelligence (heat maps of people flows, people coming close to potentially hazardous areas, or getting into advertising areas, etc.).
Added the ability to recognize and track people who enter regions of interest (ROI) or cross lines marked on the camera frame. Now you can organize perimeter security without fences or other physical barriers, monitor people approaching cliffs and bridge railings, react to people in certain camera areas for advertising purposes, etc.
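Deciding whether a tracked point lies inside an ROI is a standard point-in-polygon test; a ray-casting sketch of the general technique:

```python
def point_in_roi(point, polygon):
    """Even-odd (ray casting) point-in-polygon test.

    Casts a horizontal ray to the right of `point` and toggles the result
    each time the ray crosses a polygon edge.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```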
Added the ability to deploy OMNI Agent in an isolated environment as a standalone application without connection to OMNI Platform. This feature allows you to use the API in third-party business solutions in the silhouette and face detection mode (without identification using OMNI Platform), reducing the cost of maintaining the server part.
Added support for the ARM64 platform, including NVIDIA Jetson (for face detection and tracking), making it possible to run the application on a wider range of hardware.
The installer size has been halved, as support for the 11v1000 and earlier facial recognition methods has been discontinued. This means faster downloads and less disk space used after installation.
Added collection of "traces" (detailed logs) to analyze the results of video processing and identify problems.
The body detection pipeline now uses no more than 2 cores per thread instead of all available ones, so hardware resources are used more efficiently.
Fixed a rare overflow of the internal queue of OMNI Agent modules on slow machines, optimizing hardware resource usage and improving the stability of OMNI Agent.
Fixed flickering body detections in the preview, which could give the impression that the functionality was unstable.
OMNI Agent uptime is now shown in the web interface, so you can spot instability caused by external factors for which OMNI Agent cannot generate an error.
Events are now guaranteed to be delivered when OMNI Agent operates in networks with an unstable connection.
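At-least-once delivery over a flaky link typically means queueing events and retrying them in order; a minimal sketch of the pattern (not OMNI Agent's actual implementation):

```python
from collections import deque

def flush_events(pending, send):
    """Try to deliver queued events in order; stop at the first failure.

    `send` returns True on success. Undelivered events stay queued so a
    later flush can retry them, giving at-least-once delivery.
    """
    queue = deque(pending)
    delivered = []
    while queue:
        if not send(queue[0]):
            break  # connection lost: keep remaining events for retry
        delivered.append(queue.popleft())
    return delivered, list(queue)
```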