
Human pose estimation

Human pose estimation in Face SDK consists of two stages: body detection and pose estimation.

Body detection

Face SDK provides the following processing blocks for body detection:

  • HUMAN_BODY_DETECTOR (BodyDetector)
  • OBJECT_DETECTOR (ObjectDetector)

Modifications and versions

| Type | Modification | Version | Face SDK version | Detection time CPU (ms)*, 640x480 | Detection time CPU (ms)*, 1280x720 | Detection time CPU (ms)*, 1920x1080 |
|------|--------------|---------|------------------|-----------------------------------|------------------------------------|-------------------------------------|
| HUMAN_BODY_DETECTOR | ssyv | 1 | 3.19 | 238 | 236 | 237 |
| OBJECT_DETECTOR | ssyx | 1 | 3.19 | 2095 | 2031 | 2036 |
* - CPU Intel Xeon E5-2683 v4 (single-core)

Processing Block specification

  • The input Context must contain an image in binary format (a sketch of filling such a container is given after the specification below):
{
    "image" : {
        "format": "NDARRAY",
        "blob": "data pointer",
        "dtype": "uint8_t",
        "shape": [height, width, channels]
    }
}
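
Below is a minimal sketch of how such an image Context can be filled from an OpenCV image. The use of cv::imread and the Context methods setDataPtr and push_back are assumptions based on other Face SDK samples and are not confirmed by this page.

#include <opencv2/opencv.hpp>

// Minimal sketch, assuming `service` is an existing FacerecService pointer and
// that the Context methods setDataPtr() and push_back() are available,
// as in other Face SDK C++ samples.
cv::Mat image = cv::imread("person.jpg"); // 8-bit, 3-channel image (file name is an assumption)

auto imgCtx = service->createContext();
imgCtx["format"] = "NDARRAY";
imgCtx["blob"].setDataPtr(image.data, image.total() * image.elemSize());
imgCtx["dtype"] = "uint8_t";
imgCtx["shape"].push_back(static_cast<int64_t>(image.rows));        // height
imgCtx["shape"].push_back(static_cast<int64_t>(image.cols));        // width
imgCtx["shape"].push_back(static_cast<int64_t>(image.channels()));  // channels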

Example

Create a Processing Block

Create a detector processing block using the FacerecService.createProcessingBlock method, passing a Context container with the set parameters as an argument.

// configure and create the body detector processing block
auto detectorConfigCtx = service->createContext();
detectorConfigCtx["unit_type"] = "HUMAN_BODY_DETECTOR";
pbio::ProcessingBlock bodyDetector = service->createProcessingBlock(detectorConfigCtx);
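
If a specific modification and version from the table above are required, they can presumably be set in the same configuration Context; the exact key names "modification" and "version" are an assumption derived from the table headers, not confirmed by this page.

// Assumption: modification/version are selected via these configuration keys.
detectorConfigCtx["modification"] = "ssyv";
detectorConfigCtx["version"] = static_cast<int64_t>(1);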

Detector inference

Feed a Context container with the binary image to the detector processing block:

ioData["image"] = imgCtx;
bodyDetector(ioData);

The result of body detection is stored in the passed Context container according to the specification of the processing block.
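
As an illustration, the detected bodies can then be read back from the same container. The sketch below assumes the output "objects" array has the structure shown in the pose estimation specification further down, and that the Context accessors size(), getString() and getDouble(), as well as indexed array access, behave as in other Face SDK samples.

#include <iostream>

// Minimal sketch: iterate over the detected objects and print their data.
// Assumption: bbox holds [x1, y1, x2, y2] coordinates as in the specification below.
for (size_t i = 0; i < ioData["objects"].size(); ++i)
{
    auto obj = ioData["objects"][i];
    if (obj["class"].getString() != "body")
        continue;

    std::cout << "body " << i
              << ": confidence = " << obj["confidence"].getDouble()
              << ", bbox = ["
              << obj["bbox"][0].getDouble() << ", "
              << obj["bbox"][1].getDouble() << ", "
              << obj["bbox"][2].getDouble() << ", "
              << obj["bbox"][3].getDouble() << "]\n";
}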

Human pose estimation

The HUMAN_POSE_ESTIMATOR Processing Block is used to estimate the human pose.

Modifications and versions

| Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** |
|--------------|---------|------------------|--------------------------|---------------------------|
| heavy | 1 | 3.16 | 193 | 6 |
* - CPU Intel Xeon E5-2683 v4 (single-core)
** - GPU (NVIDIA GTX 10xx series)

Specification of human pose estimation processing block

  • The input Context container must contain an image in binary format and an array of objects received from the detector:
{
    "image" : {
        "format": "NDARRAY",
        "blob": "data pointer",
        "dtype": "uint8_t",
        "shape": [height, width, channels]
    },
    "objects": [{
        "id": {"type": "long", "minimum": 0},
        "class": "body",
        "confidence": {"type": "double", "minimum": 0, "maximum": 1},
        "bbox": [x1, y1, x2, y2]
    }]
}

Example

  1. Create a pose estimation processing block object using the FacerecService.createProcessingBlock method, passing a Context container with set parameters as an argument.
// configure and create the pose estimation processing block
auto configCtx = service->createContext();
configCtx["unit_type"] = "HUMAN_POSE_ESTIMATOR";
pbio::ProcessingBlock humanPoseEstimator = service->createProcessingBlock(configCtx);
  2. Perform human detection with BodyDetector or ObjectDetector as described in Start Detection.

  3. Pass the resulting Context container to the humanPoseEstimator processing block:

humanPoseEstimator(ioData);
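
For reference, the steps from this page can be chained into one sequence. The sketch below only reuses calls already shown above; imgCtx is assumed to be the image Context described in the body detection section.

// Create the detector and the pose estimator (calls shown above).
auto detectorConfigCtx = service->createContext();
detectorConfigCtx["unit_type"] = "HUMAN_BODY_DETECTOR";
pbio::ProcessingBlock bodyDetector = service->createProcessingBlock(detectorConfigCtx);

auto estimatorConfigCtx = service->createContext();
estimatorConfigCtx["unit_type"] = "HUMAN_POSE_ESTIMATOR";
pbio::ProcessingBlock humanPoseEstimator = service->createProcessingBlock(estimatorConfigCtx);

// Run detection, then estimate the pose for the detected bodies.
auto ioData = service->createContext();
ioData["image"] = imgCtx; // imgCtx: image Context (assumed filled as sketched above)
bodyDetector(ioData);
humanPoseEstimator(ioData);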