Version: 3.19.0

Human Pose Estimation

In this section you will learn how to integrate the Human Pose Estimator into your C++ or Python project.

Human Pose Estimation (C++/Python)

Requirements

  • Windows x86 64-bit or Linux x86 64-bit system.
  • Installed Face SDK package windows_x86_64 or linux_x86_64 (see Getting Started).

1. Creating a Human Pose Estimator

1.1 To create a Human Pose Estimator, follow steps 1-3 described in Creating a Processing Block and specify the following values:

  • "HUMAN_POSE_ESTIMATOR" for the "unit_type" key;
  • the path to the Human Pose Estimator model file for the "model_path" key;
  • the path to the file that describes the skeleton structure for the "label_map" key.
configCtx["unit_type"] = "HUMAN_POSE_ESTIMATOR";

// default path to the Human Pose Estimator model file - "share/humanpose/hpe-td.enc" in the Face SDK's root directory
configCtx["model_path"] = "share/humanpose/hpe-td.enc";
// auxiliary file describing the structure of the skeleton
configCtx["label_map"] = "share/humanpose/label_map_keypoints.txt";

1.2 Create the Human Pose Estimator processing block:

pbio::ProcessingBlock humanPoseEstimator = service->createProcessingBlock(configCtx);
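
For reference, the snippets above are consolidated into a minimal end-to-end sketch of step 1 below. The service and configuration Context are created as described in Creating a Processing Block; the createService arguments and library paths shown here are placeholders and may differ for your platform and installation, so treat them as assumptions.

#include <facerec/libfacerec.h>

int main()
{
    // Placeholder paths: adjust to your Face SDK installation (see Getting Started)
    const pbio::FacerecService::Ptr service = pbio::FacerecService::createService(
        "../lib/libfacerec.so",   // Face SDK library
        "../conf/facerec");       // configuration directory

    // Steps 1-3 from Creating a Processing Block: build the configuration Context
    auto configCtx = service->createContext();
    configCtx["unit_type"] = "HUMAN_POSE_ESTIMATOR";
    configCtx["model_path"] = "share/humanpose/hpe-td.enc";
    configCtx["label_map"] = "share/humanpose/label_map_keypoints.txt";

    // Create the Human Pose Estimator processing block
    pbio::ProcessingBlock humanPoseEstimator = service->createProcessingBlock(configCtx);

    return 0;
}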

2. Human Pose Estimation

2.1 Perform human detection with BodyDetector or ObjectDetector as described in Body Detection.

2.2 Pass the resulting Context container to the humanPoseEstimator call:

humanPoseEstimator(ioData);
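
Putting steps 2.1 and 2.2 together, a sketch of the pipeline might look as follows. The body detector unit type ("HUMAN_BODY_DETECTOR") and the way the input image is placed into ioData are assumptions here; follow Body Detection for the exact calls used by your SDK version.

// Sketch of step 2 as a whole. Assumptions: bodyDetector is a ProcessingBlock
// created like humanPoseEstimator but with "unit_type" = "HUMAN_BODY_DETECTOR";
// ioData["image"] must be filled with the input frame as shown in Body Detection.
auto ioData = service->createContext();
// ... put the input image into ioData["image"] as described in Body Detection ...

bodyDetector(ioData);        // 2.1: adds the "objects" list with detected bodies
humanPoseEstimator(ioData);  // 2.2: appends pose keypoints to each detected body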

The result of calling humanPoseEstimator() is appended to the ioData container. The output is a list of objects stored under the "objects" key. Each object has a "class" key with the value "body". The "keypoints" key contains a list of skeleton keypoints; each keypoint contains "proj" (the relative coordinates of the point) and a "confidence" value in the range [0,1]. The order of the points corresponds to the description in "label_map_keypoints.txt".

/*
{
    "objects": [{
        "id": {"type": "long", "minimum": 0},
        "class": "body",
        "confidence": {"type": "double", "minimum": 0, "maximum": 1},
        "bbox": [x1, y1, x2, y2],
        "keypoints": [
            {"proj": [x_proj, y_proj], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
            ...
        ]
    }]
}
*/
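
To read the results back from ioData, you can iterate the Context as sketched below. The getString()/getDouble() accessors follow the Context API introduced in Creating a Processing Block; verify the exact method names against your SDK version.

// Sketch: iterate the detected bodies and read their keypoints
auto objects = ioData["objects"];
for (size_t i = 0; i < objects.size(); ++i)
{
    auto obj = objects[i];
    if (obj["class"].getString() != "body")
        continue;

    auto keypoints = obj["keypoints"];
    for (size_t k = 0; k < keypoints.size(); ++k)
    {
        // relative coordinates of the keypoint and its confidence in [0,1]
        const double x = keypoints[k]["proj"][0].getDouble();
        const double y = keypoints[k]["proj"][1].getDouble();
        const double confidence = keypoints[k]["confidence"].getDouble();
    }
}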

3. GPU Acceleration

The Human Pose Estimator can be used with GPU acceleration (CUDA). For more information, see the GPU acceleration documentation.