Human pose estimation
Human pose estimation in Face SDK consists of two stages: body detection and pose estimation.
Body detection
Face SDK provides the following processing blocks for body detection:
- HUMAN_BODY_DETECTOR (BodyDetector)
- OBJECT_DETECTOR (ObjectDetector)
Modifications and versions
Type | Modification | Version | Face SDK version | Detection time CPU (ms)*, 640x480 | Detection time CPU (ms)*, 1280x720 | Detection time CPU (ms)*, 1920x1080
---|---|---|---|---|---|---
HUMAN_BODY_DETECTOR | ssyv | 1 | 3.19 | 238 | 236 | 237
OBJECT_DETECTOR | ssyx | 1 | 3.19 | 2095 | 2031 | 2036
Processing Block specification
Input
The input Context must contain an image in binary format (a sketch of filling this container appears after the specification):
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
}
}
Output
Once the processing block has run, an array of objects is added to the Context; each object contains the coordinates of its bounding rectangle, the detection confidence, the detected class, and its identifier within the array:
{
"image" : {},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "class",
"confidence": {"double", "minimum": 0, "maximum": 1},
"bbox": [x1, y2, x2, y2]
}]
}
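The examples below assume an image Context (imgCtx) already exists. Here is a minimal C++ sketch of filling one by hand from an OpenCV image according to the input specification above; the setter helpers (setDataPtr, push_back) are assumptions based on Face SDK sample conventions and may differ between SDK versions, and your version may also provide a factory method that builds this container directly.
#include <opencv2/opencv.hpp>

// Hedged sketch: fill an "image" Context per the input specification.
cv::Mat frame = cv::imread("input.jpg");                 // 8-bit, HWC layout
auto ioData = service->createContext();
auto imgCtx = ioData["image"];
imgCtx["format"] = "NDARRAY";
imgCtx["dtype"] = "uint8_t";
// setDataPtr is an assumed helper; it stores the raw pixel blob.
imgCtx["blob"].setDataPtr(frame.data, frame.total() * frame.elemSize());
// Shape order is [height, width, channels], as in the specification.
imgCtx["shape"].push_back((int64_t)frame.rows);
imgCtx["shape"].push_back((int64_t)frame.cols);
imgCtx["shape"].push_back((int64_t)frame.channels());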
Example
Create a Processing Block
Create a body detector processing block using the FacerecService.createProcessingBlock method, passing a Context container with the required parameters as an argument.
- C++
auto detectorConfigCtx = service->createContext();
detectorConfigCtx["unit_type"] = "HUMAN_BODY_DETECTOR";
pbio::ProcessingBlock bodyDetector = service->createProcessingBlock(detectorConfigCtx);
- Python
detectorConfigCtx = {"unit_type": "HUMAN_BODY_DETECTOR"}
bodyDetector = service.create_processing_block(detectorConfigCtx)
- Flutter
Map<String, dynamic> configCtx = {"unit_type": "HUMAN_BODY_DETECTOR"};
ProcessingBlock bodyDetector = service.createProcessingBlock(configCtx);
- C#
Dictionary<object, object> configCtx = new();
configCtx["unit_type"] = "HUMAN_BODY_DETECTOR";
ProcessingBlock bodyDetector = service.CreateProcessingBlock(configCtx);
- Java
Context detectorConfigCtx = service.createContext();
detectorConfigCtx.get("unit_type").setString("HUMAN_BODY_DETECTOR");
ProcessingBlock bodyDetector = service.createProcessingBlock(detectorConfigCtx);
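The "Modifications and versions" table above lists the available detector modifications. A specific modification or version can presumably be selected through the same configuration Context; the key names in this C++ sketch follow the common Face SDK configuration pattern and should be treated as assumptions:
auto detectorConfigCtx = service->createContext();
detectorConfigCtx["unit_type"] = "OBJECT_DETECTOR";
detectorConfigCtx["modification"] = "ssyx"; // assumed key; value from the table above
detectorConfigCtx["version"] = (int64_t)1;  // assumed key; value from the table above
pbio::ProcessingBlock objectDetector = service->createProcessingBlock(detectorConfigCtx);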
Detector inference
Pass a Context container holding the binary image to the detector processing block:
- C++
ioData["image"] = imgCtx;
bodyDetector(ioData);
- Python
ioData["image"] = imageCtx
bodyDetector(ioData)
- Flutter
ioData["image"].placeValues(imageContext);
bodyDetector.process(ioData);
- C#
ioData["image"] = imgCtx;
bodyDetector.Invoke(ioData);
- Java
ioData.get("image").setContext(imgCtx);
bodyDetector.process(ioData);
The body detection result is stored in the passed Context container according to the processing block specification.
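For illustration, a hedged C++ sketch of reading the detections per the output specification; the getter names (getString, getDouble) and the size()/indexing access pattern are assumptions based on Face SDK sample conventions, and the bbox coordinates are assumed to be relative, so scale them by the frame size for pixels:
#include <iostream>
#include <string>

auto objects = ioData["objects"];
for (size_t i = 0; i < objects.size(); ++i)
{
    auto obj = objects[i];
    const std::string cls = obj["class"].getString();     // e.g. "body"
    const double confidence = obj["confidence"].getDouble();
    // bbox holds [x1, y1, x2, y2]; assumed relative coordinates.
    const double x1 = obj["bbox"][0].getDouble();
    const double y1 = obj["bbox"][1].getDouble();
    const double x2 = obj["bbox"][2].getDouble();
    const double y2 = obj["bbox"][3].getDouble();
    std::cout << cls << " " << confidence << " ["
              << x1 << ", " << y1 << ", " << x2 << ", " << y2 << "]\n";
}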
Human pose estimation
The HUMAN_POSE_ESTIMATOR processing block is used to estimate the human pose.
Modifications and versions
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** |
---|---|---|---|---|
heavy | 1 | 3.16 | 193 | 6 |
** - GPU (NVIDIA GTX 10xx series)
Specification of human pose estimation processing block
Input
The input Context container must contain an image in binary format and the array of objects obtained from the detector:
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "body",
"confidence": {"double", "minimum": 0, "maximum": 1},
"bbox": [x1, y2, x2, y2]
}]
}
Output
Once the processing block has run, each object is given a keypoints key containing the set of named keypoints. Each keypoint holds a proj value with the relative coordinates of the point and a confidence value in the range [0,1] (see the sketch after the specification):
{
"keypoints": {
"nose": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_eye": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_eye": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_ear": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_ear": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_shoulder": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_shoulder": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_elbow": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_elbow": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_wrist": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_wrist": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_hip": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_hip": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_knee": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_knee": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"left_ankle": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
"right_ankle": {"proj" : [x, y], "confidence": {"type": "double", "minimum": 0, "maximum": 1}},
}
}
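A hedged C++ sketch of reading one keypoint from the first detected body: proj is stated above to hold relative coordinates, so they are scaled by the frame size here. The getter names are assumptions based on Face SDK sample conventions, and frameWidth, frameHeight, and the 0.5 threshold are illustrative values, not SDK requirements.
#include <iostream>

const double frameWidth = 1280, frameHeight = 720; // your frame size
auto kp = ioData["objects"][0]["keypoints"]["left_wrist"];
const double conf = kp["confidence"].getDouble();
if (conf > 0.5) // illustrative confidence threshold
{
    const double px = kp["proj"][0].getDouble() * frameWidth;  // x in pixels
    const double py = kp["proj"][1].getDouble() * frameHeight; // y in pixels
    std::cout << "left_wrist at (" << px << ", " << py << ")\n";
}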
Example
- Create a pose estimation processing block object using the FacerecService.createProcessingBlock method, passing a Context container with set parameters as an argument.
- C++
auto configCtx = service->createContext();
configCtx["unit_type"] = "HUMAN_POSE_ESTIMATOR";
pbio::ProcessingBlock humanPoseEstimator = service->createProcessingBlock(configCtx);
- Python
configCtx = {"unit_type": "HUMAN_POSE_ESTIMATOR"}
humanPoseEstimator = service.create_processing_block(configCtx)
- Flutter
ProcessingBlock humanPoseEstimator = service.createProcessingBlock({"unit_type": "HUMAN_POSE_ESTIMATOR"});
- C#
Dictionary<object, object> configCtx = new();
configCtx["unit_type"] = "HUMAN_POSE_ESTIMATOR";
ProcessingBlock humanPoseEstimator = service.CreateProcessingBlock(configCtx);
- Java
Context configCtx = service.createContext();
configCtx.get("unit_type").setString("HUMAN_POSE_ESTIMATOR");
ProcessingBlock humanPoseEstimator = service.createProcessingBlock(configCtx);
- Perform body detection with BodyDetector or ObjectDetector as described in Start Detection.
- Pass the resulting Context container to the humanPoseEstimator() method:
- C++
humanPoseEstimator(ioData);
- Python
humanPoseEstimator(ioData)
- Flutter
humanPoseEstimator.process(ioData);
- C#
humanPoseEstimator.Invoke(ioData);
- Java
humanPoseEstimator.process(ioData);
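Putting the pieces together, an end-to-end C++ sketch of the full pipeline; it uses only the calls shown above, and imgCtx is assumed to be an image Context filled according to the input specification:
auto detectorCfg = service->createContext();
detectorCfg["unit_type"] = "HUMAN_BODY_DETECTOR";
pbio::ProcessingBlock bodyDetector = service->createProcessingBlock(detectorCfg);

auto estimatorCfg = service->createContext();
estimatorCfg["unit_type"] = "HUMAN_POSE_ESTIMATOR";
pbio::ProcessingBlock humanPoseEstimator = service->createProcessingBlock(estimatorCfg);

auto ioData = service->createContext();
ioData["image"] = imgCtx;    // image Context per the input specification
bodyDetector(ioData);        // adds "objects" with bbox/confidence/class
humanPoseEstimator(ioData);  // adds "keypoints" to each detected object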