Face, Body and Object Detection
In this section you will learn how to integrate the Face Detector, Human Body Detector, and Object Detector into your C++, Python, or Flutter project.
Face Detection (C++/Python/Flutter)
Types of Detection Blocks for the Key "unit_type"
- FACE_DETECTOR
- HUMAN_BODY_DETECTOR
- OBJECT_DETECTOR
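All three detectors are created and called in the same way; only the value of the "unit_type" key differs. Below is a minimal Python sketch, assuming the same service object used throughout this section; the full walkthrough for the Face Detector follows in the rest of this section.
# A minimal sketch: the creation call is identical for all three
# detector types; only the "unit_type" value changes.
bodyDetector = service.create_processing_block({"unit_type": "HUMAN_BODY_DETECTOR"})
objectDetector = service.create_processing_block({"unit_type": "OBJECT_DETECTOR"})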
1. Create a Face Detector
1.1 To create a Face Detector, follow steps 1-3 described in Creating a Processing Block and set the "unit_type" key to the detector type you need:
- C++
- Python
- Flutter
configCtx["unit_type"] = "FACE_DETECTOR";
// optional, default value is "ssyv"
configCtx["modification"] = "ssyv";
// optional, default value is 1l
configCtx["version"] = 1l;
// optional, default value is 0.5
configCtx["confidence_threshold"] = 0.5;
// optional, default value is 0.5
configCtx["iou_threshold"] = 0.5;
configCtx = {
"unit_type": "FACE_DETECTOR",
# optional, default values are specified after ":"
"modification": "ssyv",
"version": 1,
"confidence_threshold": 0.5,
"iou_threshold": 0.5
}
Map<String, dynamic> configCtx = {
"unit_type": "FACE_DETECTOR",
// optional, default values are specified after ":"
"confidence_threshold": 0.5,
"iou_threshold": 0.5
};
1.2 Create a Face Detector processing block:
- C++
- Python
- Flutter
pbio::ProcessingBlock faceDetector = service->createProcessingBlock(configCtx);
faceDetector = service.create_processing_block(configCtx)
ProcessingBlock blockDetector = service.createProcessingBlock(configCtx);
2. Face Detection
2.1 Create a Context container ioData for input-output data using the createContext() method:
- C++
- Python
- Flutter
auto ioData = service->createContext();
ioData = service.create_context({})
Context ioData = service.createContext({});
2.2 Create a Context container imgCtx with an RGB image, following the steps described in Creating a Context container with RGB-image.
- C++
- Python
- Flutter
// putting an image into the container
auto imgCtx = ioData["image"];
pbio::context_utils::putImage(imgCtx, input_rawimg);
# copying an image to binary format
input_rawimg = image.tobytes()
# putting an image into the container
imageCtx = {
"blob": input_rawimg,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in image.shape]
}
Map<String, dynamic> imageContext = {
"blob": <byte array with an image in RGB format>,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [<image height in pixels>, <image width in pixels>, 3]
};
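The Python and Flutter snippets above assume that image is already an RGB array of unsigned 8-bit values. A minimal sketch of preparing it in Python, assuming OpenCV (cv2) is installed and a hypothetical file path (OpenCV loads images in BGR order, so a conversion to RGB is needed):
# Load an image and convert it to the RGB uint8 NumPy array
# expected above; "photo.jpg" is a hypothetical path.
import cv2
image = cv2.imread("photo.jpg")                 # uint8, shape (H, W, 3), BGR order
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # convert BGR -> RGB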
2.3 Put the image into the input-output data container:
- C++
- Python
- Flutter
ioData["image"] = imgCtx;
ioData["image"] = imageCtx
ioData["image"].placeValues(imageContext);
2.4 Call faceDetector and pass it the Context container ioData that contains the image:
- C++
- Python
- Flutter
faceDetector(ioData);
faceDetector(ioData)
ioData = blockDetector.process(ioData);
The detector call adds the results of processing the image to the ioData container. The output format is a list of objects accessible by the "objects" key.
Each object in the list has a "class" key whose value corresponds to the class of the detected object. The "bbox" (bounding box) key contains an array of four double numbers {x1, y1, x2, y2}: the coordinates of the top-left and bottom-right corners of the box, relative to the source image size (each value lies in the range [0, 1]). The "confidence" key contains a double number in the range [0, 1].
/*
{
    "objects": [{ "id": {"type": "long", "minimum": 0},
                  "class": "face",
                  "confidence": {"type": "double", "minimum": 0, "maximum": 1},
                  "bbox": [x1, y1, x2, y2] }]
}
*/
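A minimal Python sketch of reading the detections back from ioData; it assumes the Context container supports iteration and item access the same way it supports assignment above (depending on the SDK version, primitive values may need to be extracted with an accessor such as get_value()):
# Iterate over detections; bbox coordinates are relative to the
# source image size, so multiply by width/height to get pixels.
height, width = image.shape[0], image.shape[1]
for obj in ioData["objects"]:
    x1, y1, x2, y2 = obj["bbox"]
    print("class:", obj["class"], "confidence:", obj["confidence"])
    print("bbox (px):", x1 * width, y1 * height, x2 * width, y2 * height)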