Face estimation
Estimation Processing Blocks
- AGE_ESTIMATOR — estimates age
- GENDER_ESTIMATOR — estimates gender
- EMOTION_ESTIMATOR — estimates emotions
- MASK_ESTIMATOR — estimates the presence of a medical mask
Modifications and versions
- AGE_ESTIMATOR
- GENDER_ESTIMATOR
- EMOTION_ESTIMATOR
- MASK_ESTIMATOR

AGE_ESTIMATOR:

Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** | Accuracy (average error in years) |
---|---|---|---|---|---|
light | 1 | 3.19 | 1 | 2 | 5.5 |
light | 2 | | 1 | 2 | 4.9 |
heavy | 1 | 3.19 | 1 | 2 | 4.7 |
heavy | 2 | | 1 | 2 | 3.5 |

** - GPU (NVIDIA GTX 10xx series)

The default modification is "heavy".

GENDER_ESTIMATOR:

Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** | Accuracy, % |
---|---|---|---|---|---|
light | 1 | 3.19 | 1 | 2 | 95 |
light | 2 | | 1 | 2 | 96 |
heavy | 1 | 3.19 | 1 | 2 | 96 |
heavy | 2 | | 1 | 2 | 97 |
heavy | 3 | 3.20 | 1 | 2 | 97.5 |

** - GPU (NVIDIA GTX 10xx series)

The default modification is "heavy".

EMOTION_ESTIMATOR:

Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** | Accuracy, % |
---|---|---|---|---|---|
heavy | 1 | 3.19 | 28 | 4 | 80 |

** - GPU (NVIDIA GTX 10xx series)

The default modification is "heavy".

MASK_ESTIMATOR:

Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** |
---|---|---|---|---|
light | 1 | 3.19 | 1 | 2 |
light | 2 | | 1 | 2 |

** - GPU (NVIDIA GTX 10xx series)

The default modification is "light".
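
The desired modification and version are selected when the Processing Block is created. Below is a minimal Python sketch assuming the configuration keys "unit_type", "modification" and "version" described in the example section at the end of this page; when "modification" and "version" are omitted, the defaults listed above are used.

```python
# Minimal sketch: selecting a specific modification and version of an estimator.
# Assumption: "modification" and "version" are optional configuration keys;
# without them the default modification from the tables above is used.
configCtx = {
    "unit_type": "AGE_ESTIMATOR",
    "modification": "heavy",
    "version": 2,
}
ageEstimator = service.create_processing_block(configCtx)
```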
Processing Block specification
The input Context must contain an image in binary format and the "objects" array produced by the Face Detector and Face Fitter:
```json
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "face",
"confidence": {"double", "minimum": 0, "maximum": 1},
"bbox": [x1, y2, x2, y2]
"keypoints": {
"left_eye_brow_left": {"proj" : [x, y]},
"left_eye_brow_up": {"proj" : [x, y]},
"left_eye_brow_right": {"proj" : [x, y]},
"right_eye_brow_left": {"proj" : [x, y]},
"right_eye_brow_up": {"proj" : [x, y]},
"right_eye_brow_right": {"proj" : [x, y]},
"left_eye_left": {"proj" : [x, y]},
"left_eye": {"proj" : [x, y]},
"left_eye_right": {"proj" : [x, y]},
"right_eye_left": {"proj" : [x, y]},
"right_eye": {"proj" : [x, y]},
"right_eye_right": {"proj" : [x, y]},
"left_ear_bottom": {"proj" : [x, y]},
"nose_left": {"proj" : [x, y]},
"nose": {"proj" : [x, y]},
"nose_right": {"proj" : [x, y]},
"right_ear_bottom": {"proj" : [x, y]},
"mouth_left": {"proj" : [x, y]},
"mouth": {"proj" : [x, y]},
"mouth_right": {"proj" : [x, y]},
"chin": {"proj" : [x, y]},
"points": ["proj": [x, y]]
}
}]
}
```

An example of the Face Detector and Face Fitter is given in Example of face detection and landmarks estimation.
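
As an illustration, the "image" part of the input Context could be filled from an image loaded with OpenCV. The sketch below is non-authoritative: the service.create_context call is assumed by analogy with the createContext call shown in the C++ example below, and the "objects" array is added afterwards by the Face Detector and Face Fitter.

```python
import cv2  # assumption: OpenCV is used only to load the image as a uint8 HWC array

# Illustrative sketch of the "image" part of the input Context.
img = cv2.imread("face.jpg")            # shape: [height, width, channels], dtype: uint8
ioData = service.create_context({
    "image": {
        "format": "NDARRAY",
        "blob": img.tobytes(),          # raw pixel data in binary format
        "dtype": "uint8_t",
        "shape": list(img.shape),       # [height, width, channels]
    }
})
```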
After calling the estimation Processing Block, attributes corresponding to this block will be added to each object in the "objects" array.

Specification of the output Context:
- AGE_ESTIMATOR
- GENDER_ESTIMATOR
- EMOTION_ESTIMATOR
- MASK_ESTIMATOR

AGE_ESTIMATOR:

```json
[{
    "age": {"type": "long", "minimum": 0}
}]
```

GENDER_ESTIMATOR:

```json
[{
    "gender": {
        "enum": ["FEMALE", "MALE"]
    }
}]
```

EMOTION_ESTIMATOR:

```json
[{
    "emotions": [{
        "confidence": {"type": "double", "minimum": 0, "maximum": 1},
        "emotion": {
            "enum": ["ANGRY", "DISGUSTED", "SCARED", "HAPPY", "NEUTRAL", "SAD", "SURPRISED"]
        }
    }]
}]
```

MASK_ESTIMATOR:

```json
[{
    "has_medical_mask": {
        "confidence": {"type": "double", "minimum": 0, "maximum": 1}, // confidence in the presence/absence of a mask on the face
        "value": {"type": "boolean"} // true - mask present, false - mask absent; determined by the value of the "confidence_threshold" key
    }
}]
```
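
As a hedged sketch, the added attributes can be read back from the output Context roughly as follows, assuming all four estimators were run on the same Context and that the Python bindings expose iteration over arrays and a get_value() accessor on leaf elements (adapt the accessors to your SDK version):

```python
# Hedged sketch: reading estimator results from the output Context.
# Assumptions: all four estimators were applied; Context arrays are iterable
# and leaf elements expose get_value() in the Python bindings.
for obj in ioData["objects"]:
    age = obj["age"].get_value()                              # AGE_ESTIMATOR
    gender = obj["gender"].get_value()                        # GENDER_ESTIMATOR
    emotions = [
        (e["emotion"].get_value(), e["confidence"].get_value())
        for e in obj["emotions"]                              # EMOTION_ESTIMATOR
    ]
    has_mask = obj["has_medical_mask"]["value"].get_value()   # MASK_ESTIMATOR
    print(age, gender, max(emotions, key=lambda x: x[1])[0], has_mask)
```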
Example of working with the estimation processing block
To estimate facial attributes in an image, perform the following steps:

1. Create a configuration Context container and specify the "unit_type", "modification" and "version" values of the block you are interested in. An example of creating a processing block can be found here.
2. Pass the Context container obtained after running the face detection and face fitter processing blocks.
3. Call the estimation processing block.
- C++
- Python
- Flutter
- C#

C++:

```cpp
auto configCtx = service->createContext();
configCtx["unit_type"] = "EMOTION_ESTIMATOR";
pbio::ProcessingBlock blockEstimator = service->createProcessingBlock(configCtx);

//------------------
// creation of the face detection Processing Blocks and of the Context container with the binary image
//------------------

faceDetector(ioData);
faceFitter(ioData);

blockEstimator(ioData);
```

Python:

```python
configCtx = {"unit_type": "EMOTION_ESTIMATOR"}
blockEstimator = service.create_processing_block(configCtx)

#------------------
# creation of the face detection Processing Blocks and of the Context container with the binary image
#------------------

faceDetector(ioData)
faceFitter(ioData)

blockEstimator(ioData)
```

Flutter:

```dart
ProcessingBlock blockEstimator = service.createProcessingBlock({"unit_type": "EMOTION_ESTIMATOR"});

//------------------
// creation of the face detection Processing Blocks and of the Context container with the binary image
//------------------

ioData = faceDetector.process(ioData);
ioData = faceFitter.process(ioData);
ioData = blockEstimator.process(ioData);
```

C#:

```csharp
Dictionary<object, object> configCtx = new();
configCtx["unit_type"] = "EMOTION_ESTIMATOR";
ProcessingBlock blockEstimator = service.CreateProcessingBlock(configCtx);

//------------------
// creation of the face detection Processing Blocks and of the Context container with the binary image
//------------------

faceDetector.Invoke(ioData);
faceFitter.Invoke(ioData);
blockEstimator.Invoke(ioData);
```
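
The MASK_ESTIMATOR output specification above refers to a "confidence_threshold" key that determines the boolean "value" field. This page does not show where that key is set; a plausible place, to be verified against your SDK version, is the configuration Context passed at block creation:

```python
# Hypothetical sketch: passing "confidence_threshold" when creating MASK_ESTIMATOR.
# The key name comes from the output specification above; accepting it in the
# configuration Context at creation time is an assumption, not a documented fact.
configCtx = {
    "unit_type": "MASK_ESTIMATOR",
    "confidence_threshold": 0.5,
}
maskEstimator = service.create_processing_block(configCtx)
maskEstimator(ioData)  # adds "has_medical_mask" to each object in "objects"
```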