Face estimation
Estimation Processing Blocks
- GLASSES_ESTIMATOR: estimates glasses presence on a face
- AGE_ESTIMATOR: estimates age
- GENDER_ESTIMATOR: estimates gender
- EMOTION_ESTIMATOR: estimates emotions
- MASK_ESTIMATOR: estimates mask presence
- EYE_OPENNESS_ESTIMATOR: estimates eye openness
Modifications and versions
- AGE_ESTIMATOR
- GENDER_ESTIMATOR
- EMOTION_ESTIMATOR
- MASK_ESTIMATOR
- EYE_OPENNESS_ESTIMATOR
- GLASSES_ESTIMATOR
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** | Accuracy (average error, years) |
---|---|---|---|---|---|
light | 1 | 3.19 | 1 | 2 | 5.5 |
light | 2 | 3.19 | 1 | 2 | 4.9 |
heavy | 1 | 3.19 | 1 | 2 | 4.7 |
heavy | 2 | 3.19 | 1 | 2 | 3.5 |
** - GPU (NVIDIA GTX 10xx series)
The default modification is "heavy".
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** | Accuracy, % |
---|---|---|---|---|---|
light | 1 | 3.19 | 1 | 2 | 95 |
light | 2 | 3.19 | 1 | 2 | 96 |
heavy | 1 | 3.19 | 1 | 2 | 96 |
heavy | 2 | 3.19 | 1 | 2 | 97 |
heavy | 3 | 3.20 | 1 | 2 | 97.5 |
** - GPU (NVIDIA GTX 10xx series)
The default modification is "heavy".
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** | Accuracy, % |
---|---|---|---|---|---|
heavy | 1 | 3.19 | 28 | 4 | 80 |
** - GPU (NVIDIA GTX 10xx series)
The default modification is "heavy".
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** |
---|---|---|---|---|
light | 1 | 3.19 | 1 | 2 |
light | 2 | 3.19 | 1 | 2 |
** - GPU (NVIDIA GTX 10xx series)
The default modification is "light".
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** |
---|---|---|---|---|
light | 1 | 3.21 | 3 | 1 |
** - GPU (NVIDIA GTX 10xx series)
The default modification is "light".
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** |
---|---|---|---|---|
anyglasses | 1 | 3.24 | 3 | 1 |
sunglasses | 1 | 3.24 | 3 | 1 |
** - GPU (NVIDIA GTX 10xx series)
The default modification is "anyglasses".
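The tables above list the available modifications and versions. A particular combination can be requested through the configuration Context described later in this document; a minimal sketch as a plain Python dictionary (the chosen values are only an example, not recommendations):

```python
# Configuration for an age estimation block: the "heavy"
# modification (the default for AGE_ESTIMATOR) at version 2.
config_ctx = {
    "unit_type": "AGE_ESTIMATOR",   # which estimator block to create
    "modification": "heavy",        # row from the modifications table
    "version": 2,                   # version column of that row
}

# If "modification" or "version" is omitted, the defaults noted
# under each table apply ("heavy" for age, "light" for mask, etc.).
print(config_ctx)
```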
Processing Block specification
The input Context must contain an image in binary format and an "objects" array produced by the Face Detector and Face Fitter:
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "face",
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"bbox": [x1, y1, x2, y2],
"keypoints": {
"left_eye_brow_left": {"proj" : [x, y]},
"left_eye_brow_up": {"proj" : [x, y]},
"left_eye_brow_right": {"proj" : [x, y]},
"right_eye_brow_left": {"proj" : [x, y]},
"right_eye_brow_up": {"proj" : [x, y]},
"right_eye_brow_right": {"proj" : [x, y]},
"left_eye_left": {"proj" : [x, y]},
"left_eye": {"proj" : [x, y]},
"left_eye_right": {"proj" : [x, y]},
"right_eye_left": {"proj" : [x, y]},
"right_eye": {"proj" : [x, y]},
"right_eye_right": {"proj" : [x, y]},
"left_ear_bottom": {"proj" : [x, y]},
"nose_left": {"proj" : [x, y]},
"nose": {"proj" : [x, y]},
"nose_right": {"proj" : [x, y]},
"right_ear_bottom": {"proj" : [x, y]},
"mouth_left": {"proj" : [x, y]},
"mouth": {"proj" : [x, y]},
"mouth_right": {"proj" : [x, y]},
"chin": {"proj" : [x, y]},
"points": [{"proj": [x, y]}]
}
}]
}

An example of using Face Detector and Face Fitter is given in "Example of face detection and landmarks estimation".
After calling the estimation Processing Block, the attributes corresponding to that block are added to each object in the "objects" array. Specification of the output Context:
- AGE_ESTIMATOR
- GENDER_ESTIMATOR
- EMOTION_ESTIMATOR
- MASK_ESTIMATOR
- EYE_OPENNESS_ESTIMATOR
- GLASSES_ESTIMATOR
[{
    "age": {"type": "long", "minimum": 0}
}]

[{
    "gender": {
        "enum": ["FEMALE", "MALE"]
    }
}]

[{
    "emotions": [{
        "confidence": {"type": "double", "minimum": 0, "maximum": 1},
        "emotion": {
            "enum": ["ANGRY", "DISGUSTED", "SCARED", "HAPPY", "NEUTRAL", "SAD", "SURPRISED"]
        }
    }]
}]

[{
    "has_medical_mask": {
        "confidence": {"type": "double", "minimum": 0, "maximum": 1}, // confidence in the presence/absence of a mask on the face
        "value": {"type": "boolean"} // true - mask present, false - mask absent. The "value" field is determined by the `confidence_threshold` key
    }
}]

[{
    "is_left_eye_open": {
        "confidence": {"type": "double", "minimum": 0, "maximum": 1}, // confidence that the left eye is open
        "value": {"type": "boolean"} // true - the eye is open, false - the eye is closed. The "value" field is determined by the `confidence_threshold` key
    },
    "is_right_eye_open": {
        "confidence": {"type": "double", "minimum": 0, "maximum": 1}, // confidence that the right eye is open
        "value": {"type": "boolean"} // true - the eye is open, false - the eye is closed. The "value" field is determined by the `confidence_threshold` key
    }
}]

[{
    "glasses": {
        "confidence": {"type": "double", "minimum": 0, "maximum": 1}, // confidence in the presence/absence of glasses
        "value": {"type": "boolean"} // true - a person with glasses, false - a person without glasses. The "value" field is determined by the `confidence_threshold` key
    }
}]
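To illustrate the shapes above, here is a small pure-Python sketch that inspects one sample output object (the dictionary literal is fabricated example data, not real estimator output) and picks the dominant emotion from the EMOTION_ESTIMATOR result:

```python
# Fabricated example of one element of the output "objects" array,
# shaped like the EMOTION_ESTIMATOR specification above.
obj = {
    "emotions": [
        {"emotion": "NEUTRAL", "confidence": 0.70},
        {"emotion": "HAPPY", "confidence": 0.25},
        {"emotion": "SAD", "confidence": 0.05},
    ],
}

# The dominant emotion is the entry with the highest confidence.
dominant = max(obj["emotions"], key=lambda e: e["confidence"])
print(dominant["emotion"])  # NEUTRAL
```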
Example of working with the estimation processing block
To estimate facial attributes in an image, perform the following steps:
1. Create a configuration Context container and specify the "unit_type", "modification" and "version" values of the block you are interested in. An example of creating a processing block can be found here.
2. Pass in the Context container obtained after the face detection and fitter processing blocks have run.
3. Call the estimation processing block.
4. Get the result of the processing block.
- C++
- Python
- Flutter
- C#
- Java
- Kotlin
auto configCtx = service->createContext();
configCtx["unit_type"] = "AGE_ESTIMATOR";
pbio::ProcessingBlock blockEstimator = service->createProcessingBlock(configCtx);
//------------------
// creation of Face Detection Processing Blocks, and Context container with binary image
//------------------
faceDetector(ioData);
faceFitter(ioData);
blockEstimator(ioData);
long age = ioData["objects"][0]["age"].getLong();

configCtx = {"unit_type": "AGE_ESTIMATOR"}
blockEstimator = service.create_processing_block(configCtx)
#------------------
# creation of Face Detection Processing Blocks, and Context container with binary image
#------------------
faceDetector(ioData)
faceFitter(ioData)
blockEstimator(ioData)
age = ioData["objects"][0]["age"].get_value()

ProcessingBlock blockEstimator = service.createProcessingBlock({"unit_type": "AGE_ESTIMATOR"});
//------------------
// creation of Face Detection Processing Blocks, and Context container with binary image
//------------------
faceDetector.process(ioData);
faceFitter.process(ioData);
blockEstimator.process(ioData);
int age = ioData["objects"][0]["age"].get_value();

Dictionary<object, object> configCtx = new();
configCtx["unit_type"] = "AGE_ESTIMATOR";
ProcessingBlock blockEstimator = service.CreateProcessingBlock(configCtx);
//------------------
// creation of Face Detection Processing Blocks, and Context container with binary image
//------------------
faceDetector.Invoke(ioData);
faceFitter.Invoke(ioData);
blockEstimator.Invoke(ioData);
long age = ioData["objects"][0]["age"].GetLong();

Context configCtx = service.createContext();
configCtx.get("unit_type").setString("AGE_ESTIMATOR");
ProcessingBlock blockEstimator = service.createProcessingBlock(configCtx);
//------------------
// creation of Face Detection Processing Blocks, and Context container with binary image
//------------------
faceDetector.process(ioData);
faceFitter.process(ioData);
blockEstimator.process(ioData);
long age = ioData.get("objects").get(0).get("age").getLong();

val configCtx = service.createContext()
configCtx["unit_type"].string = "AGE_ESTIMATOR"
val blockEstimator = service.createProcessingBlock(configCtx)
//------------------
// creation of Face Detection Processing Blocks, and Context container with binary image
//------------------
faceDetector.process(ioData)
faceFitter.process(ioData)
blockEstimator.process(ioData)
val age = ioData["objects"][0]["age"].long
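For the mask, eye openness and glasses blocks, the boolean "value" fields are derived from "confidence" by comparing it against the `confidence_threshold` key, as noted in the output specifications above. A minimal pure-Python sketch of that relationship (the threshold of 0.5 and the use of `>=` are assumed examples, not the SDK defaults):

```python
def apply_threshold(confidence: float, confidence_threshold: float = 0.5) -> dict:
    """Mimic how a confidence score might be turned into the
    {"confidence": ..., "value": ...} pair from the output spec.
    The 0.5 default and the >= comparison are illustrative assumptions."""
    return {
        "confidence": confidence,
        "value": confidence >= confidence_threshold,
    }

print(apply_threshold(0.9))  # "value" True, e.g. glasses present
print(apply_threshold(0.2))  # "value" False, e.g. glasses absent
```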