Version: 3.22.0 (latest)

Face estimation

Estimation Processing Blocks

  • AGE_ESTIMATOR — estimate age
  • GENDER_ESTIMATOR — estimate gender
  • EMOTION_ESTIMATOR — estimate emotions
  • MASK_ESTIMATOR — estimate mask presence

Modifications and versions

Modification | Version | Detection time (ms) | Accuracy (average error in years)
light        | 1       | 2                   | 5.5
light        | 2       | 2                   | 4.9
heavy        | 1       | 6                   | 4.7
heavy        | 2       | 6                   | 3.5
Note: The default modification is "heavy".
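
As a rough illustration of how a modification and version can be requested in the configuration Context (the "modification" and "version" parameter names follow step 1 of the example below, and `service` is assumed to be an already created FacerecService instance; check the Processing Block configurable parameters page for the exact keys):

    auto configCtx = service->createContext();
    configCtx["unit_type"] = "AGE_ESTIMATOR";
    configCtx["modification"] = "heavy";  // "light" or "heavy"; "heavy" is the default
    configCtx["version"] = 2;             // assumed to be an integer, see the table above
    pbio::ProcessingBlock ageEstimator = service->createProcessingBlock(configCtx);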

Processing Block specification

  1. The input Context must contain an image in binary format and an objects array produced by Face Detector and Face Fitter:

    {
        "image": {
            "format": "NDARRAY",
            "blob": "data pointer",
            "dtype": "uint8_t",
            "shape": [height, width, channels]
        },
        "objects": [{
            "id": {"type": "long", "minimum": 0},
            "class": "face",
            "confidence": {"type": "double", "minimum": 0, "maximum": 1},
            "bbox": [x1, y1, x2, y2],
            "keypoints": {
                "left_eye_brow_left": {"proj": [x, y]},
                "left_eye_brow_up": {"proj": [x, y]},
                "left_eye_brow_right": {"proj": [x, y]},
                "right_eye_brow_left": {"proj": [x, y]},
                "right_eye_brow_up": {"proj": [x, y]},
                "right_eye_brow_right": {"proj": [x, y]},
                "left_eye_left": {"proj": [x, y]},
                "left_eye": {"proj": [x, y]},
                "left_eye_right": {"proj": [x, y]},
                "right_eye_left": {"proj": [x, y]},
                "right_eye": {"proj": [x, y]},
                "right_eye_right": {"proj": [x, y]},
                "left_ear_bottom": {"proj": [x, y]},
                "nose_left": {"proj": [x, y]},
                "nose": {"proj": [x, y]},
                "nose_right": {"proj": [x, y]},
                "right_ear_bottom": {"proj": [x, y]},
                "mouth_left": {"proj": [x, y]},
                "mouth": {"proj": [x, y]},
                "mouth_right": {"proj": [x, y]},
                "chin": {"proj": [x, y]},
                "points": [{"proj": [x, y]}]
            }
        }]
    }

    An example of using Face Detector and Face Fitter can be found in Example of face detection and landmarks estimation.

  2. After calling the estimation Processing Block, the attributes corresponding to this block will be added to each object in the "objects" array (a sketch of reading them follows this specification).

    Specification of the output Context:

    [{
        "age": {"type": "long", "minimum": 0}
    }]
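
A minimal sketch of reading the added attribute from each object (here "age", as in the specification above; the `size()` and `getLong()` Context accessors are assumptions to be verified against your Context API):

    // Sketch: iterate over the detected faces and read the attribute added by the block.
    // Assumes ioData has already been passed through Face Detector, Face Fitter
    // and the estimation Processing Block.
    for (size_t i = 0; i < ioData["objects"].size(); ++i)
    {
        auto object = ioData["objects"][i];
        auto age = object["age"].getLong(); // attribute name depends on the chosen block
        std::cout << "Face " << i << ": estimated age = " << age << std::endl;
    }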

Example of working with the estimation processing block

To estimate facial attributes in an image, perform the following steps:

  1. Create a Context configuration container and specify the "unit_type", "modification", and "version" values of the block you are interested in. An example of creating a Processing Block can be found here.

    Processing Block configurable parameters

  2. Pass in the Context container obtained after the face detection and face fitting Processing Blocks have run.

  3. Call the estimation Processing Block:

    auto configCtx = service->createContext();
    configCtx["unit_type"] = "EMOTION_ESTIMATOR";
    pbio::ProcessingBlock blockEstimator = service->createProcessingBlock(configCtx);

    //------------------
    // creation of the Face Detector and Face Fitter Processing Blocks,
    // and of the Context container (ioData) with the binary image
    //------------------

    faceDetector(ioData);   // adds the "objects" array with detected faces
    faceFitter(ioData);     // adds facial keypoints to each object
    blockEstimator(ioData); // adds the attributes estimated by the block
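
For completeness, a sketch of the part elided by the comment above, i.e. creating the Face Detector and Face Fitter Processing Blocks and the Context container with the image. The "FACE_DETECTOR" and "FACE_FITTER" unit_type values and the image-loading helper are assumptions; see the face detection and landmarks estimation example referenced earlier for the exact calls:

    // Sketch (assumed unit_type values and a hypothetical image-loading helper).
    auto detectorCtx = service->createContext();
    detectorCtx["unit_type"] = "FACE_DETECTOR";
    pbio::ProcessingBlock faceDetector = service->createProcessingBlock(detectorCtx);

    auto fitterCtx = service->createContext();
    fitterCtx["unit_type"] = "FACE_FITTER";
    pbio::ProcessingBlock faceFitter = service->createProcessingBlock(fitterCtx);

    auto ioData = service->createContext();
    // loadImageContext is a hypothetical helper that fills "format", "blob",
    // "dtype" and "shape" as shown in the input Context specification.
    ioData["image"] = loadImageContext("face.jpg");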