Face Estimation
In this section you will learn how to integrate the Emotion, Age, Gender, and Mask estimators into your C++ or Python project.
Emotion Estimation (C++/Python)
Requirements
- Windows x86 64-bit or Linux x86 64-bit system.
- Installed Face SDK package windows_x86_64 or linux_x86_64 (see Getting Started).
1. Creating an Emotion Estimator
1.1 To create an Emotion Estimator, follow steps 1-3 described in Creating a Processing Block and specify the values:
- "EMOTION_ESTIMATOR" for the "unit_type" key;
- path to the Emotion Estimator model file for the "model_path" key.
- C++
- Python
configCtx["unit_type"] = "EMOTION_ESTIMATOR";
// default path to Emotion Estimator model file - "share/faceanalysis/emotion.enc" in the Face SDK's root directory
configCtx["model_path"] = "share/faceanalysis/emotion.enc";
configCtx = {
"unit_type": "EMOTION_ESTIMATOR",
# the path is relative to the Face SDK root directory
"model_path": "share/faceanalysis/emotion.enc"
}
1.2 Create an Emotion Estimator Processing block:
- C++
- Python
pbio::ProcessingBlock emotionEstimator = service->createProcessingBlock(configCtx);
emotionEstimator = service.create_processing_block(configCtx)
2. Emotion Estimation
2.1 Create a Context container ioData for input-output data using the createContext() method:
auto ioData = service->createContext();
2.2 Create a Context container imgCtx with an RGB image following the steps described in Creating a Context container with RGB-image.
# copy the image into binary format
input_rawimg = img.tobytes()
# put the image into the container
imgCtx = {
"blob": input_rawimg,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in img.shape]
}
2.3 Put the input image into the input-output data container:
- C++
- Python
ioData["image"] = imgCtx;
ioData = {"image": imgCtx}
2.4 Crop a face from the image. To do this, run Face Detector, save the results to the faceData container, crop the face by its bbox coordinates, and put the cropped image into the ioData container:
// image cropping
const auto& rectCtx = obj.at("bbox");
int x = std::max(static_cast<int>(rectCtx[0].getDouble()*image.size[1]), 0);
int y = std::max(static_cast<int>(rectCtx[1].getDouble()*image.size[0]), 0);
int width = std::min(static_cast<int>(rectCtx[2].getDouble()*image.size[1]), image.size[1]) - x;
int height = std::min(static_cast<int>(rectCtx[3].getDouble()*image.size[0]), image.size[0]) - y;
pbio::RawSample::Rectangle rect(x, y, width, height);
pbio::RawImage raw_image_crop = input_rawimg.crop(rect);
// image saving
auto imgCtx = ioData["image"]; // a shallow copy (reference); here "auto" resolves to pbio::Context::Ref
// to create a deep copy instead, declare pbio::Context imgCtx = ioData["image"];
pbio::context_utils::putImage(imgCtx, raw_image_crop);
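The cropping step is shown in C++ only; a rough Python equivalent, assuming the detector returns normalized [x1, y1, x2, y2] bbox coordinates and the image is an RGB NumPy array (the function name is illustrative, not part of the SDK API):

```python
import numpy as np

def crop_face(img, bbox):
    # bbox holds normalized [x1, y1, x2, y2] coordinates
    h, w = img.shape[:2]
    x = max(int(bbox[0] * w), 0)
    y = max(int(bbox[1] * h), 0)
    x2 = min(int(bbox[2] * w), w)
    y2 = min(int(bbox[3] * h), h)
    return img[y:y2, x:x2]

# example: crop the right half of a 4x4 RGB image
img = np.zeros((4, 4, 3), dtype=np.uint8)
cropped = crop_face(img, [0.5, 0.0, 1.0, 1.0])
```

The cropped array can then be serialized with tobytes() and placed into the ioData container as in step 2.2.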
2.5 Call emotionEstimator() and pass the Context container ioData that contains the cropped image:
- C++
- Python
emotionEstimator(ioData);
emotionEstimator(ioData)
The result of calling emotionEstimator() will be appended to the ioData container. The output data is a list of objects under the "objects" key. Each object in this list has the "class" key with the "face" value.
/*
{
"objects": [{ "id": {"type": "long", "minimum": 0},
"class": "face",
"emotions": [{
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"emotion": {
"enum": ["ANGRY", "DISGUSTED", "SCARED", "HAPPY", "NEUTRAL", "SAD", "SURPRISED"]
}
}]
}]
}
*/
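Given this output format, the dominant emotion can be extracted by taking the entry with the highest confidence; a minimal sketch (the result dict below stands in for a real ioData container):

```python
def top_emotion(face_obj):
    # pick the emotion entry with the highest confidence
    best = max(face_obj["emotions"], key=lambda e: e["confidence"])
    return best["emotion"], best["confidence"]

result = {"objects": [{"id": 0, "class": "face",
                       "emotions": [{"confidence": 0.15, "emotion": "SAD"},
                                    {"confidence": 0.85, "emotion": "HAPPY"}]}]}
for obj in result["objects"]:
    emotion, confidence = top_emotion(obj)
```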
Emotion Estimator usage examples:
3. GPU Acceleration
Emotion Estimator can be used with GPU acceleration (CUDA). For more information, please follow this link.
Age Estimation (C++/Python)
Requirements
- Windows x86 64-bit or Linux x86 64-bit system.
- Installed Face SDK package windows_x86_64 or linux_x86_64 (see Getting Started).
1. Creating an Age Estimator
1.1 To create an Age Estimator, follow steps 1-3 described in Creating a Processing Block and specify the values:
- "AGE_ESTIMATOR" for the "unit_type" key;
- path to the Age Estimator model file for the "model_path" key.
- C++
- Python
configCtx["unit_type"] = "AGE_ESTIMATOR";
// default path to the Age Estimator model files is "share/faceanalysis" in the Face SDK's root directory. Two versions are available, age_heavy and age_light, differing in size, quality, and inference speed.
configCtx["model_path"] = "share/faceanalysis/age_heavy.enc";
configCtx = {
"unit_type": "AGE_ESTIMATOR",
# the path is relative to the Face SDK root directory
"model_path": "share/faceanalysis/age_heavy.enc"
}
1.2 Create an Age Estimator Processing block:
- C++
- Python
pbio::ProcessingBlock ageEstimator = service->createProcessingBlock(configCtx);
ageEstimator = service.create_processing_block(configCtx)
2. Age Estimation
2.1 Create a Context container ioData for input-output data using the createContext() method:
auto ioData = service->createContext();
2.2 Create a Context container imgCtx with an RGB image following the steps described in Creating a Context container with RGB-image.
# copy the image into binary format
input_rawimg = img.tobytes()
# put the image into the container
imgCtx = {
"blob": input_rawimg,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in img.shape]
}
2.3 Put the input image into the input-output data container:
- C++
- Python
ioData["image"] = imgCtx;
ioData = {"image": imgCtx}
2.4 Crop a face from the image and save the result to the ioData container; see para. 2.4 in Emotion Estimation as an example.
2.5 Call ageEstimator() and pass the ioData container with the cropped image:
- C++
- Python
ageEstimator(ioData);
ageEstimator(ioData)
The result of calling ageEstimator() will be appended to the ioData container.
/*
{
"objects": [{ "age": {"type": "long", "minimum": 0},
"class": "face",
"id": {"type": "long", "minimum": 0}
}]
}
*/
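Reading the estimated ages out of the result could look like this (a sketch; the sample dict stands in for a real ioData container):

```python
def face_ages(result):
    # collect (id, age) pairs for every detected face
    return [(obj["id"], obj["age"])
            for obj in result["objects"] if obj["class"] == "face"]

result = {"objects": [{"age": 31, "class": "face", "id": 0},
                      {"age": 7, "class": "face", "id": 1}]}
```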
Age Estimator usage examples:
3. GPU Acceleration
Age Estimator can be used with GPU acceleration (CUDA). For more information, please follow this link.
Gender Estimation (C++/Python)
Requirements
- Windows x86 64-bit or Linux x86 64-bit system.
- Installed Face SDK package windows_x86_64 or linux_x86_64 (see Getting Started).
1. Creating a Gender Estimator
1.1 To create a Gender Estimator, follow steps 1-3 described in Creating a Processing Block and specify the values:
- "GENDER_ESTIMATOR" for the "unit_type" key;
- path to the Gender Estimator model file for the "model_path" key.
- C++
- Python
configCtx["unit_type"] = "GENDER_ESTIMATOR";
// default path to the Gender Estimator model files is "share/faceanalysis" in the Face SDK's root directory. Two versions are available, gender_heavy and gender_light, differing in size, quality, and inference speed.
configCtx["model_path"] = "share/faceanalysis/gender_heavy.enc";
configCtx = {
"unit_type": "GENDER_ESTIMATOR",
# the path is relative to the Face SDK root directory
"model_path": "share/faceanalysis/gender_heavy.enc"
}
1.2 Create a Gender Estimator Processing block:
- C++
- Python
pbio::ProcessingBlock genderEstimator = service->createProcessingBlock(configCtx);
genderEstimator = service.create_processing_block(configCtx)
2. Gender Estimation
2.1 Create a Context container ioData for input-output data using the createContext() method:
auto ioData = service->createContext();
2.2 Create a Context container imgCtx with an RGB image following the steps described in Creating a Context container with RGB-image.
# copy the image into binary format
input_rawimg = img.tobytes()
# put the image into the container
imgCtx = {
"blob": input_rawimg,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in img.shape]
}
2.3 Put the input image into the input-output data container:
- C++
- Python
ioData["image"] = imgCtx;
ioData = {"image": imgCtx}
2.4 Crop a face from the image and save the result to the ioData container; see para. 2.4 in Emotion Estimation as an example.
2.5 Call genderEstimator() and pass the Context container ioData that contains the cropped image:
- C++
- Python
genderEstimator(ioData);
genderEstimator(ioData)
The result of calling genderEstimator() will be appended to the ioData container.
/*
{
"objects": [{ "class": "face",
"gender": {
"enum": ["FEMALE", "MALE"]
},
"id": {"type": "long", "minimum": 0}
}]
}
*/
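A sketch of collecting the gender verdict per face id (the sample dict stands in for a real ioData container):

```python
def face_genders(result):
    # map each face id to its estimated gender
    return {obj["id"]: obj["gender"]
            for obj in result["objects"] if obj["class"] == "face"}

result = {"objects": [{"class": "face", "gender": "FEMALE", "id": 0},
                      {"class": "face", "gender": "MALE", "id": 1}]}
```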
Gender Estimator usage examples:
3. GPU Acceleration
Gender Estimator can be used with GPU acceleration (CUDA). For more information, please follow this link.
Mask Estimation (C++/Python)
Requirements
- Windows x86 64-bit or Linux x86 64-bit system.
- Installed Face SDK package windows_x86_64 or linux_x86_64 (see Getting Started).
1. Creating a Mask Estimator
1.1 To create a Mask Estimator, follow steps 1-3 described in Creating a Processing Block and specify the values:
- "MASK_ESTIMATOR" for the "unit_type" key;
- path to the Mask Estimator model file for the "model_path" key;
- value for the "confidence_threshold" key. The threshold value determines the verdict whether a person is or isn't wearing a mask.
- C++
- Python
configCtx["unit_type"] = "MASK_ESTIMATOR";
// default path to the Mask Estimator model file is "share/faceattributes/mask.enc" in the Face SDK's root directory
configCtx["model_path"] = "share/faceattributes/mask.enc";
// "confidence_threshold" defaults to 0.5
configCtx["confidence_threshold"] = 0.5;
configCtx = {
"unit_type": "MASK_ESTIMATOR",
# the path is relative to the Face SDK root directory
"model_path": "share/faceattributes/mask.enc",
# optional; defaults to 0.5
"confidence_threshold": 0.5
}
1.2 Create a Mask Estimator Processing block:
- C++
- Python
pbio::ProcessingBlock maskEstimator = service->createProcessingBlock(configCtx);
maskEstimator = service.create_processing_block(configCtx)
2. Mask Estimation
2.1 Create a Context container ioData for input-output data using the createContext() method:
auto ioData = service->createContext();
2.2 Create a Context container imgCtx with an RGB image following the steps described in Creating a Context container with RGB-image.
# copy the image into binary format
input_rawimg = img.tobytes()
# put the image into the container
imgCtx = {
"blob": input_rawimg,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in img.shape]
}
2.3 Put the input image into the input-output data container:
- C++
- Python
ioData["image"] = imgCtx;
ioData = {"image": imgCtx}
2.4 Crop a face from the image and save the result to the ioData container; see para. 2.4 in Emotion Estimation as an example.
2.5 Call maskEstimator() and pass the ioData container with the cropped image:
- C++
- Python
maskEstimator(ioData);
maskEstimator(ioData)
The result of calling maskEstimator() will be appended to the ioData container.
/*
{
"objects": [{ "class": "face",
"has_medical_mask": {
"confidence": {"type": "double", "minimum": 0, "maximum": 1}, // the numerical confidence that the person in the image is/isn't wearing a mask
"value": {"type": "boolean"}, // true: masked person, false: unmasked person. The verdict is based on the value of `confidence_threshold`
},
"id": {"type": "long", "minimum": 0}
}]
}
*/
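The boolean "value" verdict follows from "confidence" and the configured confidence_threshold; the relationship can be sketched as below, under the assumption that the verdict is positive when the confidence reaches the threshold:

```python
def mask_verdict(confidence, confidence_threshold=0.5):
    # assumed rule: masked when confidence is at or above the threshold
    return confidence >= confidence_threshold

# a lower threshold makes the estimator more eager to report a mask
strict = mask_verdict(0.4, confidence_threshold=0.7)   # False
lenient = mask_verdict(0.4, confidence_threshold=0.3)  # True
```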
Mask Estimator usage examples:
3. GPU Acceleration
Mask Estimator can be used with GPU acceleration (CUDA). For more information, please follow this link.