Liveness estimation
2D RGB Real Person Face Estimation
Modifications of Liveness estimation block
- 2d_additional_check — estimation of a face belonging to a real person by image, with additional checks.
- 2d — estimation of a face belonging to a real person by image (previous modification "v4").
| Modification | Version | Detection time CPU (ms)* | BPCER | APCER |
|---|---|---|---|---|
| 2d_additional_check | 1 | 41 | 0.19 | 0.27 |
Configuration
- 2d_additional_check
- 2d
- "capturer_config_name" is the face detector configuration file. The file used by default is "common_capturer_uld_fda.xml" (see Capturer object configuration files).
- "config_name" is the Liveness estimator configuration file. The file used by default is "liveness_2d_estimator_v3.xml" (see the Liveness2DEstimator class).
- "confidence_threshold" is the threshold by which the "value" parameter ("REAL" or "FAKE") is determined. The default value is 0.8.
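As an illustration, the parameters above can be collected into a configuration dictionary like this (a minimal sketch: the key names and default values come from this page, but the plain dict itself is not an SDK object):

```python
# Sketch of a liveness block configuration; key names and defaults are taken
# from this page, the plain dictionary is only an illustration.
liveness_config = {
    "unit_type": "LIVENESS_ESTIMATOR",
    "modification": "2d_additional_check",
    "capturer_config_name": "common_capturer_uld_fda.xml",  # default face detector config
    "config_name": "liveness_2d_estimator_v3.xml",          # default estimator config
    "confidence_threshold": 0.8,                            # REAL/FAKE decision threshold
}
```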
Processing Block configurable parameters
Processing Block specification
- 2d_additional_check
- 2d
- Input Context must contain an image in binary format.
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
}
}
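The structure above can be sketched in Python as a plain dictionary built from a NumPy image. This only mirrors the schema from this page; the exact SDK call that consumes the container is not shown here:

```python
import numpy as np

# Sketch: assemble the input Context structure described above from a raw
# image. The schema follows this page; the frame itself is a placeholder.
image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder frame

io_data = {
    "image": {
        "format": "NDARRAY",
        "blob": image.tobytes(),     # raw pixel data
        "dtype": "uint8_t",
        "shape": list(image.shape),  # [height, width, channels]
    }
}
```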
- Input Context must contain an image in binary format and an "objects" array produced by the Face Detector and Face Fitter:
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "face",
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"bbox": [x1, y1, x2, y2],
"keypoints": {
"left_eye_brow_left": {"proj" : [x, y]},
"left_eye_brow_up": {"proj" : [x, y]},
"left_eye_brow_right": {"proj" : [x, y]},
"right_eye_brow_left": {"proj" : [x, y]},
"right_eye_brow_up": {"proj" : [x, y]},
"right_eye_brow_right": {"proj" : [x, y]},
"left_eye_left": {"proj" : [x, y]},
"left_eye": {"proj" : [x, y]},
"left_eye_right": {"proj" : [x, y]},
"right_eye_left": {"proj" : [x, y]},
"right_eye": {"proj" : [x, y]},
"right_eye_right": {"proj" : [x, y]},
"left_ear_bottom": {"proj" : [x, y]},
"nose_left": {"proj" : [x, y]},
"nose": {"proj" : [x, y]},
"nose_right": {"proj" : [x, y]},
"right_ear_bottom": {"proj" : [x, y]},
"mouth_left": {"proj" : [x, y]},
"mouth": {"proj" : [x, y]},
"mouth_right": {"proj" : [x, y]},
"chin": {"proj" : [x, y]},
"points": [{"proj": [x, y]}]
}
}]
}
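As a small illustration, the fitter keypoints from the "objects" array above can be read like this (the key names follow the specification; the sample coordinates are invented):

```python
# Sketch: read fitter keypoints from an object of the "objects" array.
# Key names follow the specification above; the values are invented.
obj = {
    "class": "face",
    "keypoints": {
        "left_eye": {"proj": [0.35, 0.40]},
        "right_eye": {"proj": [0.65, 0.40]},
        "nose": {"proj": [0.50, 0.55]},
    },
}

# Collect named projections as (name, x, y) tuples.
points = [(name, *pt["proj"]) for name, pt in obj["keypoints"].items()]
```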
- 2d_additional_check
- 2d
- After calling the Processing Block, an "objects" array containing one object is added. The object contains the bounding box coordinates, the detection confidence, the class, and the "liveness" field. By the "liveness" key, a Context object containing 3 elements is available:
  - "confidence": a value of type double in the range [0,1]
  - "value": a string that corresponds to one of two states: "REAL" or "FAKE"
  - "info": a string that corresponds to one of the states of pbio::Liveness2DEstimator::Liveness
The specification of the output Context:
{
"image" : {},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "face",
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"bbox": [x1, y1, x2, y2],
"liveness": {
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"info": {
"enum": [
"FACE_NOT_FULLY_FRAMED", "MULTIPLE_FACE_FRAMED",
"FACE_TURNED_RIGHT", "FACE_TURNED_LEFT", "FACE_TURNED_UP",
"FACE_TURNED_DOWN", "BAD_IMAGE_LIGHTING", "BAD_IMAGE_NOISE",
"BAD_IMAGE_BLUR", "BAD_IMAGE_FLARE", "NOT_COMPUTED"
]
},
"value": {"enum": ["REAL", "FAKE"]}
}
}]
}
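Reading the result might look like the following sketch. The dictionary imitates the output Context specified above; the sample values are invented:

```python
# Sketch: inspecting the result of the 2d_additional_check block.
# The dict imitates the output Context specification; values are invented.
io_data = {
    "objects": [{
        "id": 0,
        "class": "face",
        "confidence": 0.99,
        "bbox": [0.1, 0.2, 0.4, 0.5],
        "liveness": {
            "confidence": 0.93,
            "info": "NOT_COMPUTED",
            "value": "REAL",
        },
    }]
}

# Extract the liveness verdict for each detected face.
verdicts = [
    (obj["liveness"]["value"], obj["liveness"]["confidence"])
    for obj in io_data["objects"]
]
```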
- After calling the Processing Block, the "liveness" field is added to each object. By the "liveness" key, a Context object containing 2 elements is available:
  - "confidence": a value of type double in the range [0,1]
  - "value": a string that corresponds to one of two states: "REAL" or "FAKE"
Output Context specification:
{
"objects": [{
"liveness": {
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"value": {"enum": ["REAL", "FAKE"]}
}
}]
}
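With the "2d" modification every detected object receives a "liveness" field, so results can be filtered per face. A minimal sketch (field layout from the specification above, sample data invented):

```python
# Sketch: filter faces classified as real. The "liveness" field layout
# follows the specification above; the sample objects are invented.
io_data = {
    "objects": [
        {"id": 0, "liveness": {"confidence": 0.91, "value": "REAL"}},
        {"id": 1, "liveness": {"confidence": 0.35, "value": "FAKE"}},
    ]
}

real_faces = [o["id"] for o in io_data["objects"] if o["liveness"]["value"] == "REAL"]
```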
Example
To estimate whether a face belongs to a real person in the image, follow the steps below:
- Create a configuration Context container and specify the values of "unit_type", "modification", "version", and other parameters of the block you are interested in. An example of creating a processing block can be found on the Working with Processing Block page.
- C++
- Python
- Flutter
auto configCtx = service->createContext();
configCtx["unit_type"] = "LIVENESS_ESTIMATOR";
configCtx["modification"] = "2d";
pbio::ProcessingBlock blockLiveness = service->createProcessingBlock(configCtx);
configCtx = {
"unit_type": "LIVENESS_ESTIMATOR",
"modification": "2d"
}
blockLiveness = service.create_processing_block(configCtx)
ProcessingBlock blockLiveness = service.createProcessingBlock({
"unit_type": "LIVENESS_ESTIMATOR",
"modification": "2d"
});
- Pass the input Context container corresponding to the block modification to the blockLiveness() method:
  - for "2d", this is a Context container received after the face detection and fitter processing blocks;
  - for "2d_additional_check", this is a Context container with an image in binary format.
- C++
- Python
- Flutter
//------------------
// creating face detection processing blocks, and a Context container with a binary image
//------------------
faceDetector(ioData);
faceFitter(ioData);
blockLiveness(ioData);
#------------------
# creating face detection processing blocks, and a Context container with a binary image
#------------------
faceDetector(ioData)
faceFitter(ioData)
blockLiveness(ioData)
//------------------
// creating face detection processing blocks, and a Context container with a binary image
//------------------
ioData = faceDetector.process(ioData);
ioData = faceFitter.process(ioData);
ioData = blockLiveness.process(ioData);