Liveness estimation
2D RGB Real Person Face Estimation
Modifications of Liveness estimation block
- `2d`: estimation of whether a face belongs to a real person from an image (previous modification of "v4").
- `2d_light`: more lightweight and faster algorithms compared to `2d`. Suitable for use on mobile devices.
- `2d_additional_check`: estimation of whether a face belongs to a real person from an image, with additional checks.
- `2d_ensemble`: modification for repelling attacks that use 2d and 3d masks.
- `2d_ensemble_light`: modification for repelling attacks that use 2d and 3d masks, with lighter and faster algorithms compared to `2d`. Suitable for use on mobile devices.
Modification | Version | Face SDK version | Detection time CPU (ms)* | Detection time GPU (ms)** | APCER[BPCER=0.05]
---|---|---|---|---|---
2d_additional_check | 1 | 3.19 | 41 | - | 0.37
2d | 1 | 3.19 | 694 | 72 | 0.40
2d | 2 | 3.21 | 243 | 12 | 0.09
2d | 3 | 3.24 | 242 | 12 | 0.04
2d_light | 1 | 3.21 | 6 | 2 | 0.15
2d_light | 2 | 3.24 | 16 | 4 | 0.18
2d_light | 3 | 3.24 | 16 | 4 | 0.05
2d_ensemble | 1 | 3.24 | 487 | 24 | 0.03
2d_ensemble_light | 1 | 3.24 | 34 | 9 | 0.06

\* - CPU Intel Xeon E5-2683 v4 (single-core)

\** - GPU (NVIDIA GTX 10xx series)
Configuration
- 2d_additional_check
- 2d
- 2d_light
`"confidence_threshold"` is the threshold by which the `value` parameter ("REAL" or "FAKE") is determined.
`"capturer_config_name"` is the face detector configuration file. The file used by default is `"common_capturer_uld_fda.xml"` (Capturer object configuration files).

`"config_name"` is the Liveness estimator configuration file. The file used by default is `"liveness_2d_estimator_v3.xml"` (Liveness2DEstimator class).
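Conceptually, `"confidence_threshold"` maps the liveness confidence score to the `value` parameter. A minimal illustrative sketch of that semantics (not SDK code; 0.45 is the documented default for the `2d` modification, version 3):

```python
# Sketch of how "confidence_threshold" determines the "value" parameter.
# This only mirrors the documented semantics; it is not part of the SDK API.

def liveness_value(confidence: float, confidence_threshold: float = 0.45) -> str:
    """Return "REAL" if the confidence reaches the threshold, otherwise "FAKE"."""
    return "REAL" if confidence >= confidence_threshold else "FAKE"

print(liveness_value(0.9))   # -> REAL
print(liveness_value(0.2))   # -> FAKE
```

Lowering the threshold reduces the chance of rejecting a real person (BPCER) at the cost of accepting more attacks (APCER), and vice versa.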
Processing Block configurable parameters
Default values of parameters
Modification | Version | confidence_threshold
---|---|---
2d | 1 | 0.8
2d | 2 | 0.7
2d | 3 | 0.45
2d_light | 1 | 0.56
2d_light | 2 | 0.42
2d_light | 3 | 0.88
2d_ensemble | 1 | 0.57
2d_ensemble_light | 1 | 0.81
Processing Block specification
- 2d_additional_check
- 2d
- Input Context must contain an image in binary format.
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
}
}
- Input Context must contain an image in binary format and
objects
array after Face Detector and Face Fitter:
{
"image" : {
"format": "NDARRAY",
"blob": "data pointer",
"dtype": "uint8_t",
"shape": [height, width, channels]
},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "face",
        "confidence": {"type": "double", "minimum": 0, "maximum": 1},
        "bbox": [x1, y1, x2, y2],
"keypoints": {
"left_eye_brow_left": {"proj" : [x, y]},
"left_eye_brow_up": {"proj" : [x, y]},
"left_eye_brow_right": {"proj" : [x, y]},
"right_eye_brow_left": {"proj" : [x, y]},
"right_eye_brow_up": {"proj" : [x, y]},
"right_eye_brow_right": {"proj" : [x, y]},
"left_eye_left": {"proj" : [x, y]},
"left_eye": {"proj" : [x, y]},
"left_eye_right": {"proj" : [x, y]},
"right_eye_left": {"proj" : [x, y]},
"right_eye": {"proj" : [x, y]},
"right_eye_right": {"proj" : [x, y]},
"left_ear_bottom": {"proj" : [x, y]},
"nose_left": {"proj" : [x, y]},
"nose": {"proj" : [x, y]},
"nose_right": {"proj" : [x, y]},
"right_ear_bottom": {"proj" : [x, y]},
"mouth_left": {"proj" : [x, y]},
"mouth": {"proj" : [x, y]},
"mouth_right": {"proj" : [x, y]},
"chin": {"proj" : [x, y]},
            "points": [{"proj": [x, y]}]
}
}]
}
- 2d_additional_check
- 2d
- After calling the Processing Block, an `objects` array containing one object is added. The object contains the coordinates of the bounding box, the detection confidence, the class, and the `liveness` field. Under the `"liveness"` key, a Context object containing 3 elements is available:
  - the `"confidence"` key with a value of type double in the range [0,1]
  - the `"value"` key with a value of type string, which corresponds to one of two states: "REAL" or "FAKE"
  - the `"info"` key with a string value that corresponds to one of the states of pbio::Liveness2DEstimator::Liveness
The specification of the output Context:
{
"image" : {},
"objects": [{
"id": {"type": "long", "minimum": 0},
"class": "face",
        "confidence": {"type": "double", "minimum": 0, "maximum": 1},
        "bbox": [x1, y1, x2, y2],
"liveness": {
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"info": {
"enum": [
"FACE_NOT_FULLY_FRAMED", "MULTIPLE_FACE_FRAMED",
"FACE_TURNED_RIGHT", "FACE_TURNED_LEFT", "FACE_TURNED_UP",
"FACE_TURNED_DOWN", "BAD_IMAGE_LIGHTING", "BAD_IMAGE_NOISE",
"BAD_IMAGE_BLUR", "BAD_IMAGE_FLARE", "NOT_COMPUTED"
]
},
"value": {"enum": ["REAL", "FAKE"]}
}
}]
}
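When consuming this output, it is worth checking the `"info"` field before trusting `"value"`: each of the enum states above signals a condition (framing, pose, or image quality) that prevented a reliable estimate. A minimal sketch over a plain dict shaped like the output Context above (illustrative only, not SDK code):

```python
# Sketch: interpreting the "liveness" block of the output Context.
# The dict below only mimics the documented structure; in real use it
# would come from the SDK's ioData container.

RETRY_STATES = {
    "FACE_NOT_FULLY_FRAMED", "MULTIPLE_FACE_FRAMED",
    "FACE_TURNED_RIGHT", "FACE_TURNED_LEFT", "FACE_TURNED_UP",
    "FACE_TURNED_DOWN", "BAD_IMAGE_LIGHTING", "BAD_IMAGE_NOISE",
    "BAD_IMAGE_BLUR", "BAD_IMAGE_FLARE", "NOT_COMPUTED",
}

def interpret_liveness(obj: dict) -> str:
    """Return "RETRY" if the image should be re-captured, else the verdict."""
    liveness = obj["liveness"]
    if liveness.get("info") in RETRY_STATES:
        return "RETRY"           # a condition prevented a reliable estimate
    return liveness["value"]     # "REAL" or "FAKE"

sample = {"liveness": {"confidence": 0.91, "value": "REAL",
                       "info": "BAD_IMAGE_BLUR"}}
print(interpret_liveness(sample))  # -> RETRY
```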
- After calling the processing block, the `liveness` field will be added to each object. Under the `"liveness"` key, a Context object containing the following elements is available:
  - the `"confidence"` key with a value of type double in the range [0,1]
  - the `"value"` key with a value of type string, which corresponds to one of two states: "REAL" or "FAKE"
  - the `"attack_type"` key with a value of type string, which corresponds to the likely type of attack (not available in version 1)
  - the `"attack_type_scores"` key with a Context object containing six elements; each key (a type of attack) stores its score (not available in version 1)

Output Context specification:
{
"objects": [{
"liveness": {
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"value": {"enum": ["REAL", "FAKE"]},
"attack_type": {"type": "string"},
"attack_type_scores": {
"none": {"type": "double", "minimum": 0, "maximum": 1},
"replay": {"type": "double", "minimum": 0, "maximum": 1},
"photo": {"type": "double", "minimum": 0, "maximum": 1},
"regions": {"type": "double", "minimum": 0, "maximum": 1},
"2d-mask": {"type": "double", "minimum": 0, "maximum": 1},
            "3d-mask": {"type": "double", "minimum": 0, "maximum": 1}
        }
}
}]
}
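The likely attack type can be read directly from `"attack_type"`, or recovered from `"attack_type_scores"` as the highest-scoring key. A plain-dict sketch of the latter (illustrative only; the scores dict mimics the documented structure):

```python
# Sketch: picking the dominant attack type from "attack_type_scores".

def dominant_attack_type(scores: dict) -> str:
    """Return the attack-type key with the highest score."""
    return max(scores, key=scores.get)

scores = {
    "none": 0.05, "replay": 0.70, "photo": 0.15,
    "regions": 0.04, "2d-mask": 0.03, "3d-mask": 0.03,
}
print(dominant_attack_type(scores))  # -> replay
```

A score of `"none"` dominating the others corresponds to a "REAL" verdict; any other key indicates the most probable presentation-attack instrument.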
Example
To estimate whether a face belongs to a real person in the image, follow the steps below:
- Create a Context configuration container and specify the values of `"unit_type"`, `"modification"`, `"version"`, and the other parameters of the block you are interested in. An example of creating a processing block can be found on the Working with Processing Block page.
- C++
- Python
- Flutter
- C#
- Java
- Kotlin
auto configCtx = service->createContext();
configCtx["unit_type"] = "LIVENESS_ESTIMATOR";
configCtx["modification"] = "2d";
pbio::ProcessingBlock blockLiveness = service->createProcessingBlock(configCtx);
configCtx = {
"unit_type": "LIVENESS_ESTIMATOR",
"modification": "2d"
}
blockLiveness = service.create_processing_block(configCtx)
ProcessingBlock blockLiveness = service.createProcessingBlock({
"unit_type": "LIVENESS_ESTIMATOR",
"modification": "2d"
});
Dictionary<object, object> configCtx = new();
configCtx["unit_type"] = "LIVENESS_ESTIMATOR";
configCtx["modification"] = "2d";
ProcessingBlock blockLiveness = service.CreateProcessingBlock(configCtx);
Context configCtx = service.createContext();
configCtx.get("unit_type").setString("LIVENESS_ESTIMATOR");
configCtx.get("modification").setString("2d");
ProcessingBlock blockLiveness = service.createProcessingBlock(configCtx);
val configCtx = service.createContext()
configCtx["unit_type"].string = "LIVENESS_ESTIMATOR"
configCtx["modification"].string = "2d"
val blockLiveness = service.createProcessingBlock(configCtx)
- Pass the input Context-container corresponding to the block modification to the `blockLiveness()` method:
  - for `"2d"`, a Context-container received after the face detection and fitter processing blocks have run
  - for `"2d_additional_check"`, a Context-container containing an image in binary format
- C++
- Python
- Flutter
- C#
- Java
- Kotlin
//------------------
// creating face detection processing blocks, and Context container with binary image
//------------------
faceDetector(ioData);
faceFitter(ioData);
blockLiveness(ioData);
#------------------
# creating face detection processing blocks, and a Context container with a binary image
#------------------
faceDetector(ioData)
faceFitter(ioData)
blockLiveness(ioData)
//------------------
// creating face detection processing blocks, and a Context container with a binary image
//------------------
faceDetector.process(ioData);
faceFitter.process(ioData);
blockLiveness.process(ioData);
//------------------
// creating face detection processing blocks, and a Context container with a binary image
//------------------
faceDetector.Invoke(ioData);
faceFitter.Invoke(ioData);
blockLiveness.Invoke(ioData);
//------------------
// creating face detection processing blocks, and a Context container with a binary image
//------------------
faceDetector.process(ioData);
faceFitter.process(ioData);
blockLiveness.process(ioData);
//------------------
// creating face detection processing blocks, and a Context container with a binary image
//------------------
faceDetector.process(ioData)
faceFitter.process(ioData)
blockLiveness.process(ioData)
- Get the result of the processing block
LIVENESS_ESTIMATOR
- C++
- Python
- Flutter
- C#
- Java
- Kotlin
auto liveness_results = ioData["objects"][0]["liveness"];
std::string liveness_value = liveness_results["value"].getString();
double liveness_confidence = liveness_results["confidence"].getDouble();
liveness_results = ioData["objects"][0]["liveness"]
liveness_value = liveness_results["value"].get_value()
liveness_confidence = liveness_results["confidence"].get_value()
Context liveness_results = ioData["objects"][0]["liveness"];
String liveness_value = liveness_results["value"].get_value();
double liveness_confidence = liveness_results["confidence"].get_value();
Context liveness_results = ioData["objects"][0]["liveness"];
string liveness_value = liveness_results["value"].GetString();
double liveness_confidence = liveness_results["confidence"].GetDouble();
Context liveness_results = ioData.get("objects").get(0).get("liveness");
String liveness_value = liveness_results.get("value").getString();
double liveness_confidence = liveness_results.get("confidence").getDouble();
val liveness_results = ioData["objects"][0]["liveness"]
val liveness_value = liveness_results["value"].string
val liveness_confidence = liveness_results["confidence"].double