Liveness Estimation
In this section you'll learn how to integrate the Liveness Estimator into your C++ or Python project.
2D RGB Liveness Estimation (C++/Python)
Version 3
1. Create a Liveness Estimator
1.1 To create a Liveness Estimator, follow steps 1-3 described in Creating a Processing Block and specify the values:
"LIVENESS_ESTIMATOR"
for the"unit_type"
key- An empty string
""
for the"model_path"
key
- C++
- Python
configCtx["unit_type"] = "LIVENESS_ESTIMATOR"
configCtx["model_path"] = "";
// optional, default values are specified after "="
// paths specified for examples located in <sdk_dir>/bin
configCtx["sdk_path"] = "..";
configCtx["capturer_config_name"] = "common_capturer_uld_fda.xml";
configCtx["config_name"] = "liveness_2d_estimator_v3.xml";
configCtx["facerec_conf_dir"] = sdk_dir + "/conf/facerec";
configCtx["dll_path"] = "facerec.dll"; // for Windows
// or
configCtx["dll_path"] = sdk_dir + "/lib/libfacerec.so"; // for Linux
configCtx = {
"unit_type": "LIVENESS_ESTIMATOR",
"model_path": "",
# optional, default values are specified after ":"
# paths specified for examples located in <sdk_dir>/bin
"sdk_path": "..",
"capturer_config_name": "common_capturer_uld_fda.xml",
"config_name": "liveness_2d_estimator_v3.xml",
"facerec_conf_dir": sdk_path + "/conf/facerec/",
"dll_path": "facerec.dll", # for Windows
# or
"dll_path": sdk_dir + "/lib/libfacerec.so" # for Linux
}
Lists of existing configuration files can be found in the sections:
- Capturer Configuration Files for the "capturer_config_name" key;
- Liveness2DEstimator for the "config_name" key.
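The configuration values above can also be assembled programmatically. Below is a minimal sketch in Python that builds the same v3 config dictionary and picks the platform-specific "dll_path"; the helper name `make_liveness_config` is ours, not part of the SDK API, and the paths are assumptions about a typical install layout.

```python
import os
import platform

def make_liveness_config(sdk_dir: str) -> dict:
    """Build a v3 Liveness Estimator config Context (a sketch; key names
    follow the snippet above, paths are assumed, not SDK-mandated)."""
    if platform.system() == "Windows":
        dll_path = "facerec.dll"
    else:
        dll_path = os.path.join(sdk_dir, "lib", "libfacerec.so")
    return {
        "unit_type": "LIVENESS_ESTIMATOR",
        "model_path": "",
        "sdk_path": sdk_dir,
        "capturer_config_name": "common_capturer_uld_fda.xml",
        "config_name": "liveness_2d_estimator_v3.xml",
        "facerec_conf_dir": os.path.join(sdk_dir, "conf", "facerec"),
        "dll_path": dll_path,
    }
```

The resulting dictionary can then be passed to `service.create_processing_block(...)` as shown in step 1.2.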
1.2 Create a Liveness Estimator Processing Block:
- C++
- Python
- Flutter
pbio::ProcessingBlock livenessEstimator = service->createProcessingBlock(configCtx);
livenessEstimator = service.create_processing_block(configCtx)
ProcessingBlock livenessEstimator = service.createProcessingBlock({
"unit_type": "LIVENESS_ESTIMATOR",
"model_path": "",
// optional, default values are specified after ":"
// paths specified for examples located in <sdk_dir>/bin
"sdk_path": "..",
"capturer_config_name": "common_capturer_uld_fda.xml",
"config_name": "liveness_2d_estimator_v3.xml",
"facerec_conf_dir": sdk_path + "/conf/facerec/",
"dll_path": "facerec.dll", // for Windows
// or
"dll_path": sdk_path + "/lib/libfacerec.so" // for Linux
});
2. Liveness Estimation
2.1 Create a Context container ioData for input-output data using the createContext() method:
- C++
- Python
- Flutter
auto ioData = service->createContext();
ioData = service.create_context({})
Context ioData = service.createContext({
"objects": []
});
2.2 Create a Context container imgCtx with an RGB image following the steps described in Creating a Context container with RGB-image.
# copy an image into the binary format
input_rawimg = image.tobytes()
# put an image into the container
imageCtx = {
"blob": input_rawimg,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [dim for dim in img.shape]
}
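The snippet above can be wrapped in a small self-contained helper. The sketch below uses NumPy to show the exact Context layout expected for an RGB image; the function name `make_image_context` is ours, not an SDK call.

```python
import numpy as np

def make_image_context(image: np.ndarray) -> dict:
    """Wrap an RGB image (H x W x 3, uint8) in the Context layout
    shown above: raw bytes plus dtype/format/shape metadata."""
    assert image.dtype == np.uint8 and image.ndim == 3 and image.shape[2] == 3
    return {
        "blob": image.tobytes(),
        "dtype": "uint8_t",
        "format": "NDARRAY",
        "shape": list(image.shape),
    }

# usage: a black 640x480 RGB frame
imageCtx = make_image_context(np.zeros((480, 640, 3), np.uint8))
```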
2.3 Put the input image into the input-output data container:
- C++
- Python
- Flutter
ioData["image"] = imgCtx;
ioData = {"image": imgCtx}
ioData = service.createContext({
"objects": [],
"image": {
"blob": <массив байт с изображением в RGB формате>,
"dtype": "uint8_t",
"format": "NDARRAY",
"shape": [<высота изображения в пикселях>, <ширина изображения в пикселях>, 3]
}
});
2.4 Call livenessEstimator and pass the Context container ioData that contains an image:
- C++
- Python
- Flutter
livenessEstimator(ioData);
livenessEstimator(ioData)
Context result = livenessEstimator.process(ioData);
For accurate estimation, there must be only one face in the frame, looking at the camera; otherwise the "MULTIPLE_FACE_FRAMED" status will be returned.
If multiple faces are captured, only one of them (order is not guaranteed) will be processed.
The result of calling livenessEstimator() will be appended to the ioData container.
The output data format is a list of objects under the "objects" key. Each object in the list has a "class" key with the value "face".
The "liveness" key contains a Context with 3 elements:
- the "confidence" key contains a number of type double in the range [0,1];
- the "info" key contains a string value that matches one of the pbio::Liveness2DEstimator::Liveness states; it is absent if "value" is "REAL";
- the "value" key contains a string value that matches one of two states: "REAL" or "FAKE".
/*
{
"objects": [{ "bbox": [x1, y2, x2, y2],
"class": "face",
"id": {"type": "long", "minimum": 0},
"liveness": {
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"info": {
"enum": [
"FACE_NOT_FULLY_FRAMED", "MULTIPLE_FACE_FRAMED",
"FACE_TURNED_RIGHT", "FACE_TURNED_LEFT", "FACE_TURNED_UP",
"FACE_TURNED_DOWN", "BAD_IMAGE_LIGHTING", "BAD_IMAGE_NOISE",
"BAD_IMAGE_BLUR", "BAD_IMAGE_FLARE", "NOT_COMPUTED"
]
},
"value": {
"enum": ["REAL", "FAKE"]
}
}
}]
}
*/
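Reading the result out of the container can be sketched as a small Python helper that walks the output schema above. The function name `read_liveness` is ours for illustration; it only assumes the "objects"/"class"/"liveness" layout documented here.

```python
def read_liveness(io_data: dict):
    """Return (value, confidence, info) for the first face object in the
    output Context, or None if no face was processed. "info" is absent
    (returned as None) when "value" is "REAL"."""
    for obj in io_data.get("objects", []):
        if obj.get("class") == "face" and "liveness" in obj:
            lv = obj["liveness"]
            return lv["value"], lv["confidence"], lv.get("info")
    return None

# usage with a result shaped like the schema above
sample = {"objects": [{"class": "face",
                       "liveness": {"confidence": 0.93, "value": "REAL"}}]}
print(read_liveness(sample))  # ("REAL", 0.93, None)
```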
Version 4
1. Create a Liveness Estimator
1.1 To create a Liveness Estimator, follow steps 1-3 described in Creating a Processing Block and specify the values:
"LIVENESS_ESTIMATOR"
for the"unit_type"
key- An empty string
""
for the"model_path"
key
- C++
- Python
configCtx["unit_type"] = "LIVENESS_ESTIMATOR";
configCtx["modification"] = "v4";
// optional, default values are specified after "="
// paths specified for examples located in <sdk_dir>/bin
configCtx["sdk_path"] = "..";
configCtx["facerec_conf_dir"] = sdk_dir + "/conf/facerec";
configCtx["dll_path"] = "facerec.dll"; // for Windows
// or
configCtx["dll_path"] = sdk_dir + "/lib/libfacerec.so"; // for Linux
configCtx = {
"unit_type": "LIVENESS_ESTIMATOR",
"modification": "v4",
# optional, default values are specified after ":"
# paths specified for examples located in <sdk_dir>/bin
"sdk_path": "..",
"facerec_conf_dir": sdk_path + "/conf/facerec/",
"dll_path": "facerec.dll", # for Windows
# or
"dll_path": sdk_dir + "/lib/libfacerec.so" # for Linux
}
1.2 Create a Liveness Estimator Processing Block:
- C++
- Python
- Flutter
pbio::ProcessingBlock livenessEstimator = service->createProcessingBlock(configCtx);
livenessEstimator = service.create_processing_block(configCtx)
ProcessingBlock livenessEstimator = service.createProcessingBlock({
"unit_type": "LIVENESS_ESTIMATOR",
"modification": "v4"
});
2. Liveness Estimation
2.1 Make sure that all the necessary processing blocks have been created and called, namely the ULD detector and the tddfa fitter. For more information about creating a detector, go to the Face, Body and Object Detection page. For more information about creating a fitter processing block, go to the Face Fitter page.
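The v4 pipeline order matters: each block appends its results to the shared Context, and the liveness estimator expects detection and fitting results to already be present. The sketch below illustrates that call order with plain stand-in callables, not the SDK API; `run_pipeline` and the stand-ins are hypothetical names.

```python
def run_pipeline(io_data: dict, detector, fitter, liveness_estimator) -> dict:
    """Conceptual v4 call order: each block mutates io_data in place,
    so the detector must run before the fitter, and the fitter before
    the liveness estimator."""
    detector(io_data)            # fills io_data["objects"] with face objects
    fitter(io_data)              # adds fitting data to each detected face
    liveness_estimator(io_data)  # adds the "liveness" Context to each face
    return io_data
```

With the real SDK, the three callables would be the detector, fitter, and liveness Processing Blocks created from their respective configs.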
2.2 Call livenessEstimator and pass the Context container ioData that contains an image:
- C++
- Python
- Flutter
livenessEstimator(ioData);
livenessEstimator(ioData)
Context result = livenessEstimator.process(ioData);
If multiple faces are captured, only one of them (order is not guaranteed) will be processed.
The result of calling livenessEstimator() will be appended to the ioData container.
The output data format is a list of objects under the "objects" key. Each object in the list has a "class" key with the value "face".
The "liveness" key contains a Context with 2 elements:
- the "confidence" key contains a number of type double in the range [0,1];
- the "value" key contains a string value that matches one of two states: "REAL" or "FAKE".
/*
{
"objects": [{ "liveness": {
"confidence": {"type": "double", "minimum": 0, "maximum": 1},
"value": {
"enum": ["REAL", "FAKE"]
}
}
}]
}
*/
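In practice an application usually combines "value" and "confidence" into a single accept/reject decision. A minimal sketch, assuming an application-chosen threshold (the name `is_real` and the 0.5 default are ours, not SDK values):

```python
def is_real(liveness: dict, threshold: float = 0.5) -> bool:
    """Treat a face as live only when "value" is "REAL" and "confidence"
    clears the (application-chosen, hypothetical) threshold."""
    return liveness["value"] == "REAL" and liveness["confidence"] >= threshold

# usage on a "liveness" Context shaped like the schema above
print(is_real({"value": "REAL", "confidence": 0.9}))  # True
```

Tune the threshold for your own false-accept/false-reject trade-off rather than relying on a fixed default.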