Version: 3.18.2

Liveness Estimation

In this section you'll learn how to integrate the Liveness Estimator into your C++ or Python project.

2D RGB Liveness Estimation (C++/Python)

1. Creating a Liveness Estimator

1.1 To create a Liveness Estimator, follow steps 1-3 described in Creating a Processing Block and specify the following values:

  • "LIVENESS_ESTIMATOR" for the "unit_type" key;
  • An empty string "" for the "model_path" key.
configCtx["unit_type"] = "LIVENESS_ESTIMATOR";
configCtx["model_path"] = "";

// optional, default values are specified after "="
// paths specified for examples located in <sdk_dir>/bin
configCtx["sdk_path"] = "..";
configCtx["capturer_config_name"] = "common_capturer_uld_fda.xml";
configCtx["config_name"] = "liveness_2d_estimator_v3.xml";
configCtx["facerec_conf_dir"] = sdk_dir + "/conf/facerec";
configCtx["dll_path"] = "facerec.dll"; // for Windows
// or
configCtx["dll_path"] = sdk_dir + "/lib/libfacerec.so"; // for Linux

Lists of available configuration files can be found in the corresponding sections of the documentation.

1.2 Create a Liveness Estimator Processing Block:

pbio::ProcessingBlock livenessEstimator = service->createProcessingBlock(configCtx);

2. Liveness Estimation

2.1 Create a Context container ioData for input-output data using the createContext() method:

auto ioData = service->createContext();

2.2 Create a Context container imgCtx with an RGB image by following the steps described in Creating a Context container with RGB-image.

# copy the image into binary format
input_rawimg = image.tobytes()
# put the image into the container
imgCtx = {
    "blob": input_rawimg,
    "dtype": "uint8_t",
    "format": "NDARRAY",
    "shape": [dim for dim in image.shape]
}
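
For C++, a similar container can be filled manually. The snippet below is a minimal sketch, not taken verbatim from the SDK documentation: it assumes the image is loaded into an OpenCV cv::Mat and that pbio::Context provides setDataPtr() and push_back() accessors (verify these against the headers of your SDK version):

// a minimal sketch: cv::Mat and the setDataPtr()/push_back() Context
// accessors are assumptions; verify against your SDK headers
cv::Mat image = cv::imread(imagePath); // imagePath is a hypothetical variable

auto imgCtx = service->createContext();
// attach the raw pixel buffer under the "blob" key
imgCtx["blob"].setDataPtr(image.data, static_cast<int>(image.total() * image.elemSize()));
imgCtx["dtype"] = "uint8_t";
imgCtx["format"] = "NDARRAY";
// shape is [height, width, channels], matching the Python example above
imgCtx["shape"].push_back(static_cast<int64_t>(image.rows));
imgCtx["shape"].push_back(static_cast<int64_t>(image.cols));
imgCtx["shape"].push_back(static_cast<int64_t>(image.channels()));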

2.3 Put the input image into the input-output data container:

ioData["image"] = imgCtx;

2.4 Call livenessEstimator and pass the Context container ioData that contains the image:

livenessEstimator(ioData);

Accurate estimation requires that only one person's face is in the frame, looking at the camera; otherwise the "MULTIPLE_FACE_FRAMED" status will be returned.

If multiple faces are captured, only one of them (the order is not guaranteed) will be processed.

The result of calling livenessEstimator() is appended to the ioData container. The output data is stored as a list of objects under the "objects" key. Each object in the list has a "class" key with the value "face".

The "liveness" key contains a Context with 3 elements:

  • "confidence" key contains a number of type double in the range of [0,1]
  • "info" key contains a value of type string that matches one of the pbio::Liveness2DEstimator::Liveness state. It doesn't exist if "value" is "REAL"
  • "value" key contains a value of type string that matches one of two states: "REAL" or "FAKE"
/*
{
    "objects": [{
        "bbox": [x1, y1, x2, y2],
        "class": "face",
        "id": {"type": "long", "minimum": 0},
        "liveness": {
            "confidence": {"type": "double", "minimum": 0, "maximum": 1},
            "info": {
                "enum": [
                    "FACE_NOT_FULLY_FRAMED", "MULTIPLE_FACE_FRAMED",
                    "FACE_TURNED_RIGHT", "FACE_TURNED_LEFT", "FACE_TURNED_UP",
                    "FACE_TURNED_DOWN", "BAD_IMAGE_LIGHTING", "BAD_IMAGE_NOISE",
                    "BAD_IMAGE_BLUR", "BAD_IMAGE_FLARE", "NOT_COMPUTED"
                ]
            },
            "value": {
                "enum": ["REAL", "FAKE"]
            }
        }
    }]
}
*/
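
Once the call returns, the liveness result can be read back from ioData. The snippet below is a minimal sketch, assuming pbio::Context exposes getDouble(), getString(), and contains() accessors (verify these against your SDK version):

// a minimal sketch; the getDouble()/getString()/contains() accessors
// are assumptions, verify against your SDK headers
auto liveness = ioData["objects"][0]["liveness"];
// read the verdict and its confidence
double confidence = liveness["confidence"].getDouble(); // in [0,1]
std::string value = liveness["value"].getString();      // "REAL" or "FAKE"
std::cout << "liveness: " << value << " (confidence: " << confidence << ")" << std::endl;

// "info" is absent when "value" is "REAL", so check for it first
if (liveness.contains("info"))
    std::cout << "info: " << liveness["info"].getString() << std::endl;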

3. GPU Acceleration

The Liveness Estimator doesn't support GPU acceleration.