Version: 3.25.2 (latest)

Flutter

Face SDK provides a plugin for Flutter that allows you to implement the following features:

  • Face detection in photos
  • Face tracking in video
  • Active Liveness checking
  • Face verification

The plugin is developed for iOS and Android devices.

note

Flutter Sample with the Face SDK plugin is available in the examples/flutter/demo directory of the Face SDK distribution.

Connecting the Face SDK plugin to a Flutter project

Requirements

  • Flutter 3.27.0 ≤ version ≤ 3.27.3
  • Dart 3.6.0 ≤ version ≤ 3.6.1
  • Android Studio for Android or XCode for iOS
  • Android or iOS device

Plugin connection

  1. To connect Face SDK to a Flutter project, install the "flutter" component using the Face SDK installer or the maintenancetool utility:

    • If Face SDK is not installed, follow the installation instructions Getting Started. The "flutter" component must be selected in the "Selection Components" section.

    • If Face SDK is installed without the "flutter" component (the flutter directory is not present in the Face SDK root directory), use the maintenancetool utility and install the "flutter" component by selecting it in the "Selection Components" section.

  2. Add the plugins to the project dependencies by specifying them in the file <project_dir>/pubspec.yaml:

    • face_sdk_3divi, specifying the path to the plugin directory in the path field

      dependencies:
        flutter:
          sdk: flutter
        face_sdk_3divi:
          path: ../flutter/face_sdk_3divi
  3. Add the libfacerec.so library to the dependencies

    3.a. For Android devices:

    • specify the path to the directory with the libfacerec.so library in the sourceSets block of the build.gradle file (<project_dir>/android/app/build.gradle)

      android {
          sourceSets {
              main {
                  jniLibs.srcDirs = ["${projectDir}/../../assets/lib"]
              }
          }
      }
    • add loading of the native library inside MainActivity.kt (<project_dir>/android/app/src/main/kotlin/<android_app_name>/MainActivity.kt):

      import android.content.pm.ApplicationInfo
      import io.flutter.embedding.android.FlutterActivity
      import io.flutter.embedding.engine.FlutterEngine
      import io.flutter.plugin.common.MethodChannel

      class MainActivity : FlutterActivity() {

          companion object {
              init {
                  // Load the native Face SDK library when the activity class is first used
                  System.loadLibrary("facerec")
              }

              private const val CHANNEL = "samples.flutter.dev/facesdk"
          }

          private fun getNativeLibDir(): String {
              return applicationInfo.nativeLibraryDir
          }

          override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
              super.configureFlutterEngine(flutterEngine)
              MethodChannel(flutterEngine.dartExecutor.binaryMessenger, CHANNEL)
                  .setMethodCallHandler { call, result ->
                      when (call.method) {
                          "getNativeLibDir" -> result.success(getNativeLibDir())
                          else -> result.notImplemented()
                      }
                  }
          }
      }

    3.b. For iOS devices:

    • open ios/Runner.xcworkspace in XCode
    • in the Target Navigator, select "Runner", go to the "General" tab, find the "Frameworks, Libraries, and Embedded Content" section and click "+". In the opened window, choose "Add Other..." -> "Add Files" and select facerec.framework in Finder
    • remove facerec.framework in the "Build Phases" tab, "Link Binary With Libraries" section
  4. Add directories and files from the Face SDK distribution to the application assets:
    • Create the directory <project_dir>/assets (if not present)
    • Copy the lib directory from the flutter directory to <project_dir>/assets
    • Copy the required files from the conf and share directories to <project_dir>/assets/conf and <project_dir>/assets/share
    • Create the directory <project_dir>/assets/license
    • Copy the license file 3divi_face_sdk.lic to the directory <project_dir>/assets/license
  5. Specify the list of directories and files in <project_dir>/pubspec.yaml, for example:

      flutter:
        assets:
          - assets/conf/facerec/
          - assets/license/3divi_face_sdk.lic
          - assets/share/face_quality/
          - assets/share/faceanalysis/
          - assets/share/facedetectors/blf/
          - assets/share/faceattributes/
          - assets/share/fda/
          - assets/share/facerec/recognizers/method12v30/

    note

    Flutter does not copy directories recursively, so each directory containing files must be specified explicitly.

  6. Add the import of the face_sdk_3divi module to the application, as well as the necessary additional modules:

      import 'package:face_sdk_3divi/face_sdk_3divi.dart';
      import 'package:path_provider/path_provider.dart';
      import 'package:flutter/services.dart' show rootBundle;
      import 'dart:io';
      import 'dart:convert';
      import 'dart:typed_data';

Working with the plugin

Working with the plugin begins by initializing the FacerecService, which allows you to create the other Face SDK primitives for face detection, tracking, and comparison.

An example of initializing the FacerecService object in the main()function:

Future<void> main() async {
  runApp(MyApp());
  FacerecService facerecService = await FacerecService.createService();
}

Basic primitives

Context

Working with primitives in Face SDK is based on JSON-like structures.

The Context class is initialized with a unit_type to create a ProcessingBlock and allows overriding its parameters (modification, version, minimum score for detected faces). Additionally, Context serves as a container for the results of invoking the process method on the ProcessingBlock.
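As an illustration of the two roles described above, a Context can be built from a plain Dart map and its values read back with get_value(). This is a minimal sketch: confidence_threshold is a hypothetical override used only for illustration; consult the Face SDK documentation for the real overridable fields of your unit_type and version.

```dart
// Build a Context from a JSON-like Dart map. "confidence_threshold" is a
// hypothetical parameter shown only to illustrate overriding defaults.
Context config = facerecService.createContext({
  "unit_type": "FACE_DETECTOR",
  "confidence_threshold": 0.5,
});

// Values stored in a Context are read back with get_value()
print(config["unit_type"].get_value()); // "FACE_DETECTOR"

config.dispose(); // Context objects must be released explicitly
```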

ProcessingBlock

The ProcessingBlock class is used for image processing.

Detecting faces in images

To detect faces in images, Face SDK uses the ProcessingBlock component with unit_type set to FACE_DETECTOR. To create it, call the FacerecService.createProcessingBlock method on an initialized FacerecService object, assigning the value FACE_DETECTOR to the unit_type field:

ProcessingBlock faceDetector = facerecService.createProcessingBlock({"unit_type": "FACE_DETECTOR"});

To get detections, use the ProcessingBlock.process method, which takes an encoded image:

Uint8List imgBytes = File(filePath).readAsBytesSync(); // reading a file from storage
Context data = facerecService.createContextFromEncodedImage(imgBytes); // creating container with image
faceDetector.process(data); // get detections
Context objects = data["objects"]; // detection results

data.dispose(); // delete Context object

To receive detections from a device's camera, use the CameraController.takePicture method, which saves an image to the device storage. The saved image must therefore be loaded first (and can be deleted afterwards):

XFile file = await cameraController.takePicture(); // take photo
Uint8List imgBytes = File(file.path).readAsBytesSync(); // load photo
Context data = facerecService.createContextFromEncodedImage(imgBytes); // creating container with image
faceDetector.process(data); // get detections
Context objects = data["objects"]; // detection results
File(file.path).delete(); // delete file

data.dispose(); // delete Context object

More information about using the CameraController in Flutter can be found in the camera plugin documentation.
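The SDK examples above assume an initialized cameraController. For reference, a typical setup with the standard camera plugin may look as follows (a sketch; the resolution preset and camera selection are application choices):

```dart
import 'package:camera/camera.dart';

Future<CameraController> initFrontCamera() async {
  // Enumerate the device cameras and pick the front-facing one, if any
  final cameras = await availableCameras();
  final front = cameras.firstWhere(
      (c) => c.lensDirection == CameraLensDirection.front,
      orElse: () => cameras.first);

  // ResolutionPreset.high is an arbitrary choice for this sketch
  final controller = CameraController(front, ResolutionPreset.high);
  await controller.initialize();
  return controller;
}
```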

To cut a face from an image, the cutFaceFromImageBytes function can be used:

Context bbox = objects[0]["bbox"];
double x1 = bbox[0].get_value() * imageWidth; // top left x
double y1 = bbox[1].get_value() * imageHeight; // top left y
double x2 = bbox[2].get_value() * imageWidth; // bottom right x
double y2 = bbox[3].get_value() * imageHeight; // bottom right y

Rectangle rect = Rectangle(x1.toInt(), y1.toInt(), (x2 - x1).toInt(), (y2 - y1).toInt());
Image _cropImg = await cutFaceFromImageBytes(imgBytes, rect);

Face tracking on video sequence and Active Liveness

To track faces and perform an Active Liveness check on a video sequence, the VideoWorker object is used.

Procedure for using the VideoWorker object:

  1. create a VideoWorker object
  2. get frames from the camera (for example, through cameraController.startImageStream), then pass them directly to the VideoWorker via the VideoWorker.addVideoFrame method or save the frames to a variable and call VideoWorker.addVideoFrame (for example, wrapped in a looped StreamBuilder function)
  3. get the processing results from VideoWorker by calling the function VideoWorker.poolTrackResults

1. Creating a VideoWorker object

Use a VideoWorker object to track faces on a video sequence and perform an Active Liveness check.

To create a VideoWorker, call the FacerecService.createVideoWorker method, which takes the VideoWorkerParams structure containing initialization parameters:

List<ActiveLivenessCheckType> checks = [
  ActiveLivenessCheckType.TURN_LEFT,
  ActiveLivenessCheckType.SMILE,
  ActiveLivenessCheckType.TURN_DOWN,
];

VideoWorker videoWorker = facerecService.createVideoWorker(
    VideoWorkerParams()
        .video_worker_config(
            Config("video_worker_fdatracker_blf_fda_front.xml")
                .overrideParameter("base_angle", getBaseAngle(cameraController))
                .overrideParameter("enable_active_liveness", 1)
                .overrideParameter("active_liveness.apply_horizontal_flip", 0))
        .active_liveness_checks_order(checks)
        .streams_count(1));

The set of Active Liveness checks is defined when the active_liveness_checks_order property is initialized, to which a list of actions (the check scenario) is passed, as in the example above.

Available Active Liveness checks:

  • TURN_LEFT
  • SMILE
  • TURN_DOWN
  • TURN_RIGHT
  • TURN_UP
  • BLINK
note

The front camera image on iOS devices is horizontally mirrored; in this case, set the parameter active_liveness.apply_horizontal_flip to "1".
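Since the mirroring affects only the iOS front camera, the flag can be chosen per platform when creating the VideoWorker. A sketch based on the configuration example above:

```dart
import 'dart:io' show Platform;

// Mirror compensation is only needed for the iOS front camera
int horizontalFlip = Platform.isIOS ? 1 : 0;

VideoWorker videoWorker = facerecService.createVideoWorker(
    VideoWorkerParams()
        .video_worker_config(
            Config("video_worker_fdatracker_blf_fda_front.xml")
                .overrideParameter("enable_active_liveness", 1)
                .overrideParameter(
                    "active_liveness.apply_horizontal_flip", horizontalFlip))
        .streams_count(1));
```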

2. Video frame processing in VideoWorker

To process a video sequence, it is necessary to transfer frames to the VideoWorker using the VideoWorker.addVideoFrame method. The VideoWorker accepts frames as an array of RawImageF pixels. Supported color models: RGB, BGR, YUV.

Frames can be retrieved through the ImageStream callback:

Example of calling addVideoFrame

cameraController.startImageStream((CameraImage img) async {
  if (!mounted)
    return;
  int time = DateTime.now().millisecondsSinceEpoch;
  final rawImg = facerecService.createRawImageFromCameraImage(img, getBaseAngle(cameraController));
  videoWorker.addVideoFrame(rawImg, time);
  rawImg.dispose();
});

To let ImageStream and VideoWorker work independently (the call to addVideoFrame should not block the video stream), a StreamBuilder can be used to call the addVideoFrame function asynchronously.

Example of calling addVideoFrame with StreamBuilder

Image stream callback (saving the image and timestamp to global variables):

int _lastImgTimestamp = 0;
CameraImage? _lastImg;

cameraController.startImageStream((CameraImage img) async {
  if (!mounted)
    return;
  int startTime = DateTime.now().millisecondsSinceEpoch;
  setState(() {
    _lastImgTimestamp = startTime;
    _lastImg = img;
  });
});

Asynchronous function for transferring frames in VideoWorker:

Stream<List<dynamic>> addVF(int prev_time) async* {
  final time = _lastImgTimestamp;
  var img = _lastImg;
  if (!mounted || img == null) {
    await Future.delayed(const Duration(milliseconds: 50));
    yield* addVF(time);
    return; // do not fall through with a null image
  }
  final rawImg = facerecService.createRawImageFromCameraImage(img, getBaseAngle(cameraController));
  videoWorker.addVideoFrame(rawImg, time);
  await Future.delayed(const Duration(milliseconds: 50));
  rawImg.dispose();
  yield* addVF(time);
}

Widget (can be combined with any other):

StreamBuilder(
  stream: addVF(0),
  builder: (context, snapshot) { return Text(""); },
),

3. Retrieving tracking results

The VideoWorker.poolTrackResults method is used to get the results of VideoWorker processing. This method returns a structure with data on the currently tracked faces.

final callbackData = videoWorker.poolTrackResults();
List<RawSample> rawSamples = callbackData.tracking_callback_data.samples;

Active Liveness status is contained in TrackingData.tracking_callback_data:

List<ActiveLivenessStatus> activeLiveness = callbackData.tracking_callback_data.samples_active_liveness_status;

Example of implementing Active Liveness checks

Definition of Active Liveness status:

bool livenessFailed = false;
bool livenessPassed = false;

String activeLivenessStatusParse(ActiveLivenessStatus status, Angles angles) {
  String alAction = '';
  if (status.verdict == ActiveLiveness.WAITING_FACE_ALIGN) {
    alAction = 'Please, look at the camera';
    if (angles.yaw > 10)
      alAction += ' (turn face →)';
    else if (angles.yaw < -10)
      alAction += ' (turn face ←)';
    else if (angles.pitch > 10)
      alAction += ' (turn face ↓)';
    else if (angles.pitch < -10)
      alAction += ' (turn face ↑)';
  } else if (status.verdict == ActiveLiveness.CHECK_FAIL) {
    alAction = 'Active liveness check FAILED';
    livenessFailed = true;
    _videoWorker.resetTrackerOnStream();
  } else if (status.verdict == ActiveLiveness.ALL_CHECKS_PASSED) {
    alAction = 'Active liveness check PASSED';
    livenessPassed = true;
    _videoWorker.resetTrackerOnStream(); // to get the best shot of the face
  } else if (status.verdict == ActiveLiveness.IN_PROGRESS) {
    if (status.check_type == ActiveLivenessCheckType.BLINK)
      alAction = 'Blink';
    else if (status.check_type == ActiveLivenessCheckType.SMILE)
      alAction = 'Smile';
    else if (status.check_type == ActiveLivenessCheckType.TURN_DOWN)
      alAction = 'Turn face down';
    else if (status.check_type == ActiveLivenessCheckType.TURN_LEFT) {
      // The front camera image is mirrored on iOS, so the prompts are swapped
      if (Platform.isIOS)
        alAction = 'Turn face right';
      else
        alAction = 'Turn face left';
    } else if (status.check_type == ActiveLivenessCheckType.TURN_RIGHT) {
      if (Platform.isIOS)
        alAction = 'Turn face left';
      else
        alAction = 'Turn face right';
    } else if (status.check_type == ActiveLivenessCheckType.TURN_UP)
      alAction = 'Turn face up';
  } else if (status.verdict == ActiveLiveness.NOT_COMPUTED)
    alAction = 'Active liveness disabled';
  return alAction;
}

Retrieving tracking results:

String activeLivenessAction = '';
int livenessProgress = 0;

Stream<String> pool() async* {
  if (!mounted) {
    await Future.delayed(const Duration(milliseconds: 50));
    yield* pool();
    return;
  }
  final callbackData = _videoWorker.poolTrackResults();
  final rawSamples = callbackData.tracking_callback_data.samples;
  int progress = livenessProgress;
  if (!livenessFailed && !livenessPassed) {
    if (callbackData.tracking_callback_data.samples.length == 1) {
      ActiveLivenessStatus status = callbackData.tracking_callback_data.samples_active_liveness_status[0];
      Angles angles = rawSamples[0].getAngles();
      activeLivenessAction = activeLivenessStatusParse(status, angles);
      progress = (status.progress_level * 100).toInt();
    } else if (callbackData.tracking_callback_data.samples.length > 1) {
      progress = 0;
      activeLivenessAction = "Leave one face in the frame";
    } else {
      progress = 0;
      activeLivenessAction = "";
    }
  }
  rawSamples.forEach((element) => element.dispose());
  setState(() {
    livenessProgress = progress;
  });
  await Future.delayed(const Duration(milliseconds: 50));
  yield* pool();
}

Widget (can be combined with any other):

StreamBuilder(
  stream: pool(),
  builder: (context, snapshot) {
    return Transform.translate(
        offset: Offset(0, 100),
        child: Text(activeLivenessAction,
            style: TextStyle(fontSize: 20, backgroundColor: Colors.black)));
  },
),

4. Retrieving the best shot after completing Active Liveness

To retrieve the best shot of the face, call the VideoWorker.resetTrackerOnStream method after the Active Liveness checks have been passed successfully. The method resets the tracker state and activates the LostTrackingData callback in VideoWorker. The LostTrackingData callback returns the best face shot, which can be used to create a face template (Template).

final callbackData = videoWorker.poolTrackResults();
if (callbackData.tracking_lost_callback_data.best_quality_sample != null) {
  final best_shot = callbackData.tracking_lost_callback_data.best_quality_sample;
  final face_template_vw = recognizer.processing(best_shot);
}

Further, face_template_vw can be used for comparison with other templates to obtain a similarity score.

Example of obtaining a face template for its subsequent comparison with the NID

After calling the videoWorker.poolTrackResults function (example given above), the best_quality_sample field will be set. You can use it to get the face template:

Template? face_template_vw;

Stream<String> pool() async* {
  // .....
  // pooling results (example given above)
  final best_quality_sample = callbackData.tracking_lost_callback_data.best_quality_sample;
  if (face_template_vw == null && livenessPassed && best_quality_sample != null) {
    face_template_vw = recognizer.processing(best_quality_sample!);
  }
  setState(() {
    if (livenessFailed) {
      // liveness failed
    } else if (livenessPassed && face_template_vw != null) {
      // liveness passed, return face_template_vw for face comparison
    }
  });
}

To get a face photo, save the best CameraImage and update it whenever a detection of higher quality is obtained:

double best_quality = -1e10; // start below any real quality value
CameraImage? bestImage;
Rectangle? bestRect;

Stream<String> pool() async* {
  // ... pool and process tracking results ...
  if (callbackData.tracking_callback_data.samples.length == 1) {
    final sample = callbackData.tracking_callback_data.samples[0];
    if (best_quality < callbackData.tracking_callback_data.samples_quality[0]) {
      best_quality = callbackData.tracking_callback_data.samples_quality[0];
      bestImage = _lastImg;
      bestRect = sample.getRectangle();
    }
    // ....
  }
}

The cutFaceFromCameraImage method can be used to cut the face from an image:

Image cut_face_img = cutFaceFromCameraImage(bestImage, bestRect);

note

An example of a widget that uses the VideoWorker object and checks Active Liveness with the front camera can be found in examples/flutter/demo/lib/video.dart.

Verification of faces

To build face templates, the ProcessingBlock with unit_type set to FACE_TEMPLATE_EXTRACTOR is used. This object is created by calling the FacerecService.createProcessingBlock method, which requires a Context argument to be passed:

ProcessingBlock faceTemplateExtractor = facerecService.createProcessingBlock({"unit_type": "FACE_TEMPLATE_EXTRACTOR", "modification": "30m"});

The order of performing operations when comparing faces:

  • Face detection
  • Building keypoints
  • Building a face template
  • Comparison of the face template with other templates

An example of the implementation of comparing two faces (it is assumed that all the necessary Face SDK objects have been created and each image has one face):

// Getting the template for the first face
Uint8List imgB1 = File(filePath).readAsBytesSync();
Context data1 = facerecService.createContextFromEncodedImage(imgB1);
faceDetector.process(data1);
faceFitter.process(data1); // unit_type FACE_FITTER
faceTemplateExtractor.process(data1);
ContextTemplate templ1 = data1["objects"][0]["face_template"]["template"].get_value();

// Getting the template for the second face
Uint8List imgB2 = File(filePath).readAsBytesSync();
Context data2 = facerecService.createContextFromEncodedImage(imgB2);
faceDetector.process(data2);
faceFitter.process(data2); // unit_type FACE_FITTER
faceTemplateExtractor.process(data2);
ContextTemplate templ2 = data2["objects"][0]["face_template"]["template"].get_value();

// Comparing faces
ProcessingBlock verificationModule = facerecService.createProcessingBlock({"unit_type": "VERIFICATION_MODULE", "modification": "30m"});
Context verificationData = facerecService.createContext({"template1": templ1, "template2": templ2});

verificationModule.process(verificationData);

Context result = verificationData["result"];

print("Score: ${result["score"].get_value()}");

data1.dispose();
data2.dispose();
verificationData.dispose();

Comparison of the face on the document and the person who passed the Active Liveness check

To compare the face on a document with the person who passed the Active Liveness check, you need to build templates of both faces.

  • Face detection on the document and construction of the face_template_idcard template:

    XFile file = await cameraController.takePicture(); // take photo
    Uint8List img_bytes = File(file.path).readAsBytesSync(); // load photo
    List<RawSample> detections = capturer.capture(img_bytes); // get detections
    File(file.path).delete(); // delete file
    Template face_template_idcard = recognizer.processing(detections[0]); // only one face is expected in the photo
  • Getting the face template face_template_vw from the object VideoWorker after passing the Active Liveness check (example given above)

  • Comparison of templates face_template_idcard and face_template_vw using the method Recognizer.verifyMatch:

MatchResult match = recognizer.verifyMatch(face_template_idcard, face_template_vw);
double similarity_score = match.score;
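The score by itself does not constitute a decision: an application-level threshold must be chosen. A minimal sketch (the 0.85 value is purely illustrative, not a Face SDK recommendation; tune it to your required FAR/FRR balance):

```dart
// Illustrative threshold -- choose it according to your FAR/FRR requirements
const double kSimilarityThreshold = 0.85;

bool isSamePerson(double score) => score >= kSimilarityThreshold;

void main() {
  print(isSamePerson(0.92)); // true
  print(isSamePerson(0.40)); // false
}
```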