Version: 1.16.0 (latest)

Web-component

Overview

The web-component is a key part of the BAF system: the user interface used to collect data about the user and their device, perform biometric checks, and send data to the server. It is integrated into the existing system on the interface side and serves as an additional layer of biometric verification for the user.

Install the web-component

The web-component is supplied as a tdvc-face-onboarding TGZ archive. It also requires the tdvc library, which is supplied separately as an archive.

  1. Move the archives to the root folder of your project and add the following lines to your package.json in the dependencies section:

    "@tdvc/face-onboarding": "file:tdvc-face-onboarding-{version}.tgz"

    The archive version may vary. An example of the final package.json file:

    "dependencies": {
    "@tdvc/face-onboarding": "file:tdvc-face-onboarding-1.0.0.tgz"
    }
  2. Run the `npm install` command, which will install the library in your project.

  3. For the web-component to work correctly, you need to add a number of files to the directory where your project's static resources are stored, usually the public directory. After installing the package, move the images and networks folders from /node_modules/@tdvc/face-onboarding/ and frame_handler_worker.js from /node_modules/@tdvc/face-onboarding/dist to the public directory.
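A minimal sketch of step 3 as a Node.js script; the destination paths assume your static directory is public, so adjust them to your project layout:

// copy-assets.ts — hedged example; run after `npm install`.
import { cpSync } from 'node:fs';

cpSync('node_modules/@tdvc/face-onboarding/images', 'public/images', { recursive: true });
cpSync('node_modules/@tdvc/face-onboarding/networks', 'public/networks', { recursive: true });
cpSync('node_modules/@tdvc/face-onboarding/dist/frame_handler_worker.js', 'public/frame_handler_worker.js');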

Import and use the web-component

  1. Import the library and the styles for it.

  2. Create the web-component integration on the server.

  3. Define the configuration of the web-component.

  4. Launch the project.

Initialization example:

import tdvc, {
  ComponentSettingsFromClient,
  TDVAthorizationOnboarding,
  TDVRegistrationOnboarding,
} from '@tdvc/face-onboarding';

let lib: TDVRegistrationOnboarding | TDVAthorizationOnboarding;

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
};

function run() {
  if (window.location.pathname === '/registration') {
    lib = new tdvc.Register(config);
  } else {
    lib = new tdvc.Authorization(config);
  }
}

window.onload = async () => {
  run();
};

window.onbeforeunload = () => {
  lib?.destroy();
};
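Step 1 also mentions importing the component's styles. A minimal sketch, assuming a bundler that resolves CSS imports and the stylesheet path named in the Interface styling section below:

// Hedged: adjust the path if your version ships the stylesheet elsewhere.
import '@tdvc/face-onboarding/dist/css/style.css';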

How it works

The behavior of the web-component is determined by its configuration settings and changes accordingly. The complete algorithm looks like this:

  1. The web-component is initialized

    The web-component receives configuration both from the front-end of the project in which it is embedded and from the BAF server via its API. Once the configuration is received from the server, the two configurations are merged into one.

    If the same parameter is defined in both the client and server configurations, the value from the client configuration takes precedence (see the sketch after this list).

    After merging the configurations, the web-component initializes the necessary services for operation, including the face detector, video recording modules, and other required components.

  2. The web-component determines if the user exists in the BAF system and checks their current status

    The user is identified using a unique user ID, which can either be transmitted from an external system or defined directly within the web-component via a form where the user enters their information. Once identified, the user’s status is checked in the BAF system. If the status is valid, the user is allowed to proceed; otherwise, the web-component displays an error message indicating the invalid status and blocks further access.

    If the unique user ID is provided through the configuration, the form will be skipped, and the status check will be performed immediately.

  3. The web-component gets access to the webcam of the user's device

    The web-component requests access to the user’s webcam. If a device is found and access is granted, it displays the video stream; otherwise, it shows an error message. If multiple devices are available, the user is given the option to select which one to use.

  4. The web-component generates a device fingerprint

    The web-component collects data about the device and the environment to generate a unique fingerprint of the device. This fingerprint is used to increase the level of security when using BAF. The information is collected using the Browser API and includes device data, browser data, geolocation (collected in two ways, depending on the available Browser API), Internet connection data, and more.

  5. "Motion Control" biometric check

    The web-component generates a series of commands for the user to perform in front of the camera. These commands include: turning the head to the left, turning the head to the right, raising the head, moving closer to the camera, and moving away from the camera.

    During this check, frames containing the user’s face are captured. These frames are then used for additional server-side checks, such as facial liveness detection, image quality assessment, and others.

  6. Biometric verification of the user's face

    This is an auxiliary stage that is activated when the "Motion Control" biometric check is disabled.

    The web-component captures the user’s head position and extracts frames containing the user’s face, which are then used for additional server-side checks, such as facial liveness, image quality, and others.

  7. Validation of the collected data and the results of biometric checks

    The web-component generates a fingerprint of the user's device, sends all the collected data to the server, and receives the verification result for the user's path in response.
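To illustrate the configuration merge described in step 1, a minimal sketch; the serverConfig and clientConfig names are illustrative only and are not part of the component's API:

const serverConfig = { fingerprintWaitTime: 30_000, language: 'en' };
const clientConfig = { language: 'ru' };

// Spread order gives client values precedence over server values:
// merged.language === 'ru', merged.fingerprintWaitTime === 30_000.
const merged = { ...serverConfig, ...clientConfig };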

Web-component settings

General settings

  • The mountElement field is used to specify the ID of the HTML element in which the web-component will be embedded. When setting the value, make sure that an element with this identifier exists in the DOM; otherwise initialization will be interrupted and an error will be generated, which can be viewed in the browser console.

  • The integrationId field is used to interact with the server in order to receive settings and determine which integration the data comes from. The value passed must be a valid UUID4 string, taken from the integrations section of the BAF dashboard.

  • The baseUrl field is used to specify the URL of the BAF API. If you pass the value "/", requests will be sent to the host on which the web-component is deployed. The value must be a valid URL.

Example of config with web-component's general settings

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
};

Face detector settings

  • The networksPath field is used to determine the path to the resources that are needed to initialize the face detector. The default value is '/networks/'.

  • The faceModelSettings field is used to define face detector settings. It contains the modelEnabled, timeToStartRecord, and angleСalculation settings.

    • The modelEnabled field is used to turn the face detector on or off. If the detector is turned off, all processes associated with detection will be turned off. For example, the Motion Control check will be skipped, and a static face position will be used instead of dynamically determining the position of the face.

    • The timeToStartRecord field is used when the detector is off to give the user time to take the required position in the frame before starting biometric checks. Specifies the time in milliseconds.

    • The angleСalculation field is used to configure the calculation of the face rotation angles. It contains the angles setting.

      • The angles field is used to define the boundary values for the face position. It contains the left, right, and up settings.

        • The left field defines the rotation angle at which the component assumes the user's head is turned to the left.

        • The right field defines the rotation angle at which the component assumes the user's head is turned to the right.

        • The up field defines the rotation angle at which the component assumes the user's head is raised.

Example of config with face detector settings

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  networksPath: '/other_networks_location/',
  faceModelSettings: {
    modelEnabled: true,
    timeToStartRecord: 10_000,
    angleСalculation: {
      angles: {
        left: 25,
        right: 25,
        up: 25,
      },
    },
  },
};

Applicant settings

  • The applicantId field is used to determine the applicant's status without having to enter the applicant's data. It must contain a valid UUID4 string.

  • The applicantFields field determines what data must be entered to determine the applicant's ID and status. If applicantId is explicitly specified in the configuration, data entry will be skipped and the status will be determined by the passed UUID. It may contain the fields firstName, lastName, phone, email, and referenceId.

Each field contains the enabled and primary settings. At least one of these fields must be enabled, and exactly one field must be primary.

  • The enabled field is used to enable or disable the display of a field in the applicant's data entry form.

  • The primary field is used to determine the key field for which the applicant will be searched in the database.

Example applicantId setting

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  applicantId: '08ee8a87-ff47-4346-8568-ad7c3de62d35',
};

Example applicantFields setting

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  applicantFields: {
    email: {
      enabled: true,
      primary: true,
    },
    phone: {
      enabled: true,
    },
    firstName: {
      enabled: false,
    },
    lastName: {
      enabled: false,
    },
    referenceId: {
      enabled: false,
    },
  },
};

Camera settings

  • The cameraSettings field is used to define camera settings and contains cameraResolution, cameraId, autoSubmit and permissionInBrowserTimeout settings.

    • The cameraResolution field is used to adjust the camera resolution. The higher the resolution, the more device resources will be used when processing frames. The setting takes the values "fhd" for Full HD, "hd" for HD image, and "sd" for SD image.

    • The cameraId field is used to specify the ID of a specific camera to be used. It may be useful if the camera has multiple operating modes and you need to use a specific mode, or if the device has multiple cameras and you need to use a specific one. A sketch for discovering camera IDs follows this list.

    • The autoSubmit field is used to skip camera selection if the device has more than one camera. Note that if true, the web-component will use the first available camera, and the order of the cameras may change.

    • The permissionInBrowserTimeout field is used to determine how long to wait for confirmation of camera access. The default value is 30,000 milliseconds. If the user has not confirmed access to the camera when the set time expires, a 'Not allowed access to camera' error will be generated. If set to 0, there is no time limit on confirmation.
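A minimal sketch for discovering camera IDs with the standard Media Devices API; this enumeration is not part of the web-component, and device labels are typically populated only after camera permission has been granted:

// List available video inputs; pass the chosen deviceId as cameraSettings.cameraId.
async function listCameras(): Promise<void> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  for (const device of devices) {
    if (device.kind === 'videoinput') {
      console.log(device.deviceId, device.label);
    }
  }
}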

Example config with cameraSettings

import { ComponentSettingsFromClient, CameraResolutions } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  cameraSettings: {
    autoSubmit: true,
    cameraResolution: CameraResolutions.HD,
    permissionInBrowserTimeout: 10_000,
  },
};

Example config with cameraId setting

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  cameraSettings: {
    cameraId: '3eef5faa7a2f81c50e5fc30c2362bc4be0d208c86a8cdb0642f2194cc25492ac',
  },
};

Device fingerprint settings

  • The fingerprintWaitTime field is used to determine the maximum waiting time when collecting device data. The time is specified in milliseconds. Keep in mind that reducing the waiting time may lower the accuracy of the collected data.

Example of config with device fingerprint generation settings

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  fingerprintWaitTime: 30_000,
};

Motion Control settings

  • The motionControl field is used to define the settings of Motion Control and contains the enabled, attemptsCount, faceBorder, imagesHints, and description settings.

    • The enabled field is used to enable or disable Motion Control. The default value is true.

    • The attemptsCount field is used to set the allowed number of Motion Control attempts. The default value is 3.

    • The faceBorder field contains settings for determining the position of the user's face during the check. The face frame can be calculated based on face detection (dynamic mode) or based on the video stream resolution (static mode). The parameter contains the allowableAccuracyError, faceWidthCoefficients, and autodetected settings.

      • The allowableAccuracyError field is used to calculate the allowable size error between the detected face and the face border, and contains the x and y settings. The values are percentages from 1 to 100, and the recommended ratio is 2/3. By default, x is 20 and y is 30.

      • The faceWidthCoefficients field is used to calculate the size of the face frame based on the width of the maximum camera resolution: the width of the face frame is calculated as the camera resolution width divided by 100 and multiplied by the coefficient corresponding to the camera resolution, and the frame height is calculated as the width of the face frame multiplied by 3/2 to maintain a 2/3 ratio (a worked example follows this list). The parameter contains the fullHd, hd, and sd settings, which contain the coefficient for the specific resolution as a percentage. By default, fullHd is 20, hd is 24, and sd is 33. Used for static mode only.

      • The autodetected field is used to customize the process of determining the initial position of the face using face detection. The parameter contains the enabled, frameCheckLimit, availableDeviation, framePadding, and faceSize settings.

        • The enabled field is used to switch the face frame mode. If true, the frame is calculated based on face detection; otherwise it is based on the camera resolution. The default value is true.

        • The frameCheckLimit field sets the number of consecutive frames on which the face must be present in a certain position, taking the allowable error into account, in order to fix the initial position of the face before the biometric checks. Note that the check may take a different amount of time on different devices, depending on their processing power. The default value is 60.

        • The availableDeviation field determines the allowable error, used when calculating the dynamic face frame, by which the position of the currently detected face may deviate from the estimated initial position of the face. The default value is 20.

        • The framePadding field determines the distance from the frame boundaries at which face detection resets the process of determining the starting position. It is needed to exclude situations where the starting position is at the edge of the frame or outside the frame. It contains the horizontal and vertical settings. By default, horizontal is 10 and vertical is 10.

        • The faceSize field determines the minimum and maximum allowable dimensions (in pixels) of the detected face. It is needed to control the distance of the face from the camera and avoid situations where the face is too small or too large. It contains the min and max settings, each containing width and height settings. By default, min is { width: 120, height: 140 } and max is { width: 360, height: 520 }.

    • The imagesHints field is used to configure GIF hints for Motion Control actions. It contains the enabled and resourcesPath settings.

      • The enabled field is used to enable or disable GIF hints for Motion Control actions. The default value is false.

      • The resourcesPath field is used to determine the path to the folder containing the GIF images. The folder must contain images named "left", "right", "up", "center", "farther" and "closer", with the gif extension. The default value is "/images/motion_control_gif_hint/".

    • The description field is used to adjust the display of the block with the Motion Control check description. It contains the enabled and autoSubmit settings.

      • The enabled field is used to enable or disable the Motion Control description. The default value is true.

      • The autoSubmit field is used to configure an automatic transition to the next step after a set period of time. It contains the enabled and timer settings.

        • The enabled field is used to enable or disable the automatic transition after a set period of time. The default value is false.

        • The timer field is used to set the wait time in milliseconds after which the transition will be executed if the user does not make the transition themselves. The value must be greater than 0. The default value is 30,000.
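A worked example of the static-mode calculation described for faceWidthCoefficients above, assuming a camera 1920 px wide and the default fullHd coefficient of 20:

// Static mode: face frame size for a 1920 px wide camera with fullHd = 20.
const frameWidth = (1920 / 100) * 20;     // 384 px
const frameHeight = (frameWidth * 3) / 2; // 576 px, keeping the 2/3 width-to-height ratio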

Example config with enabled and attemptsCount settings for Motion Control

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  motionControl: {
    enabled: true,
    attemptsCount: 3,
  },
};

Example config with faceBorder setting for static face border

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  motionControl: {
    faceBorder: {
      faceWidthCoefficients: {
        fullHd: 20,
        hd: 24,
        sd: 33,
      },
      allowableAccuracyError: {
        x: 20,
        y: 30,
      },
      autodetected: {
        enabled: false,
      },
    },
  },
};

Example config with faceBorder setting for dynamic face border

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  motionControl: {
    faceBorder: {
      autodetected: {
        enabled: true,
        frameCheckLimit: 60,
        availableDeviation: 20,
        framePadding: {
          horizontal: 10,
          vertical: 10,
        },
        faceSize: {
          min: {
            width: 120,
            height: 140,
          },
          max: {
            width: 360,
            height: 520,
          },
        },
      },
    },
  },
};

Example config with imagesHints settings

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  motionControl: {
    imagesHints: {
      enabled: true,
      resourcesPath: '/path_to_images',
    },
  },
};

Example config with description settings

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  motionControl: {
    description: {
      enabled: true,
      autoSubmit: {
        enabled: true,
        timer: 30_000,
      },
    },
  },
};

Biometric verification of the user's face settings

  • The faceBestshotSettings field is used to define the settings of the biometric verification of the user's face. The parameter contains the faceBorder setting.

    • The faceBorder field contains settings for determining the position of the user's face during the check. The face frame can be calculated based on face detection (dynamic mode) or based on the video stream resolution (static mode). The parameter contains the allowableAccuracyError, faceWidthCoefficients, and autodetected settings.

      • The allowableAccuracyError field is used to calculate the allowable size error between the detected face and the face border. It contains the x and y settings. The values are percentages from 1 to 100, and the recommended ratio is 2/3. By default, x is 20 and y is 30.

      • The faceWidthCoefficients field is used to calculate the size of the face border based on the width of the maximum camera resolution: the width of the face border is calculated as the camera resolution width divided by 100 and multiplied by the coefficient corresponding to the camera resolution, and the frame height is calculated as the width of the face border multiplied by 3/2 to maintain a 2/3 ratio. It contains the fullHd, hd, and sd settings, which contain the coefficient for the specific resolution as a percentage. By default, fullHd is 20, hd is 24, and sd is 33. Used for static mode only.

      • The autodetected field is used to customize the process of determining the initial position of the face using face detection. The parameter contains the enabled, frameCheckLimit, availableDeviation, framePadding, and faceSize settings.

        • The enabled field is used to switch the face frame mode. If true, the frame is calculated based on face detection; otherwise it is based on the camera resolution. The default value is true.

        • The frameCheckLimit field sets the number of consecutive frames on which the face must be present in a certain position, taking the allowable error into account, in order to fix the initial position of the face before the biometric checks. Note that the check may take a different amount of time on different devices, depending on their processing power. The default value is 60.

        • The availableDeviation field determines the allowable error, used when calculating the dynamic face frame, by which the position of the currently detected face may deviate from the estimated initial position of the face. The default value is 20.

        • The framePadding field determines the distance from the frame boundaries at which face detection resets the process of determining the starting position. It is needed to exclude situations where the starting position is at the edge of the frame or outside the frame. It contains the horizontal and vertical settings. By default, horizontal is 10 and vertical is 10.

        • The faceSize field determines the minimum and maximum allowable dimensions (in pixels) of the detected face. It is needed to control the distance of the face from the camera and avoid situations where the face is too small or too large. It contains the min and max settings, each containing width and height settings. By default, min is { width: 120, height: 140 } and max is { width: 360, height: 520 }.

Example config with faceBorder setting for static face border

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  faceBestshotSettings: {
    faceBorder: {
      faceWidthCoefficients: {
        fullHd: 20,
        hd: 24,
        sd: 33,
      },
      allowableAccuracyError: {
        x: 20,
        y: 30,
      },
      autodetected: {
        enabled: false,
      },
    },
  },
};

Example config with faceBorder setting for dynamic face border

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  faceBestshotSettings: {
    faceBorder: {
      autodetected: {
        enabled: true,
        frameCheckLimit: 60,
        availableDeviation: 20,
        framePadding: {
          horizontal: 10,
          vertical: 10,
        },
        faceSize: {
          min: {
            width: 120,
            height: 140,
          },
          max: {
            width: 360,
            height: 520,
          },
        },
      },
    },
  },
};

Localization

By default, the web-component supports two localizations: English (en) and Russian (ru), and the default interface language is English. The localization of the component can be changed via the configuration when the web-component is initialized. To do this, specify the language and locales fields in the config.

The language field is used to determine the interface language and takes a string value. If no value is specified, or there are no locales for the specified value, the value "en" will be used.

The locales field is an object used for changing the default UI texts. When defining custom locales, keep in mind that the component will only use the passed object, so it must specify all the necessary locales for all the required languages. For a more precise understanding, see the examples below.
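For example, switching the interface to the built-in Russian localization requires only the language field:

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  language: 'ru', // use the built-in Russian locales
};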

Actual locale object for English localization

const en = {
  PreparingEnvironment: 'Preparing environment',
  MessageCode: 'Message code: ',
  SomeError: 'An error occurred, please try again later',

  Mode: {
    Authorization: 'Authorization',
    Registration: 'Registration',
  },

  ErrorScreen: {
    TryAgainButton: 'Try again',
  },

  Stages: {
    Initialization: {
      IdentifyApplicantStatus: {
        FormFields: {
          Labels: {
            FirstName: 'First name',
            LastName: 'Last name',
            Phone: 'Phone',
            Email: 'Email',
            ReferenceId: 'ReferenceId',
          },
          Errors: {
            InvalidEmail: 'Invalid email format',
            MaxLengthField: 'Maximum field length is 150 characters',
            IsRequired: 'This field is required',
            WrongPhone: 'Wrong phone number',
          },
        },
        SubmitButton: {
          Authorization: 'Continue',
          Registration: 'Continue',
        },
      },

      SelectCamera: {
        TextHints: {
          CheckingWebcamOperation: 'Check the webcam for proper operation and image quality',
        },
        ContinueButton: 'Continue',
        BackButton: 'Back',
      },

      Description: {
        MotionControl: {
          Heading: 'Checking Motion Control',
          Text: 'To pass the inspection successfully, you need to perform several actions. First, you need to determine the initial position of the face. To do this, fix the face in a position that is convenient for you and so that the mask is displayed for a few seconds while the counter is filling up under the player. After determining the starting position, you need to perform a number of actions in the order generated by the system. Actions are commands: turn your head to the right/left, raise your head, approach/move away. The action is considered completed when the frame around the face changes its color.',
        },
      },

      Errors: {
        ApplicantIdWithApplicationFieldsError:
          'Configuration error Only one of the two configuration fields must be passed: applicationFields or applicantId',
        NoEnabledApplicationFieldsError:
          'Configuration error. At least one enabled field in applicationFields is required',
        NoPrimaryApplicationFieldsError:
          'Configuration error. One primary field is required in applicationFields',
        NoPrimaryEnabledApplicationFieldsError:
          'Configuration error. The primary field in applicationFields must be enabled',
        SeveralPrimaryEnabledApplicationFieldsError:
          'Configuration error. Only one primary field is required in applicationFields',
        InvalidMotionControlAttemptCountError:
          'Configuration error. The motionControl.attemptsCount field must be greater than zero',
        EnabledMotionControlWithoutFaceModelError:
          'Configuration error. If faceModelSettings.modelEnabled is false, then motionControl.enabled must also be false',
        TimeToStartRecordLessThenOneSecondError:
          'Configuration error. The timeToStartRecord value must be at least 1000ms',
        NoIceCandidatesError:
          'Configuration error. At least one ICE candidate must be specified in the component settings on the server',
        IceCandidateTimeoutLessThenOneSecondError:
          'Configuration error. The checkIceCandidateTimeout value must be at least 1000ms',
        FailedFetchOfConfigurationError: 'Failed to retrieve component settings',
        NoComponentIdError: 'Integration ID not specified in the component configuration',
        NoBaseUrlError: 'Base URL of server not specified in the component configuration',
        PrepareEnvironmentForBiometricInspectionTimeoutError:
          'The waiting time for the initialization of biometric verification services has been exceeded',
        NoRequiredConfigurationFieldsError:
          'The settings received from the server do not contain the required data',
        InitializationProcessingVideoStremWorkerError: 'Error initializing the video frame processing service',
        InitializationFaceDetectionServiceError: 'Error initializing the face detection service',
        ApplicantNotFoundError: 'Applicant not found',
        ApplicantAlreadyExistError: 'This applicant already exists',
        ApplicantBlockedError: 'The applicant is blocked',
        ApplicantUnconfirmedError: 'The applicant is unconfirmed',
        ApplicantNotRegisterError: 'The application is not registered',
        NotSupportedMediaDevicesError: 'The browser does not support the Media Devices API',
        AbortAccessToCameraError: 'The attempt to access the camera was aborted',
        DocumentIsNotFullyActiveError: 'HTML document is not fully active',
        NotAllowedAccessToCameraError: 'Not allowed access to camera',
        NotFoundCameraError: 'No camera was found',
        NotReadableCameraError: 'The camera is unavailable because it is already in use by another process',
        OverconstrainedCameraError: 'No camera satisfying the constraints of the system is found',
        CameraSecurityError: 'The HTML document does not meet the minimum security requirements for camera use',
        NoVideoTrackError: 'There is no data about the video stream from the camera',
        NoCameraCapabilitiesInfoError:
          'Information about possible camera settings does not contain the required parameters',
        MediaStreamIsUndefinedError: 'There is no video stream data',
        WebComponentError: 'An error occurred, please try again later',
      },
    },

    BiomertricalChecks: {
      IdentifyFacePosition: {
        TextHints: {
          MoveFaceOnCenter:
            'Please position yourself so that your face is in the center of the circle on the screen',
          IDontSeeYou: 'You are not visible',
          FaceOutsideFrame:
            'Please position yourself so that your face is in the center of the on the screen',
          LookAtCamera: 'Please, turn your face to the camera',
          LittleFace: 'Move closer to the camera',
          BigFace: 'Move further away from the camera',
          DontMove: "Please don't move",
          CheckPosition: 'Checking position',
          TimerBeforeRun: 'Before recording starts',
        },
      },

      MotionControl: {
        TextHints: {
          AttemptFailed: 'Motion Control attempt failed',
          SendingDataToServer: 'Sending data to the server',
          Command: {
            TurnLeft: 'Turn left',
            TurnRight: 'Turn right',
            TurnUp: 'Lift your chin up while continuing to look at the screen',
            LookAtCenter: 'Look into the camera',
            Closer: 'Move closer to the camera',
            Farther: 'Move further away from the camera',
            Normal: 'Return to the original position',
          },
        },
      },
      Errors: {
        SlowEnternet: 'Your internet connection is slow. Image quality assessment may take long',
        InCorrectCamera: 'The selected camera is not available or does not meet the minimum requirements',
        NoCamera: 'No cameras available',
        NoPermission:
          'Permission to access the camera is not obtained. For further work, allow access to the camera in your browser settings',
        MoreFaces: 'Many faces in the frame',
        SafariError: 'Unfortunately, your browser is temporarily not supported at the moment',
        ServerError: 'The server is temporarily unavailable',
        ServerConfigError: 'Error on the server',
        NotSupportedApiError: 'Your browser does not support the required function',
        TransportError: 'An unexpected error occurred while running the check',
        NotSupportedVideoFormatError: "This browser haven't supported needed video mime type",
        ExceedMaxMessageSizeError: 'The maximum message size has been exceeded',
        VideoStreamResolutionIsUndefinedError: 'The resolution of the video stream is not defined',
        InvalidVideoStreamResolutionValueError:
          'The resolution of the video stream contains invalid values, the width or height of the video stream cannot be equal to 0',
        InvalidVideoPreviewResolutionValueError:
          'The resolution of the video preview contains invalid values, the width or height of the video preview cannot be equal to 0',
        InvalidFrameDataForDetectionError: 'Incorrect frame data for face detection',
        CaptureFaceBestshotTimeoutError: 'Frame collection waiting time exceeded',
        InvalidMotionControlPatternError: 'Invalid Motion Control pattern',
        NoSupportedVideoCodecError: 'The video codec supported by the system is not detected',
        WaitResponseWorkerTimeoutError: 'The waiting time for a response from the worker has expired',
        BrowserNotSupportedWorkerApi: "This browser doesn't support Worker API",
        DeepfakeValidationError: 'Deepfake check failed',
        CameraFpsNotDefinedError: 'The frame rate of the video stream is not defined',
        TransmissionTimeoutError: 'The waiting time for a response from the server has been exceeded',
        InvalidFacesAmountOnFrameError: 'No face or too many faces found',
        InvalidMessageFormatError: 'Invalid message format',
        UndefinedLocalizedMessagesError: 'There are no localized messages for the selected language',
        InvalidVideoDataError: 'Invalid video data',
        WebComponentError: 'An error occurred, please try again later',
        120004: 'FPS too low. Please try again',
        120005: 'Error on the server. Please try again',
        120006: 'Error on the server. Please try again',
        120007: 'Error on the server. Please try again',
        120008: 'Error on the server. Please try again',
        120009: 'Please check the quality of your internet connection and try again',
        120044: 'No reference frames found',
        120052: 'Error on the server. Please try again',
        120053: 'Error on the server. Please try again',
        120054: 'Error on the server. Please try again',
        120055: 'Error while sending a message via DataChannel',
        120057: 'Error in obtaining Motion Control pattern',
        120060: 'Error on the server. Please try again',
        120061: 'No face or multiple faces detected while taking reference image. Please try again',
        170001: 'No active ICE candidates detected',
        180001: 'Invalid websocket message format',
        180003: 'Requesting a video recording of an unsupported type',
        180004: 'There was an error on the server while recording video',
        180005: 'Video processing time has exceeded the limit',
        180006: 'Error in operation of WebSocket connection',
        190002: 'The connection to the server was closed due to an internal error or exceeding the waiting time.',
        190003: 'The size of data sent to the server exceeds the allowed size',
        190004: 'The check failed due to facial movement',
        1100001: 'The device does not support WebGL technology',
        1100004: 'The device does not meet the requirements necessary to start the detector',
        1100005: 'No face detector in the system',
        1300001:
          'Poor connection quality, the connection to the server does not meet the necessary requirements',
        1300002: 'An error occurred when establishing the connection',
      },
    },

    ValidateFlowResult: {
      SendingDataToServer: 'Sending data to the server',
      Success: {
        Register: 'Registration completed successfully',
        Authorize: 'Authorization completed successfully',
      },
      Errors: {
        AntispoofingValidationError: 'Liveness check failed',
        RegistrationMatchingFailedError: 'An applicant with such biometrics already exists',
        AuthorizationMatchingFailedError: 'No applicant with this biometric was found',
        LowImageQualityError: 'Poor image quality',
        LivenessReflectionError: 'Liveness Reflection check failed',
        NoFacesFound: 'No face found in the image',
        FacesDontBelongApplicant: 'Face matching check failed',
        MoreFaces: 'Many faces in the frame',
        ServerError: 'The server is temporarily unavailable',
        ServerConfigError: 'Error on the server',
        MCCrossMatchError: 'Facial substitution detected',
        LRCrossMatchError: 'Facial substitution detected',
        ApplicantInBlackListError: 'The applicant is on the black list',
        ApplicantRiskError: 'During validation, the risk triggered',
        "Face profile wasn't saved": "Face profile wasn't saved",
        'Face profiles not found': 'Face profiles not found',
        'Face authorization was failed': 'Face authorization was failed',
        'A face was missing or disappeared from the frame when recording liveness reflection video':
          'A face was missing or disappeared from the frame when recording liveness reflection video',
        'Invalid endeavor info': 'Invalid endeavor info',
        'Endeavor liveness reflection info obtain error': 'Error receiving liveness reflection information',
        'Endeavor external link not equal to applicant': 'Applicant ID consistency error',
        'Endeavor is not calculated': 'Error recording video liveness reflection',
        'Endeavor liveness reflection confidence info is null': 'Error calculating liveness reflection',
        'Endeavor liveness reflection confidence value is null': 'Error calculating liveness reflection',
        'Failed to obtain template from liveness reflection reference image':
          'No face in the frame when recording video reflection',
        'Failed to obtain template from motion control reference image':
          'No face in the frame when recording motion control video',
        'Endeavor id is null when required': 'Endeavor id is null when required',
        'No faces found on image': 'No faces found on image',
        'Multiple faces found on image': 'Multiple faces found on image',
        'Multiple faces or strong face movement spotted when recording liveness reflection video':
          'Multiple faces or strong face movement spotted when recording liveness reflection video',
        'Error when perform cross matching': 'Error when perform cross matching',
        'Error on the server': 'Error on the server',
        'Liveness reflection video is not captured': 'Liveness reflection video is not captured',
        'Liveness reflection video reference template is not captured':
          'Liveness reflection video reference template is not captured',
        'Motion control video is not captured': 'Motion control video is not captured',
        'Motion control video reference template is not captured':
          'Motion control video reference template is not captured',
        NotSupportedApiError: 'Your browser does not support the required function',
        TransportError: 'An unexpected error occurred while running the check',
        NotSupportedVideoFormatError: "This browser haven't supported needed video mime type",
        ExceedMaxMessageSizeError: 'The maximum message size has been exceeded',
        VideoStreamResolutionIsUndefinedError: 'The resolution of the video stream is not defined',
        InvalidVideoStreamResolutionValueError:
          'The resolution of the video stream contains invalid values, the width or height of the video stream cannot be equal to 0',
        InvalidVideoPreviewResolutionValueError:
          'The resolution of the video preview contains invalid values, the width or height of the video preview cannot be equal to 0',
        InvalidFrameDataForDetectionError: 'Incorrect frame data for face detection',
        CaptureFaceBestshotTimeoutError: 'Frame collection waiting time exceeded',
        InvalidMotionControlPatternError: 'Invalid Motion Control pattern',
        NoSupportedVideoCodecError: 'The video codec supported by the system is not detected',
        WaitResponseWorkerTimeoutError: 'The waiting time for a response from the worker has expired',
        BrowserNotSupportedWorkerApi: "This browser doesn't support Worker API",
        DeepfakeValidationError: 'Deepfake check failed',
        CameraFpsNotDefinedError: 'The frame rate of the video stream is not defined',
        TransmissionTimeoutError: 'The waiting time for a response from the server has been exceeded',
        InvalidFacesAmountOnFrameError: 'No face or too many faces found',
        InvalidMessageFormatError: 'Invalid message format',
        UndefinedLocalizedMessagesError: 'There are no localized messages for the selected language',
        ValidationTimeHasExpiredError: 'Validation time has expired',
        ApplicantBlockedError: 'The applicant is blocked',
        WebComponentError: 'An error occurred, please try again later',
        InvalidEndeavorInfoError: 'The attempt contains invalid data',
        120004: 'FPS too low. Please try again',
        120005: 'Error on the server. Please try again',
        120006: 'Error on the server. Please try again',
        120007: 'Error on the server. Please try again',
        120008: 'Error on the server. Please try again',
        120009: 'Please check the quality of your internet connection and try again',
        120029: 'Error on the server. Please try again',
        120044: 'No reference frames found',
        120052: 'Error on the server. Please try again',
        120053: 'Error on the server. Please try again',
        120054: 'Error on the server. Please try again',
        120055: 'Error while sending a message via DataChannel',
        120057: 'Error in obtaining Motion Control pattern',
        120060: 'Error on the server. Please try again',
        120061: 'No face or multiple faces detected while taking reference image. Please try again',
        170001: 'No active ICE candidates detected',
        180001: 'Invalid websocket message format',
        180003: 'Requesting a video recording of an unsupported type',
        180004: 'There was an error on the server while recording video',
        180005: 'Video processing time has exceeded the limit',
        180006: 'Error in operation of WebSocket connection',
        190002: 'The connection to the server was closed due to an internal error or exceeding the waiting time.',
        190003: 'The size of data sent to the server exceeds the allowed size',
        1300001:
          'Poor connection quality, the connection to the server does not meet the necessary requirements',
        1300002: 'An error occurred when establishing the connection',
      },
    },
  },
};

An example of changing the English localization in TypeScript:

import tdvc, { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

// Copy the default locales so that when certain locales are changed, the rest remain available in their original form
const locales = structuredClone(tdvc.DefaultLocales);

// Updating the desired locales
locales.en.Mode.Authorization = 'Sign In';
locales.en.Mode.Registration = 'Sign Up';
locales.en.Stages.Initialization.IdentifyApplicantStatus.FormFields.Labels.FirstName = 'Name';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  locales,
};

An example of adding a new localization in TypeScript:

To add a new localization, prepare a JavaScript object that will contain all the same fields as the object with the English localization and define the text in the required language for each message identifier. The localization object that can be used as an example can be found in the README.md file in the delivery archive.

import tdvc, { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

// Copy the default locales so that when certain locales are changed, the rest remain available in their original form
const locales = structuredClone(tdvc.DefaultLocales);

// Defining an object with locales for the desired language, the structure of which must completely match the default locales.
const kkLocales = {
  PreparingEnvironment: 'Ортаны дайындау',
  MessageCode: 'Хабарлама коды: ',
  SomeError: 'Қате орын алды, кейінірек қайта әрекет етіңіз',
  Mode: {
    Authorization: 'Авторизациялау',
    Registration: 'Тіркелу',
  },
  // ...
};

// Add the new object to the general set of locales; its key is the new localization language.
locales['kk'] = kkLocales;

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '225c74bb-4eb1-4c81-9199-832dff3806eb',
  language: 'kk',
  locales,
};

Interaction between the web-component and an external system

Interactions between the web-component and external systems are handled using callback functions (callbacks). The system determines its event-processing logic based on the data provided by these functions or on the fact that the callback was triggered. Several such functions are defined in the component’s configuration:

  1. onMounted: (() => void) — a function called when the web-component is fully initialized.

  2. onError: ((message: string, code: string) => void) — a function called if an error occurred while traversing the user path.

  3. onUpdate: (() => void) — a function called if, after an error occurs, the user wants to try again to traverse the user's path.

  4. onIdentifyApplicantStatus: (applicant: {applicantId: string, status: number}) => void — a function called after receiving the applicant status from the server.

  5. onBack: (() => void) — a function called if the user clicks on the "Back" button while selecting the camera, and thereby stops going through the user's path.

  6. onMotion: ((type: 'left' | 'right' | 'up' | 'closer' | 'farther' | 'return', currentAttemptNumber: number, result: boolean | undefined) => void) — a function called during the "Motion Control" biometric check. If the result value is undefined, the check has just begun; if true, it passed successfully; if false, it failed.

  7. onGetReferenceImages: ((referenceImage: string) => void) — a function called after receiving a reference frame. referenceImage is a base64-encoded image string.

  8. onStartValidation: (() => void) — the function called before the validation of the results begins.

  9. onValidate: ((data: any) => void) — the function called after receiving the result of data validation from the server.

An example of using callbacks in TypeScript:

import { ComponentSettingsFromClient } from '@tdvc/face-onboarding';

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  callbacks: {
    onMounted: () => {
      console.log('Component is successfully initialized');
    },
    onError: (message: string, code: string) => {
      console.log('On error callback', message, code);
    },
    onUpdate: () => {
      console.log('On update callback');
    },
    onIdentifyApplicantStatus: (applicant?: { applicantId: string; status: number }) => {
      console.log('On identify applicant status callback', applicant?.applicantId, applicant?.status);
    },
    onBack: () => {
      console.log('On Back callback');
    },
    onMotion: (
      type: 'left' | 'right' | 'up' | 'closer' | 'farther' | 'return',
      currentAttemptNumber: number,
      result?: boolean
    ) => {
      console.log('On motion callback', type, currentAttemptNumber, result);
    },
    onGetReferenceImages: (referenceImage: string) => {
      console.log('Reference image received');
      console.log(referenceImage);
    },
    onStartValidation: () => {
      console.log('On start validation callback');
    },
    onValidate: async (data) => {
      console.log('On validation callback', data);
    },
  },
};

Interface styling

The main method of styling

The styling of the web-component is done through CSS. Interface elements contain identifiers and CSS classes through which you can customize their appearance. The identifiers and CSS classes used for customization can be found in the @tdvc/face-onboarding/dist/css/style.css file.
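As an illustration, a hedged sketch that overrides one of these classes at runtime (tdvc-canvas is the class added by the base Canvas class shown below; take any other selectors from the style.css file mentioned above). Adding the same rule to your own stylesheet works equally well:

// Inject an override stylesheet for the component's canvas elements.
const override = document.createElement('style');
override.textContent = `
  .tdvc-canvas {
    border-radius: 8px;
  }
`;
document.head.appendChild(override);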

Styling through JavaScript/TypeScript

CSS alone is not sufficient for customizing complex UI elements such as the key-point face mask, Motion Control action hints, and face frames, which can have different shapes, styles, and display logic. Therefore, for more flexible styling, we provide the ability to override the implementation of these elements.

All the elements below use a base class for further implementation. It consists of a canvas element, methods for configuring styles and canvas resolution, and lifecycle methods such as rendering, removal from the DOM, and complete clearing via destroy.

Base canvas class

export type Options = Partial<{
  strokeStyle: string;
  fillStyle: string;
  lineWidth: number;
}>;

export const DEFAULT_CANVAS_SETTINGS = {
  strokeStyle: '#000000',
  fillStyle: '#000000',
  lineWidth: 1,
};

export default abstract class Canvas {
  protected _root: HTMLCanvasElement;
  protected _context: CanvasRenderingContext2D;
  protected _options: Options;
  protected _initialOptions: Options;

  constructor(id: string, options?: Options) {
    this._root = document.createElement('canvas');
    this._root.classList.add('tdvc-canvas');
    this._root.classList.add(id);

    const context = this.root.getContext('2d');
    if (!context) {
      // Join all class names of the element for the error message.
      const elementClasses = Array.from(this._root.classList).join(' ');
      throw new WebComponentError({ message: `2D context for ${elementClasses} is null` });
    }

    this._context = context;
    this._initialOptions = options ?? DEFAULT_CANVAS_SETTINGS;
    this.setContextOption({ ...this._initialOptions });
    this.applyContextOptions();
  }

  get root() {
    return this._root as Readonly<HTMLCanvasElement>;
  }

  get options() {
    return this._options;
  }

  get initialOptions() {
    return this._initialOptions;
  }

  setContextOption(options: Options) {
    this._options = options;
  }

  applyContextOptions() {
    if (this._options.strokeStyle) this._context.strokeStyle = this._options.strokeStyle;
    if (this._options.lineWidth) this._context.lineWidth = this._options.lineWidth;
    if (this._options.fillStyle) this._context.fillStyle = this._options.fillStyle;
  }

  setResolution(width: number, height: number) {
    this._root.width = width;
    this._root.height = height;
    this.applyContextOptions();
  }

  clear() {
    const { width, height } = this._context.canvas;
    this._context.clearRect(0, 0, width, height);
  }

  removeFromDom() {
    this._root.remove();
  }

  destroy() {
    if (this._root && this._root.parentNode) this._root.remove();
    this._context = null!;
    this._root = null!;
  }
}

In your implementations you can use both the base class and its derivatives.

Face mask by keypoints

Base implementation

export type FaceKeypointsMaskOptions = Options;

export const DEFAULT_FACE_KEYPOINTS_MASK_OPTIONS: FaceKeypointsMaskOptions = {
  strokeStyle: '#32EEDB',
  fillStyle: '#32EEDB',
  lineWidth: 0.2,
};

export default class FaceKeypointsMask extends Canvas {
  protected _isRendering = false;

  constructor(options?: FaceKeypointsMaskOptions) {
    super('tdvc-face-keypoints-mask', options ?? DEFAULT_FACE_KEYPOINTS_MASK_OPTIONS);
  }

  get isRendering() {
    return this._isRendering;
  }

  draw(points: Point[]) {
    if (this._isRendering || points.length !== 478) return;

    this._isRendering = true;

    this._context.beginPath();

    for (let i = 0; i < TRIANGULATION.length; i += 3) {
      const a = points[TRIANGULATION[i]];
      const b = points[TRIANGULATION[i + 1]];
      const c = points[TRIANGULATION[i + 2]];

      this._context.moveTo(a.x, a.y);
      this._context.lineTo(b.x, b.y);
      this._context.lineTo(c.x, c.y);
    }

    this._context.stroke();
    this._isRendering = false;
  }
}

Example of custom implementation

import tdvc, { ComponentSettingsFromClient, Point } from '@tdvc/face-onboarding';

// Hiding the mask so as not to waste resources on display
class NoMask extends tdvc.UiKit.FaceKeypointsMask {
  draw(points: Point[]): void {}
}

// Displaying only points instead of triangles
class OnlyPointsFaceKeypointMask extends tdvc.UiKit.FaceKeypointsMask {
  constructor() {
    super({
      ...tdvc.UiKit.DEFAULT_FACE_KEYPOINTS_MASK_OPTIONS,
      // Color change
      fillStyle: 'red',
    });
  }

  draw(points: Point[]): void {
    if (this._isRendering || points.length !== 478) return;

    this._isRendering = true;

    for (const point of points) {
      this._context.beginPath();
      this._context.arc(point.x, point.y, 1, 0, 2 * Math.PI);
      this._context.fill();
    }

    this._isRendering = false;
  }
}

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  uiKit: {
    FaceKeypointsMask: OnlyPointsFaceKeypointMask,
  },
};

Face border

Base implementations

export type FaceBorderOptions = Options;

export const DEFAULT_FACE_BORDER_OPTIONS: FaceBorderOptions = {
  strokeStyle: '#ffffff',
  fillStyle: 'rgba(255, 255, 255, 0.5)',
  lineWidth: 4,
};

export default abstract class FaceBorder extends Canvas {
  constructor(options?: FaceBorderOptions) {
    super('tdvc-face-position-circle', options ?? DEFAULT_FACE_BORDER_OPTIONS);
  }

  public draw(point: Point, resolution: Resolution) {
    this._drawOverlay();
    this._clearFaceArea(point, resolution);
    this._drawBorder(point, resolution);
  }

  protected _drawOverlay() {
    this._context.globalCompositeOperation = 'overlay';
    this._context.fillRect(0, 0, this._context.canvas.width, this._context.canvas.height);
  }

  protected _clearFaceArea(point: Point, resolution: Resolution) {
    this._context.globalCompositeOperation = 'destination-out';
    this._context.fillStyle = 'rgba(0,0,0,1.0)';
    this._baseFigure(point, resolution, (this._options?.lineWidth ?? 4) / 2);
    this._context.fill();
  }

  protected _drawBorder(point: Point, resolution: Resolution) {
    this._context.beginPath();
    this._context.globalCompositeOperation = 'overlay';
    this._baseFigure(point, resolution, 0);
    this._context.stroke();
  }

  protected abstract _baseFigure(point: Point, resolution: Resolution, borderWidth: number): void;
}

export class EllipseFaceBorder extends FaceBorder {
  constructor(options?: FaceBorderOptions) {
    super(options ?? DEFAULT_FACE_BORDER_OPTIONS);
  }

  protected _baseFigure(point: Point, resolution: Resolution, borderWidth: number): void {
    const { rx, ry } = this._calculateEllipseRadiuses(resolution);
    this._context.ellipse(point.x, point.y, rx + borderWidth, ry + borderWidth, 0, 0, 2 * Math.PI);
  }

  protected _calculateEllipseRadiuses(resolution: Resolution) {
    return {
      rx: resolution.width / 2,
      ry: resolution.height / 2,
    };
  }
}

Example of custom implementation

import tdvc, {
  ComponentSettingsFromClient,
  Point,
  Resolution,
} from '@tdvc/face-onboarding';

// Changing styles for an elliptical face border
class CustomEllipseFaceBorder extends tdvc.UiKit.EllipseFaceBorder {
  constructor() {
    super({
      ...tdvc.UiKit.DEFAULT_FACE_BORDER_OPTIONS,
      fillStyle: 'rgba(0,0,0,1.0)',
      lineWidth: 1,
      strokeStyle: 'red',
    });
  }
}

// Implementation of a face border in the form of a rectangle with rounded corners
class RoundedSquareFaceBorder extends tdvc.UiKit.FaceBorder {
  constructor() {
    super({
      ...tdvc.UiKit.DEFAULT_FACE_BORDER_OPTIONS,
      fillStyle: 'rgba(0,0,0,1.0)',
    });
  }

  protected _baseFigure(point: Point, resolution: Resolution, borderWidth = 0) {
    const offset = Math.floor((resolution.width / 100) * 16);

    const { topLeftCorner, bottomLeftCorner, bottomRightCorner, topRightCorner } =
      this._calculateRectCornerCoordinates(point, resolution, borderWidth);

    this._context.moveTo(topLeftCorner.x, topLeftCorner.y + offset);

    this._context.quadraticCurveTo(topLeftCorner.x, topLeftCorner.y, topLeftCorner.x + offset, topLeftCorner.y);
    this._context.lineTo(topRightCorner.x - offset, topRightCorner.y);

    this._context.quadraticCurveTo(topRightCorner.x, topRightCorner.y, topRightCorner.x, topRightCorner.y + offset);
    this._context.lineTo(bottomRightCorner.x, bottomRightCorner.y - offset);

    this._context.quadraticCurveTo(
      bottomRightCorner.x,
      bottomRightCorner.y,
      bottomRightCorner.x - offset,
      bottomRightCorner.y
    );
    this._context.lineTo(bottomLeftCorner.x + offset, bottomRightCorner.y);

    this._context.quadraticCurveTo(
      bottomLeftCorner.x,
      bottomLeftCorner.y,
      bottomLeftCorner.x,
      bottomLeftCorner.y - offset
    );
    this._context.closePath();
  }

  protected _calculateRectCornerCoordinates(point: Point, resolution: Resolution, borderWidth = 0) {
    const topLeftCorner: Point = {
      x: point.x - resolution.width / 2 - borderWidth,
      y: point.y - resolution.height / 2 - borderWidth,
    };

    const topRightCorner: Point = {
      x: point.x + resolution.width / 2 + borderWidth,
      y: point.y - resolution.height / 2 - borderWidth,
    };

    const bottomRightCorner: Point = {
      x: point.x + resolution.width / 2 + borderWidth,
      y: point.y + resolution.height / 2 + borderWidth,
    };

    const bottomLeftCorner: Point = {
      x: point.x - resolution.width / 2 - borderWidth,
      y: point.y + resolution.height / 2 + borderWidth,
    };

    return {
      topLeftCorner,
      topRightCorner,
      bottomRightCorner,
      bottomLeftCorner,
    };
  }
}

const config: ComponentSettingsFromClient = {
  mountElement: 'app',
  baseUrl: '/',
  integrationId: '78fc6242-a812-4583-b508-078939cc747a',
  uiKit: {
    FaceBorder: RoundedSquareFaceBorder,
  },
};

Motion control direction hints

Base implementations

export const DEFAULT_MOTION_CONTROL_DIRECTION_HINTS_OPTIONS = {
  lineWidth: 2,
  fillStyle: 'rgba(0, 0, 0, 0.5)',
  strokeStyle: 'rgba(169, 169, 169, 1)',
};

export default abstract class MotionControlDirectionHints extends Canvas {
  constructor(options?: Options) {
    super('tdv-motion-control-direction-hints', options ?? DEFAULT_MOTION_CONTROL_DIRECTION_HINTS_OPTIONS);
  }

  abstract draw(bbox: TBoundingBox, command: MotionControlPattern | 'return', progress: number): void;
}

export class ArrowsMotionControlDirectionHints extends MotionControlDirectionHints {
  protected _leftArrow: Path2D = new Path2D();
  protected _rightArrow: Path2D = new Path2D();
  protected _upArrow: Path2D = new Path2D();
  protected _downArrow: Path2D = new Path2D();

  protected _baseMargin = 8;
  protected _gap = -4;
  protected _arrowResolution: Resolution = {
    width: 22,
    height: 32,
  };
  protected _halfArrowResolution: Resolution = {
    width: this._arrowResolution.width / 2,
    height: this._arrowResolution.height / 2,
  };

  protected _successArrowFillColor = '#17ea4c';
  protected _disabledArrowStrokeColor = 'rgba(255,255,255, 1)';
  protected _disabledArrowFillColor = `rgba(255, 255, 255, 0.5)`;

  constructor(options?: Options) {
    super(options);
    this._initLeftArrowPath();
    this._initRightArrowPath();
    this._initUpArrowPath();
    this._initDownArrowPath();
  }

  draw(bbox: TBoundingBox, command: MotionControlPattern | 'return', progress = 0) {
    switch (command) {
      case 'left':
        this._drawHintForLeftAction(bbox, progress);
        break;
      case 'right':
        this._drawHintForRightAction(bbox, progress);
        break;
      case 'up':
        this._drawHintForUpAction(bbox, progress);
        break;
      case 'closer':
        this._drawHintForCloserAction(bbox, progress);
        break;
      case 'farther':
        this._drawHintForFartherAction(bbox, progress);
        break;
      default:
        break;
    }
  }

  protected _drawHintForLeftAction(bbox: TBoundingBox, progress = 0) {
    let basePoint;

    for (let i = 0; i < 4; i++) {
      basePoint = this._getPointForRightPosition(bbox, i);
      this._renderArrow(this._rightArrow, basePoint, progress, i);

      basePoint = this._getPointForLeftPosition(bbox, i);
      this._renderArrow(this._leftArrow, basePoint, 0, i, true);

      basePoint = this._getPointForUpPosition(bbox, i);
      this._renderArrow(this._upArrow, basePoint, 0, i, true);

      basePoint = this._getPointForDownPosition(bbox, i);
      this._renderArrow(this._downArrow, basePoint, 0, i, true);
    }
  }

  protected _drawHintForRightAction(bbox: TBoundingBox, progress = 0) {
    let basePoint;

    for (let i = 0; i < 4; i++) {
      basePoint = this._getPointForRightPosition(bbox, i);
      this._renderArrow(this._rightArrow, basePoint, 0, i, true);

      basePoint = this._getPointForLeftPosition(bbox, i);
      this._renderArrow(this._leftArrow, basePoint, progress, i);

      basePoint = this._getPointForUpPosition(bbox, i);
      this._renderArrow(this._upArrow, basePoint, 0, i, true);

      basePoint = this._getPointForDownPosition(bbox, i);
      this._renderArrow(this._downArrow, basePoint, 0, i, true);
    }
  }

  protected _drawHintForUpAction(bbox: TBoundingBox, progress = 0) {
    let basePoint;
    for (let i = 0; i < 4; i++) {
      basePoint = this._getPointForRightPosition(bbox, i);
      this._renderArrow(this._rightArrow, basePoint, 0, i, true);

      basePoint = this._getPointForLeftPosition(bbox, i);
      this._renderArrow(this._leftArrow, basePoint, 0, i, true);

      basePoint = this._getPointForUpPosition(bbox, i);
      this._renderArrow(this._upArrow, basePoint, progress, i);

      basePoint = this._getPointForDownPosition(bbox, i);
      this._renderArrow(this._downArrow, basePoint, 0, i, true);
    }
  }

  protected _drawHintForCloserAction(bbox: TBoundingBox, progress = 0) {
    let basePoint;
    for (let i = 0; i < 4; i++) {
      basePoint = this._getPointForRightPosition(bbox, i);
      this._renderArrow(this._leftArrow, basePoint, progress, i);

      basePoint = this._getPointForLeftPosition(bbox, i);
      this._renderArrow(this._rightArrow, basePoint, progress, i);

      basePoint = this._getPointForUpPosition(bbox, i);
      this._renderArrow(this._downArrow, basePoint, progress, i);

      basePoint = this._getPointForDownPosition(bbox, i);
      this._renderArrow(this._upArrow, basePoint, progress, i);
    }
  }

  protected _drawHintForFartherAction(bbox: TBoundingBox, progress = 0) {
    let basePoint;
    for (let i = 0; i < 4; i++) {
      basePoint = this._getPointForRightPosition(bbox, i);
      this._renderArrow(this._rightArrow, basePoint, progress, i);

      basePoint = this._getPointForLeftPosition(bbox, i);
      this._renderArrow(this._leftArrow, basePoint, progress, i);

      basePoint = this._getPointForUpPosition(bbox, i);
      this._renderArrow(this._upArrow, basePoint, progress, i);

      basePoint = this._getPointForDownPosition(bbox, i);
      this._renderArrow(this._downArrow, basePoint, progress, i);
    }
  }

  protected _getPointForRightPosition(bbox: TBoundingBox, index: number) {
    const offset = this._baseMargin + index * (this._gap + this._arrowResolution.width);
    return {
      x: bbox.xMax + offset,
      y: bbox.yMin + bbox.height / 2 - this._halfArrowResolution.height,
    };
  }

  protected _getPointForLeftPosition(bbox: TBoundingBox, index: number) {
return {
x: bbox.xMin - offset - this._arrowResolution.width,
y: bbox.yMin + bbox.height / 2 - this._halfArrowResolution.height,
};
}

protected _getPointForUpPosition(bbox: TBoundingBox, index: number) {
// The up/down arrow paths are built with the width and height axes swapped,
// so the arrow's height is used for horizontal centering and its width for
// vertical stacking.
const offset = this._baseMargin + index * (this._gap + this._arrowResolution.width);
return {
x: bbox.xMin + bbox.width / 2 - this._halfArrowResolution.height,
y: bbox.yMin - offset - this._arrowResolution.width,
};
}

protected _getPointForDownPosition(bbox: TBoundingBox, index: number) {
const offset = this._baseMargin + index * (this._gap + this._arrowResolution.width);
return {
x: bbox.xMin + bbox.width / 2 - this._halfArrowResolution.height,
y: bbox.yMax + offset,
};
}

protected _renderArrow(arrow: Path2D, basePoint: Point, progress: number, arrowIndex: number, isDisabled = false) {
this._context.save();
this._setFillColor(progress, arrowIndex, isDisabled);
this._context.translate(basePoint.x, basePoint.y);
this._context.fill(arrow);
this._context.stroke(arrow);
this._context.restore();
}

protected _initLeftArrowPath() {
this._leftArrow.moveTo(this._arrowResolution.width, 0);
this._leftArrow.lineTo(this._halfArrowResolution.width, 0);
this._leftArrow.lineTo(0, this._halfArrowResolution.height);
this._leftArrow.lineTo(this._halfArrowResolution.width, this._arrowResolution.height);
this._leftArrow.lineTo(this._arrowResolution.width, this._arrowResolution.height);
this._leftArrow.lineTo(this._halfArrowResolution.width, this._halfArrowResolution.height);
this._leftArrow.lineTo(this._arrowResolution.width, 0);
}

protected _initRightArrowPath() {
this._rightArrow.moveTo(0, 0);
this._rightArrow.lineTo(this._halfArrowResolution.width, 0);
this._rightArrow.lineTo(this._arrowResolution.width, this._halfArrowResolution.height);
this._rightArrow.lineTo(this._halfArrowResolution.width, this._arrowResolution.height);
this._rightArrow.lineTo(0, this._arrowResolution.height);
this._rightArrow.lineTo(this._halfArrowResolution.width, this._halfArrowResolution.height);
this._rightArrow.lineTo(0, 0);
}

protected _initUpArrowPath() {
this._upArrow.moveTo(0, this._arrowResolution.width);
this._upArrow.lineTo(0, this._halfArrowResolution.width);
this._upArrow.lineTo(this._halfArrowResolution.height, 0);
this._upArrow.lineTo(this._arrowResolution.height, this._halfArrowResolution.width);
this._upArrow.lineTo(this._arrowResolution.height, this._arrowResolution.width);
this._upArrow.lineTo(this._halfArrowResolution.height, this._halfArrowResolution.width);
this._upArrow.lineTo(0, this._arrowResolution.width);
}

protected _initDownArrowPath() {
this._downArrow.moveTo(0, 0);
this._downArrow.lineTo(0, this._halfArrowResolution.width);
this._downArrow.lineTo(this._halfArrowResolution.height, this._arrowResolution.width);
this._downArrow.lineTo(this._arrowResolution.height, this._halfArrowResolution.width);
this._downArrow.lineTo(this._arrowResolution.height, 0);
this._downArrow.lineTo(this._halfArrowResolution.height, this._halfArrowResolution.width);
this._downArrow.lineTo(0, 0);
}

protected _setFillColor(progress: number, currentIndex: number, isDisabled: boolean) {
const options = { ...this._initialOptions };

// Each of the four stacked arrows switches to the success color per 25% of progress.
if (!isDisabled && Math.floor(progress / 25) >= currentIndex + 1) {
options.fillStyle = this._successArrowFillColor;
}

if (isDisabled) {
options.fillStyle = this._disabledArrowFillColor;
options.strokeStyle = this._disabledArrowStrokeColor;
}

this.setContextOption(options);
this.applyContextOptions();
}

destroy(): void {
this._leftArrow = undefined!;
this._rightArrow = undefined!;
this._upArrow = undefined!;
this._downArrow = undefined!;
super.destroy();
}
}
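
In the default implementation, four arrows are stacked on each side of the face's bounding box. The arrows guiding the requested movement fill with the success color one by one as progress grows (one arrow per 25%), while the remaining arrows are rendered in the dimmed disabled style.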

Example of custom implementation

import tdvc, {
BoundingBox,
ComponentSettingsFromClient,
MotionControlPattern,
TDVAthorizationOnboarding,
TDVRegistrationOnboarding,
} from '@tdvc/face-onboarding';

let lib: TDVRegistrationOnboarding | TDVAthorizationOnboarding;

class MotionControlDirectionHintsViaFilledFrame extends tdvc.UiKit.MotionControlDirectionHints {
draw(bbox: BoundingBox, command: MotionControlPattern | 'return', progress = 0) {
this._context.save();
this._setStyleByProgress(progress);
this._draw(bbox);
this._context.restore();
}

private _draw(bbox: BoundingBox) {
const resolution = {
width: bbox.xMax - bbox.xMin + this._context.lineWidth / 2,
height: bbox.yMax - bbox.yMin + this._context.lineWidth / 2,
};

const offset = Math.floor((resolution.width / 100) * 16);

this._context.beginPath();
this._context.moveTo(bbox.xMin, bbox.yMin + offset);

this._context.quadraticCurveTo(bbox.xMin, bbox.yMin, bbox.xMin + offset, bbox.yMin);
this._context.lineTo(bbox.xMax - offset, bbox.yMin);

this._context.quadraticCurveTo(bbox.xMax, bbox.yMin, bbox.xMax, bbox.yMin + offset);
this._context.lineTo(bbox.xMax, bbox.yMax - offset);

this._context.quadraticCurveTo(bbox.xMax, bbox.yMax, bbox.xMax - offset, bbox.yMax);
this._context.lineTo(bbox.xMin + offset, bbox.yMax);

this._context.quadraticCurveTo(bbox.xMin, bbox.yMax, bbox.xMin, bbox.yMax - offset);
this._context.closePath();
this._context.stroke();
}

private _setStyleByProgress(progress = 0) {
this._context.lineWidth = 4;
this._context.strokeStyle = this._getStrokeColor(progress);
}

private _getStrokeColor(progress: number) {
// Map the progress value onto the HSL hue axis: red at the start,
// shifting toward green as the check nears completion.
const hue = progress;
const saturation = 83;
const lightness = 50;

return `hsl(${hue}, ${saturation}%, ${lightness}%)`;
}

destroy(): void {
super.destroy();
}
}

const config: ComponentSettingsFromClient = {
mountElement: 'app',
baseUrl: '/',
integrationId: '78fc6242-a812-4583-b508-078939cc747a',
uiKit: {
MotionControlDirectionHints: MotionControlDirectionHintsViaFilledFrame,
},
};
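
Here the arrow hints are replaced with a rounded frame around the face; the frame's stroke hue follows the progress value, shifting from red at the start toward green as the check nears completion.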

Device recommendations

The web-component performs many resource-intensive operations: processing the video stream from the camera, detecting faces and analyzing the position of the face in the frame, rendering the mask, and more. Together, these operations impose minimum requirements on the device's hardware.

For correct operation, the device must meet the following requirements (a camera capability check is sketched after the list):

  • At least one working and accessible webcam with a minimum resolution of 1280x720 and a minimum frame rate of 25 FPS
  • A processor at the level of the MediaTek Dimensity 700 or higher
  • For smartphones, iOS 16 / Android 10 or higher
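
A minimal pre-flight check is sketched below, assuming only standard browser APIs (navigator.mediaDevices.getUserMedia and MediaStreamTrack.getSettings); the helper name checkCameraRequirements is hypothetical and not part of the library:

// Hypothetical helper: verifies that the active camera can deliver
// at least 1280x720 at 25 FPS before the web-component is initialized.
async function checkCameraRequirements(): Promise<boolean> {
const stream = await navigator.mediaDevices.getUserMedia({
video: { width: { ideal: 1280 }, height: { ideal: 720 }, frameRate: { ideal: 25 } },
});

const [track] = stream.getVideoTracks();
const { width = 0, height = 0, frameRate = 0 } = track.getSettings();

// Release the camera once the actual settings have been read.
stream.getTracks().forEach((t) => t.stop());

return width >= 1280 && height >= 720 && frameRate >= 25;
}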

Devices that we use for testing:

  • POCO M4 5G
  • Samsung Galaxy S9
  • Samsung Galaxy A55
  • Samsung Galaxy Tab S9
  • iPhone 11 Pro
  • iPhone 15 Plus
  • Lenovo LOQ 15IRH8
  • MacBook Air 13 (M3, 16 GB)
  • MacBook Pro 14" (M4, 16 GB)

Browser support

  • Google Chrome
  • Mozilla Firefox
  • Yandex Browser
  • Safari
  • Mi Browser
  • Samsung Browser

Version compatibility table

Server BAF       @tdvc/face-onboarding
1.15.* - 1.16.*  1.16.*
1.15.*           1.15.*
1.14.*           1.14.*
1.13.*           1.13.*
1.12.*           1.12.*
1.10.*           1.10.* - 1.11.*
1.9.*            1.9.*
1.8.*            1.8.*
1.7.0            1.7.0
1.5.0 - 1.7.0    1.6.0
1.5.0            1.5.0
1.3.0 - 1.4.0    1.4.0 - 1.4.1
1.3.0 - 1.3.1    1.3.1
1.2.0            1.2.0 - 1.3.0
1.1.0            1.0.0 - 1.1.2
1.0.0            1.0.0