Version: 2.4.0 (latest)

API requests

info

All attributes specified in requests below are described in the Attributes section.

Face detection

The following Image API services perform face detection:

  • face-detector-face-fitter
  • face-detector-template-extractor
  • face-detector-liveness-estimator

Body detection

note

The input image size must not exceed 4.7 MB.

The request is sent to the body-detector service.

v1

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • confidence
  • bbox
Request example:
{
"$image": "image in base64"
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "body",
"confidence": 0.8266383409500122,
"bbox": [
0.648772656917572,
0.13773296773433685,
0.9848934412002563,
0.8240703344345093
]
},
{
"id": 1,
"class": "body",
"confidence": 0.7087612748146057,
"bbox": [
0.35164034366607666,
0.15803256630897522,
0.6833359003067017,
0.8854727745056152
]
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • confidence
  • bbox
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
}
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "body",
"confidence": 0.8266383409500122,
"bbox": [
0.648772656917572,
0.13773296773433685,
0.9848934412002563,
0.8240703344345093
]
},
{
"id": 1,
"class": "body",
"confidence": 0.7087612748146057,
"bbox": [
0.35164034366607666,
0.15803256630897522,
0.6833359003067017,
0.8854727745056152
]
}
]
}
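A client can build the v2 request body shown above and map the returned coordinates back onto the image. This is a minimal sketch: the endpoint URL is a placeholder, and the interpretation of bbox values as coordinates normalized to the image size is an assumption based on the [0, 1] range in the examples.

```python
import base64

# Assumption: placeholder URL; substitute your deployment's body-detector address.
BODY_DETECTOR_URL = "http://localhost:8080/body-detector/v2"

def build_v2_request(image_bytes: bytes) -> dict:
    """Build a v2 body-detection request body from raw image bytes."""
    return {
        "_image": {
            "blob": base64.b64encode(image_bytes).decode("ascii"),
            "format": "IMAGE",
        }
    }

def bbox_to_pixels(bbox, width, height):
    """Convert an [x1, y1, x2, y2] bbox to pixel coordinates.

    The bbox values in the example responses lie in [0, 1], which
    suggests they are normalized to the image size (assumption).
    """
    x1, y1, x2, y2 = bbox
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

payload = build_v2_request(b"raw image bytes here")
# The payload can then be sent with any HTTP client, for example:
# requests.post(BODY_DETECTOR_URL, json=payload)
```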

Gender estimation

The request is sent to the gender-estimator service.

v1

The API returns the following attributes with calculated values:

objects:

  • gender
Request example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"gender": "MALE",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • gender
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"gender": "male",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Errors

This service returns the following set of errors:

  1. The transmitted image is not decoded.
{
"detail": "Failed to decode base64 string"
}
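Note that the v1 example returns "MALE" in upper case while the v2 example returns "male" in lower case. A small normalization helper, sketched below, keeps client code version-agnostic:

```python
def parse_gender(obj: dict) -> str:
    """Return the gender label in lower case.

    v1 responses use upper case ("MALE") and v2 lower case ("male"),
    so normalizing makes client code work against either version.
    """
    return obj["gender"].lower()
```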

Age estimation

The request is sent to the age-estimator service.

v1

The API returns the following attributes with calculated values:

objects:

  • age
Request example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"age": "25",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • age
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"age": "25",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
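In both example responses the age is returned as a string ("25"). A defensive parser, sketched below, coerces it to an integer and also tolerates a deployment that returns a plain number:

```python
def parse_age(obj: dict) -> int:
    # The examples show "age" as a string ("25"); int() also accepts a
    # plain number, so this stays safe if a deployment returns an integer.
    return int(obj["age"])
```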

Emotion estimation

The request is sent to the emotion-estimator service.

v1

The API returns the following attributes with calculated values:

objects:

  • emotions
    • emotion
    • confidence
Request example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"emotions": [
{
"confidence": 0.10151619818478974,
"emotion": "ANGRY"
},
{
"confidence": 0.07763473911731263,
"emotion": "DISGUSTED"
},
{
"confidence": 0.20321173801223097,
"emotion": "SCARED"
},
{
"confidence": 0.08768639197580883,
"emotion": "HAPPY"
},
{
"confidence": 0.19000983487515088,
"emotion": "NEUTRAL"
},
{
"confidence": 0.08262699313446588,
"emotion": "SAD"
},
{
"confidence": 0.257314104700241,
"emotion": "SURPRISED"
}
],
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • emotions
    • emotion
    • confidence
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"emotions": [
{
"confidence": 0.10151619818478974,
"emotion": "angry"
},
{
"confidence": 0.07763473911731263,
"emotion": "disgusted"
},
{
"confidence": 0.20321173801223097,
"emotion": "scared"
},
{
"confidence": 0.08768639197580883,
"emotion": "happy"
},
{
"confidence": 0.19000983487515088,
"emotion": "neutral"
},
{
"confidence": 0.08262699313446588,
"emotion": "sad"
},
{
"confidence": 0.257314104700241,
"emotion": "surprised"
}
],
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
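The response lists a confidence for every emotion class. To pick the dominant emotion, take the entry with the highest confidence, as in this sketch (the abbreviated sample reuses values from the v1 response above):

```python
def dominant_emotion(obj: dict):
    """Return the (emotion, confidence) pair with the highest confidence."""
    best = max(obj["emotions"], key=lambda e: e["confidence"])
    return best["emotion"], best["confidence"]

# Abbreviated sample from the v1 response above
sample = {"emotions": [
    {"confidence": 0.10151619818478974, "emotion": "ANGRY"},
    {"confidence": 0.19000983487515088, "emotion": "NEUTRAL"},
    {"confidence": 0.257314104700241, "emotion": "SURPRISED"},
]}
```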

Liveness estimation

The request is sent to the face-detector-liveness-estimator or liveness-estimator service, which detects whether a face image belongs to a real person.

face-detector-liveness-estimator

note

The input image size must not exceed 4.7 MB.

v1

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • bbox
  • confidence
  • liveness:
    • confidence
    • value
Request example:
{
"$image": "image in base64"
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.8233476281166077,
"bbox": [
0.375,
0.12333333333333334,
0.7645833333333333,
0.42
],
"liveness": {
"confidence": 0.9989556074142456,
"value": "REAL"
}
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • bbox
  • confidence
  • liveness:
    • confidence
    • value
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
}
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.8233476281166077,
"bbox": [
0.375,
0.12333333333333334,
0.7645833333333333,
0.42
],
"liveness": {
"confidence": 0.9989556074142456,
"value": "real"
}
}
]
}
Errors

This service returns the following set of errors:

  1. The transmitted image is not decoded.
{
"detail": "Failed to decode base64 string"
}
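A liveness decision should combine the label and its confidence. The sketch below does so case-insensitively, since v1 returns "REAL" and v2 returns "real"; the 0.9 threshold is an illustrative choice, not an official recommendation:

```python
def is_live(obj: dict, threshold: float = 0.9) -> bool:
    """True if the face is judged real with sufficient confidence.

    The comparison is case-insensitive because v1 returns "REAL" while
    v2 returns "real"; the 0.9 threshold is illustrative, not official.
    """
    lv = obj["liveness"]
    return lv["value"].lower() == "real" and lv["confidence"] >= threshold
```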

liveness-estimator

The request is sent to the liveness-estimator service, which calculates liveness for all detected persons. In the request body, pass the values of the face attributes obtained after processing the image with face-detector-face-fitter.

v1

The API returns the following attributes with calculated values:

objects:

  • liveness:
    • confidence
    • value
Request example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.970888078212738,
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
],
"fitter": {
"fitter_type": "fda",
"keypoints": [
174.70785522460938,
190.4671630859375,
0,
199.32272338867188,
182.19366455078125,
0,
227.97854614257812,
185.34304809570312,
0,
282.116455078125,
187.2840576171875,
0,
310.2395324707031,
185.80023193359375,
0,
334.5393371582031,
195.36509704589844,
0,
186.283935546875,
215.17318725585938,
0,
204.9008331298828,
213.86056518554688,
0,
223.6915283203125,
215.15101623535156,
0,
284.66351318359375,
216.84031677246094,
0,
303.9244689941406,
216.341552734375,
0,
322.2794189453125,
218.45713806152344,
0,
156.81298828125,
283.317626953125,
0,
229.1468963623047,
274.738525390625,
0,
253.1155242919922,
278.9350891113281,
0,
276.41357421875,
275.93316650390625,
0,
352.378662109375,
289.4745788574219,
0,
217.95767211914062,
318.26080322265625,
0,
252.4383087158203,
320.73089599609375,
0,
287.2714538574219,
319.39764404296875,
0,
252.5489501953125,
382.86297607421875,
0
],
"left_eye": [
204.9008331298828,
213.86056518554688
],
"right_eye": [
303.9244689941406,
216.341552734375
]
},
"angles": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.895225465297699,
"bbox": [
0.10445103857566766,
0.05966162065894924,
0.7008902077151336,
0.9243098842386465
],
"liveness": {
"confidence": 0.999340832233429,
"value": "REAL"
},
"fitter": {
"fitter_type": "fda",
"keypoints": [
174.70785522460938,
190.4671630859375,
0,
199.32272338867188,
182.19366455078125,
0,
227.97854614257812,
185.34304809570312,
0,
282.116455078125,
187.2840576171875,
0,
310.2395324707031,
185.80023193359375,
0,
334.5393371582031,
195.36509704589844,
0,
186.283935546875,
215.17318725585938,
0,
204.9008331298828,
213.86056518554688,
0,
223.6915283203125,
215.15101623535156,
0,
284.66351318359375,
216.84031677246094,
0,
303.9244689941406,
216.341552734375,
0,
322.2794189453125,
218.45713806152344,
0,
156.81298828125,
283.317626953125,
0,
229.1468963623047,
274.738525390625,
0,
253.1155242919922,
278.9350891113281,
0,
276.41357421875,
275.93316650390625,
0,
352.378662109375,
289.4745788574219,
0,
217.95767211914062,
318.26080322265625,
0,
252.4383087158203,
320.73089599609375,
0,
287.2714538574219,
319.39764404296875,
0,
252.5489501953125,
382.86297607421875,
0
],
"left_eye": [
204.9008331298828,
213.86056518554688
],
"right_eye": [
303.9244689941406,
216.341552734375
]
},
"angles": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • liveness:
    • confidence
    • value
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.970888078212738,
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
],
"keypoints": {
"left_eye_brow_left": {
"proj": [
0.3412262797355652,
0.3720061779022217
]
},
"left_eye_brow_up": {
"proj": [
0.38930219411849976,
0.35584700107574463
]
},
"left_eye_brow_right": {
"proj": [
0.4452705979347229,
0.36199814081192017
]
},
"right_eye_brow_left": {
"proj": [
0.5510087013244629,
0.36578917503356934
]
},
"right_eye_brow_up": {
"proj": [
0.605936586856842,
0.3628910779953003
]
},
"right_eye_brow_right": {
"proj": [
0.6533971428871155,
0.3815724551677704
]
},
"left_eye_left": {
"proj": [
0.36383581161499023,
0.42026013135910034
]
},
"left_eye": {
"proj": [
0.40019693970680237,
0.41769641637802124
]
},
"left_eye_right": {
"proj": [
0.43689751625061035,
0.420216828584671
]
},
"right_eye_left": {
"proj": [
0.5559834241867065,
0.42351624369621277
]
},
"right_eye": {
"proj": [
0.5936024785041809,
0.42254209518432617
]
},
"right_eye_right": {
"proj": [
0.6294519901275635,
0.42667409777641296
]
},
"left_ear_bottom": {
"proj": [
0.3062753677368164,
0.5533547401428223
]
},
"nose_left": {
"proj": [
0.44755253195762634,
0.5365986824035645
]
},
"nose": {
"proj": [
0.49436625838279724,
0.5447950959205627
]
},
"nose_right": {
"proj": [
0.5398702621459961,
0.5389319658279419
]
},
"right_ear_bottom": {
"proj": [
0.688239574432373,
0.5653800368309021
]
},
"mouth_left": {
"proj": [
0.42569857835769653,
0.6216031312942505
]
},
"mouth": {
"proj": [
0.49304357171058655,
0.6264275312423706
]
},
"mouth_right": {
"proj": [
0.5610770583152771,
0.6238235235214233
]
},
"chin": {
"proj": [
0.4932596683502197,
0.7477792501449585
]
},
"fitter_type": "fda"
},
"pose": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"confidence": 0.995682954788208,
"id": 0,
"class": "face",
"bbox": [
0.306640625,
0.361328125,
0.69921875,
0.748046875
],
"liveness": {
"confidence": 0.999340832233429,
"value": "real"
},
"keypoints": {
"left_eye_brow_left": {
"proj": [
0.3412262797355652,
0.3720061779022217
]
},
"left_eye_brow_up": {
"proj": [
0.38930219411849976,
0.35584700107574463
]
},
"left_eye_brow_right": {
"proj": [
0.4452705979347229,
0.36199814081192017
]
},
"right_eye_brow_left": {
"proj": [
0.5510087013244629,
0.36578917503356934
]
},
"right_eye_brow_up": {
"proj": [
0.605936586856842,
0.3628910779953003
]
},
"right_eye_brow_right": {
"proj": [
0.6533971428871155,
0.3815724551677704
]
},
"left_eye_left": {
"proj": [
0.36383581161499023,
0.42026013135910034
]
},
"left_eye": {
"proj": [
0.40019693970680237,
0.41769641637802124
]
},
"left_eye_right": {
"proj": [
0.43689751625061035,
0.420216828584671
]
},
"right_eye_left": {
"proj": [
0.5559834241867065,
0.42351624369621277
]
},
"right_eye": {
"proj": [
0.5936024785041809,
0.42254209518432617
]
},
"right_eye_right": {
"proj": [
0.6294519901275635,
0.42667409777641296
]
},
"left_ear_bottom": {
"proj": [
0.3062753677368164,
0.5533547401428223
]
},
"nose_left": {
"proj": [
0.44755253195762634,
0.5365986824035645
]
},
"nose": {
"proj": [
0.49436625838279724,
0.5447950959205627
]
},
"nose_right": {
"proj": [
0.5398702621459961,
0.5389319658279419
]
},
"right_ear_bottom": {
"proj": [
0.688239574432373,
0.5653800368309021
]
},
"mouth_left": {
"proj": [
0.42569857835769653,
0.6216031312942505
]
},
"mouth": {
"proj": [
0.49304357171058655,
0.6264275312423706
]
},
"mouth_right": {
"proj": [
0.5610770583152771,
0.6238235235214233
]
},
"chin": {
"proj": [
0.4932596683502197,
0.7477792501449585
]
},
"fitter_type": "fda"
},
"pose": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
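Because the liveness-estimator request carries the face attributes produced by face-detector-face-fitter, a v2 face-fitter response can be reused almost verbatim as the liveness request body. A minimal sketch of that hand-off, assuming only the _image and objects fields are required:

```python
def fitter_to_liveness_request(fitter_response: dict) -> dict:
    """Reuse a v2 face-detector-face-fitter response as the request body
    for liveness-estimator: the response already carries the _image and
    objects fields in the shape the liveness request expects."""
    return {
        "_image": fitter_response["_image"],
        "objects": fitter_response["objects"],
    }

# Stand-in for a real face-fitter response (abbreviated).
demo_fitter_response = {
    "_image": {"blob": "image in base64", "format": "IMAGE"},
    "objects": [{"id": 0, "class": "face"}],
}
liveness_request = fitter_to_liveness_request(demo_fitter_response)
```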

Face mask check

note

In this version of Image API, it is recommended to use the quality-assessment-estimator service to determine the presence or absence of a mask on a face, as it currently produces more accurate results than the mask-estimator service.

The request is sent to the mask-estimator service, which determines whether a person in an image is wearing a medical mask.

v1

The API returns the following attributes with calculated values:

objects:

  • mask:
    • value
    • confidence
Request example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"mask": {
"confidence": 0.07230597734451294,
"value": false
},
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • has_medical_mask:
    • value
    • confidence
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"has_medical_mask": {
"confidence": 0.07230597734451294,
"value": false
},
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
]
}
]
}
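Note that v1 nests the result under "mask" while v2 uses "has_medical_mask". A version-tolerant check, sketched under that assumption:

```python
def wearing_mask(obj: dict) -> bool:
    # v1 nests the result under "mask", v2 under "has_medical_mask";
    # accept either key.
    mask = obj.get("has_medical_mask") or obj.get("mask")
    return bool(mask["value"])
```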

Fitting of face landmarks

note

The input image size must not exceed 4.7 MB.

This request is sent to the face-detector-face-fitter service, which detects faces, determines facial landmarks, and calculates head rotation angles.

v1

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • bbox
  • confidence
  • fitter:
    • fitter_type
    • keypoints
    • left_eye
    • right_eye
  • angles:
    • roll
    • pitch
    • yaw
Request example:
{
"$image": "image in base64"
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.895225465297699,
"bbox": [
0.10445103857566766,
0.05966162065894924,
0.7008902077151336,
0.9243098842386465
],
"fitter": {
"fitter_type": "fda",
"keypoints": [
344.24078369140625,
379.23858642578125,
0,
443.0493469238281,
364.8091125488281,
0,
547.5462646484375,
384.35833740234375,
0,
724.5175170898438,
385.01220703125,
0,
816.4994506835938,
366.4952697753906,
0,
899.7161865234375,
380.7967224121094,
0,
391.20654296875,
461.12066650390625,
0,
461.524169921875,
459.44287109375,
0,
531.2512817382812,
467.7398681640625,
0,
721.8792724609375,
468.227294921875,
0,
784.9144897460938,
461.4508056640625,
0,
854.609130859375,
465.002685546875,
0,
250.21035766601562,
657.1244506835938,
0,
559.1598510742188,
666.7738647460938,
0,
641.8836059570312,
678.353515625,
0,
710.0083618164062,
670.3438110351562,
0,
939.4479370117188,
656.3207397460938,
0,
509.8494873046875,
816.0798950195312,
0,
634.861083984375,
823.5408325195312,
0,
748.5276489257812,
813.4531860351562,
0,
633.5501098632812,
1033.822509765625,
0
],
"left_eye": [
461.524169921875,
459.44287109375
],
"right_eye": [
784.9144897460938,
461.4508056640625
]
},
"angles": {
"yaw": 6.648662090301514,
"roll": 0.3107689917087555,
"pitch": -23.410654067993164
}
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • bbox
  • confidence
  • keypoints:
    • fitter_type
    • left_eye_brow_left
    • left_eye_brow_up
    • left_eye_brow_right
    • right_eye_brow_left
    • right_eye_brow_up
    • right_eye_brow_right
    • left_eye_left
    • left_eye
    • left_eye_right
    • right_eye_left
    • right_eye
    • right_eye_right
    • left_ear_bottom
    • nose_left
    • nose
    • nose_right
    • right_ear_bottom
    • mouth_left
    • mouth
    • mouth_right
    • chin
  • pose:
    • roll
    • pitch
    • yaw
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
}
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.970888078212738,
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
],
"keypoints": {
"left_eye_brow_left": {
"proj": [
0.3412262797355652,
0.3720061779022217
]
},
"left_eye_brow_up": {
"proj": [
0.38930219411849976,
0.35584700107574463
]
},
"left_eye_brow_right": {
"proj": [
0.4452705979347229,
0.36199814081192017
]
},
"right_eye_brow_left": {
"proj": [
0.5510087013244629,
0.36578917503356934
]
},
"right_eye_brow_up": {
"proj": [
0.605936586856842,
0.3628910779953003
]
},
"right_eye_brow_right": {
"proj": [
0.6533971428871155,
0.3815724551677704
]
},
"left_eye_left": {
"proj": [
0.36383581161499023,
0.42026013135910034
]
},
"left_eye": {
"proj": [
0.40019693970680237,
0.41769641637802124
]
},
"left_eye_right": {
"proj": [
0.43689751625061035,
0.420216828584671
]
},
"right_eye_left": {
"proj": [
0.5559834241867065,
0.42351624369621277
]
},
"right_eye": {
"proj": [
0.5936024785041809,
0.42254209518432617
]
},
"right_eye_right": {
"proj": [
0.6294519901275635,
0.42667409777641296
]
},
"left_ear_bottom": {
"proj": [
0.3062753677368164,
0.5533547401428223
]
},
"nose_left": {
"proj": [
0.44755253195762634,
0.5365986824035645
]
},
"nose": {
"proj": [
0.49436625838279724,
0.5447950959205627
]
},
"nose_right": {
"proj": [
0.5398702621459961,
0.5389319658279419
]
},
"right_ear_bottom": {
"proj": [
0.688239574432373,
0.5653800368309021
]
},
"mouth_left": {
"proj": [
0.42569857835769653,
0.6216031312942505
]
},
"mouth": {
"proj": [
0.49304357171058655,
0.6264275312423706
]
},
"mouth_right": {
"proj": [
0.5610770583152771,
0.6238235235214233
]
},
"chin": {
"proj": [
0.4932596683502197,
0.7477792501449585
]
},
"fitter_type": "fda"
},
"pose": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
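The v2 keypoints object mixes named landmarks (each with a "proj" pair) and the "fitter_type" string. The sketch below converts the projections to pixel positions; treating "proj" values as normalized to the image size is an assumption based on their [0, 1] range in the example:

```python
def keypoints_to_pixels(keypoints: dict, width: int, height: int) -> dict:
    """Convert v2 keypoint "proj" coordinates to pixel positions.

    The "proj" values in the example lie in [0, 1], which suggests they
    are normalized to the image size (assumption); "fitter_type" is a
    sibling string field, not a keypoint, and is skipped.
    """
    pixels = {}
    for name, kp in keypoints.items():
        if name == "fitter_type":
            continue
        x, y = kp["proj"]
        pixels[name] = (x * width, y * height)
    return pixels
```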

Image quality assessment

This request is sent to the quality-assessment-estimator service. In the request body, pass the values of the face attributes obtained after processing the image with face-detector-face-fitter.

v1

The API returns the following attributes with calculated values:

objects:

  • quality:
    • qaa:
      • totalScore
      • isSharp
      • sharpnessScore
      • isEvenlyIlluminated
      • illuminationScore
      • noFlare
      • isLeftEyeOpened
      • leftEyeOpennessScore
      • isRightEyeOpened
      • rightEyeOpennessScore
      • isBackgroundUniform
      • backgroundUniformityScore
      • isDynamicRangeAcceptable
      • dynamicRangeScore
      • isEyesDistanceAcceptable
      • eyesDistance
      • isNotNoisy
      • noiseScore
      • isMarginsAcceptable
      • marginInnerDeviation
      • marginOuterDeviation
      • isNeutralEmotion
      • neutralEmotionScore
      • notMasked
      • notMaskedScore
      • hasWatermark
      • watermarkScore
      • isRotationAcceptable
      • maxRotationDeviation
Request example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.895225465297699,
"bbox": [
0.10445103857566766,
0.05966162065894924,
0.7008902077151336,
0.9243098842386465
],
"fitter": {
"fitter_type": "fda",
"keypoints": [
344.24078369140625,
379.23858642578125,
0,
443.0493469238281,
364.8091125488281,
0,
547.5462646484375,
384.35833740234375,
0,
724.5175170898438,
385.01220703125,
0,
816.4994506835938,
366.4952697753906,
0,
899.7161865234375,
380.7967224121094,
0,
391.20654296875,
461.12066650390625,
0,
461.524169921875,
459.44287109375,
0,
531.2512817382812,
467.7398681640625,
0,
721.8792724609375,
468.227294921875,
0,
784.9144897460938,
461.4508056640625,
0,
854.609130859375,
465.002685546875,
0,
250.21035766601562,
657.1244506835938,
0,
559.1598510742188,
666.7738647460938,
0,
641.8836059570312,
678.353515625,
0,
710.0083618164062,
670.3438110351562,
0,
939.4479370117188,
656.3207397460938,
0,
509.8494873046875,
816.0798950195312,
0,
634.861083984375,
823.5408325195312,
0,
748.5276489257812,
813.4531860351562,
0,
633.5501098632812,
1033.822509765625,
0
],
"left_eye": [
461.524169921875,
459.44287109375
],
"right_eye": [
784.9144897460938,
461.4508056640625
]
},
"angles": {
"yaw": 6.648662090301514,
"roll": 0.3107689917087555,
"pitch": -23.410654067993164
}
}
]
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.69044026635,
"bbox": [
0.42242398858070374,
0.05838850140571594,
0.5360375642776489,
0.17216356098651886
],
"quality": {
"qaa": {
"totalScore": 0,
"isSharp": true,
"sharpnessScore": 0,
"isEvenlyIlluminated": true,
"illuminationScore": 0,
"noFlare": true,
"isLeftEyeOpened": true,
"leftEyeOpennessScore": 0,
"isRightEyeOpened": true,
"rightEyeOpennessScore": 0,
"isRotationAcceptable": true,
"maxRotationDeviation": 0,
"notMasked": true,
"notMaskedScore": 0,
"isNeutralEmotion": true,
"neutralEmotionScore": 0,
"isEyesDistanceAcceptable": true,
"eyesDistance": 0,
"isMarginsAcceptable": true,
"marginOuterDeviation": 0,
"marginInnerDeviation": 0,
"isNotNoisy": true,
"noiseScore": 0,
"watermarkScore": 0,
"hasWatermark": true,
"dynamicRangeScore": 0,
"isDynamicRangeAcceptable": true,
"backgroundUniformityScore": 0,
"isBackgroundUniform": true
}
},
"fitter": {
"fitter_type": "fda",
"keypoints": [
344.24078369140625,
379.23858642578125,
0,
443.0493469238281,
364.8091125488281,
0,
547.5462646484375,
384.35833740234375,
0,
724.5175170898438,
385.01220703125,
0,
816.4994506835938,
366.4952697753906,
0,
899.7161865234375,
380.7967224121094,
0,
391.20654296875,
461.12066650390625,
0,
461.524169921875,
459.44287109375,
0,
531.2512817382812,
467.7398681640625,
0,
721.8792724609375,
468.227294921875,
0,
784.9144897460938,
461.4508056640625,
0,
854.609130859375,
465.002685546875,
0,
250.21035766601562,
657.1244506835938,
0,
559.1598510742188,
666.7738647460938,
0,
641.8836059570312,
678.353515625,
0,
710.0083618164062,
670.3438110351562,
0,
939.4479370117188,
656.3207397460938,
0,
509.8494873046875,
816.0798950195312,
0,
634.861083984375,
823.5408325195312,
0,
748.5276489257812,
813.4531860351562,
0,
633.5501098632812,
1033.822509765625,
0
],
"left_eye": [
461.524169921875,
459.44287109375
],
"right_eye": [
784.9144897460938,
461.4508056640625
]
},
"angles": {
"yaw": 6.648662090301514,
"roll": 0.3107689917087555,
"pitch": -23.410654067993164
}
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • quality:
    • total_score
    • is_sharp
    • sharpness_score
    • is_evenly_illuminated
    • illumination_score
    • no_flare
    • is_left_eye_opened
    • left_eye_openness_score
    • is_right_eye_opened
    • right_eye_openness_score
    • is_background_uniform
    • background_uniformity_score
    • is_dynamic_range_acceptable
    • dynamic_range_score
    • is_eyes_distance_acceptable
    • eyes_distance
    • is_not_noisy
    • noise_score
    • is_margins_acceptable
    • margin_inner_deviation
    • margin_outer_deviation
    • is_neutral_emotion
    • neutral_emotion_score
    • not_masked
    • not_masked_score
    • has_watermark
    • watermark_score
    • is_rotation_acceptable
    • max_rotation_deviation
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"pose": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
},
"keypoints": {
"left_eye_brow_left": {
"proj": [
0.3412262797355652,
0.3720061779022217
]
},
"left_eye_brow_up": {
"proj": [
0.38930219411849976,
0.35584700107574463
]
},
"left_eye_brow_right": {
"proj": [
0.4452705979347229,
0.36199814081192017
]
},
"right_eye_brow_left": {
"proj": [
0.5510087013244629,
0.36578917503356934
]
},
"right_eye_brow_up": {
"proj": [
0.605936586856842,
0.3628910779953003
]
},
"right_eye_brow_right": {
"proj": [
0.6533971428871155,
0.3815724551677704
]
},
"left_eye_left": {
"proj": [
0.36383581161499023,
0.42026013135910034
]
},
"left_eye": {
"proj": [
0.40019693970680237,
0.41769641637802124
]
},
"left_eye_right": {
"proj": [
0.43689751625061035,
0.420216828584671
]
},
"right_eye_left": {
"proj": [
0.5559834241867065,
0.42351624369621277
]
},
"right_eye": {
"proj": [
0.5936024785041809,
0.42254209518432617
]
},
"right_eye_right": {
"proj": [
0.6294519901275635,
0.42667409777641296
]
},
"left_ear_bottom": {
"proj": [
0.3062753677368164,
0.5533547401428223
]
},
"nose_left": {
"proj": [
0.44755253195762634,
0.5365986824035645
]
},
"nose": {
"proj": [
0.49436625838279724,
0.5447950959205627
]
},
"nose_right": {
"proj": [
0.5398702621459961,
0.5389319658279419
]
},
"right_ear_bottom": {
"proj": [
0.688239574432373,
0.5653800368309021
]
},
"mouth_left": {
"proj": [
0.42569857835769653,
0.6216031312942505
]
},
"mouth": {
"proj": [
0.49304357171058655,
0.6264275312423706
]
},
"mouth_right": {
"proj": [
0.5610770583152771,
0.6238235235214233
]
},
"chin": {
"proj": [
0.4932596683502197,
0.7477792501449585
]
},
"fitter_type": "fda"
},
"id": 0,
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
],
"confidence": 0.970888078212738,
"class": "face"
}
]
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"quality": {
"total_score": 0.91,
"is_sharp": true,
"sharpness_score": 0.99,
"is_evenly_illuminated": true,
"illumination_score": 0.77,
"no_flare": true,
"is_left_eye_opened": true,
"left_eye_openness_score": 0.99,
"is_right_eye_opened": true,
"right_eye_openness_score": 0.99,
"is_rotation_acceptable": true,
"max_rotation_deviation": -8,
"not_masked": true,
"not_masked_score": 1,
"is_neutral_emotion": true,
"neutral_emotion_score": 0.92,
"is_eyes_distance_acceptable": true,
"eyes_distance": 99,
"is_margins_acceptable": false,
"margin_outer_deviation": 0,
"margin_inner_deviation": 25,
"is_not_noisy": true,
"noise_score": 1,
"watermark_score": 0.02,
"has_watermark": false,
"dynamic_range_score": 2.47,
"is_dynamic_range_acceptable": true,
"background_uniformity_score": 0.67,
"is_background_uniform": false
},
"pose": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
},
"keypoints": {
"left_eye_brow_left": {
"proj": [
0.3412262797355652,
0.3720061779022217
]
},
"left_eye_brow_up": {
"proj": [
0.38930219411849976,
0.35584700107574463
]
},
"left_eye_brow_right": {
"proj": [
0.4452705979347229,
0.36199814081192017
]
},
"right_eye_brow_left": {
"proj": [
0.5510087013244629,
0.36578917503356934
]
},
"right_eye_brow_up": {
"proj": [
0.605936586856842,
0.3628910779953003
]
},
"right_eye_brow_right": {
"proj": [
0.6533971428871155,
0.3815724551677704
]
},
"left_eye_left": {
"proj": [
0.36383581161499023,
0.42026013135910034
]
},
"left_eye": {
"proj": [
0.40019693970680237,
0.41769641637802124
]
},
"left_eye_right": {
"proj": [
0.43689751625061035,
0.420216828584671
]
},
"right_eye_left": {
"proj": [
0.5559834241867065,
0.42351624369621277
]
},
"right_eye": {
"proj": [
0.5936024785041809,
0.42254209518432617
]
},
"right_eye_right": {
"proj": [
0.6294519901275635,
0.42667409777641296
]
},
"left_ear_bottom": {
"proj": [
0.3062753677368164,
0.5533547401428223
]
},
"nose_left": {
"proj": [
0.44755253195762634,
0.5365986824035645
]
},
"nose": {
"proj": [
0.49436625838279724,
0.5447950959205627
]
},
"nose_right": {
"proj": [
0.5398702621459961,
0.5389319658279419
]
},
"right_ear_bottom": {
"proj": [
0.688239574432373,
0.5653800368309021
]
},
"mouth_left": {
"proj": [
0.42569857835769653,
0.6216031312942505
]
},
"mouth": {
"proj": [
0.49304357171058655,
0.6264275312423706
]
},
"mouth_right": {
"proj": [
0.5610770583152771,
0.6238235235214233
]
},
"chin": {
"proj": [
0.4932596683502197,
0.7477792501449585
]
},
"fitter_type": "fda"
},
"id": 0,
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
],
"confidence": 0.970888078212738,
"class": "face"
}
]
}
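A common use of these attributes is gating images before enrollment. The sketch below checks the total score and a few boolean flags from the v2 response; the selected flags and the 0.8 threshold are illustrative choices, not official recommendations:

```python
def passes_quality(qaa: dict, min_total: float = 0.8) -> bool:
    """Gate an image on a few v2 quality attributes.

    The checks and the 0.8 threshold are illustrative choices, not
    official recommendations.
    """
    required_flags = [
        "is_sharp", "is_left_eye_opened", "is_right_eye_opened",
        "is_rotation_acceptable", "not_masked",
    ]
    return qaa["total_score"] >= min_total and all(qaa[f] for f in required_flags)

# Abbreviated sample from the v2 response above.
demo_qaa = {
    "total_score": 0.91, "is_sharp": True, "is_left_eye_opened": True,
    "is_right_eye_opened": True, "is_rotation_acceptable": True,
    "not_masked": True,
}
```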

Extraction of biometric templates

To extract a biometric template, use one of two services: face-detector-template-extractor or template-extractor.

face-detector-template-extractor

note

The input image size must not exceed 4.7 MB.

The request is sent to the face-detector-template-extractor service, which detects faces in the image and generates biometric templates.

v1

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • confidence
  • bbox
  • $template
  • template_size
Request example:
{
"$image": "image in base64"
}
Response example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.895225465297699,
"bbox": [
0.10445103857566766,
0.05966162065894924,
0.7008902077151336,
0.9243098842386465
],
"$template": "template in base64",
"template_size": 74
}
]
}
v2

The API returns the following attributes with calculated values:

objects:

  • id
  • class
  • confidence
  • bbox
  • template:
    • _face_template_extractor_1000_12:
      • blob
      • format
      • dtype
      • shape
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
}
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.895225465297699,
"bbox": [
0.10445103857566766,
0.05966162065894924,
0.7008902077151336,
0.9243098842386465
],
"template": {
"_face_template_extractor_1000_12": {
"blob": "template in base64",
"format": "NDARRAY",
"dtype": "uint8",
"shape": [
296
]
}
}
}
]
}
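The v2 template is a base64-encoded flat uint8 array whose length is declared in "shape". A decoding sketch, validated against that declaration (the 8-byte demo template is hypothetical, standing in for a real 296-byte one):

```python
import base64

def decode_template(template: dict) -> bytes:
    """Decode a v2 biometric template blob into raw bytes and check it
    against the declared dtype and shape (a flat uint8 array here)."""
    entry = template["_face_template_extractor_1000_12"]
    raw = base64.b64decode(entry["blob"])
    if entry["dtype"] != "uint8" or len(raw) != entry["shape"][0]:
        raise ValueError("template blob does not match its declared shape")
    return raw

# Hypothetical 8-byte template standing in for a real 296-byte one.
demo = {
    "_face_template_extractor_1000_12": {
        "blob": base64.b64encode(bytes(range(8))).decode("ascii"),
        "format": "NDARRAY",
        "dtype": "uint8",
        "shape": [8],
    }
}
```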
Errors

This service returns the following set of errors:

  1. The transmitted image is not decoded.
{
"detail": "Failed to decode base64 string"
}

template-extractor

The request is sent to the template-extractor service, which generates biometric templates for all detected faces. In the request body, pass the values of the face attributes obtained after processing the image with face-detector-face-fitter.

v1

The API returns the following attributes with calculated values:

objects:

  • $template
  • template_size
Request example:
{
"$image": "image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.970888078212738,
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
],
"fitter": {
"fitter_type": "fda",
"keypoints": [
174.70785522460938,
190.4671630859375,
0,
199.32272338867188,
182.19366455078125,
0,
227.97854614257812,
185.34304809570312,
0,
282.116455078125,
187.2840576171875,
0,
310.2395324707031,
185.80023193359375,
0,
334.5393371582031,
195.36509704589844,
0,
186.283935546875,
215.17318725585938,
0,
204.9008331298828,
213.86056518554688,
0,
223.6915283203125,
215.15101623535156,
0,
284.66351318359375,
216.84031677246094,
0,
303.9244689941406,
216.341552734375,
0,
322.2794189453125,
218.45713806152344,
0,
156.81298828125,
283.317626953125,
0,
229.1468963623047,
274.738525390625,
0,
253.1155242919922,
278.9350891113281,
0,
276.41357421875,
275.93316650390625,
0,
352.378662109375,
289.4745788574219,
0,
217.95767211914062,
318.26080322265625,
0,
252.4383087158203,
320.73089599609375,
0,
287.2714538574219,
319.39764404296875,
0,
252.5489501953125,
382.86297607421875,
0
],
"left_eye": [
204.9008331298828,
213.86056518554688
],
"right_eye": [
303.9244689941406,
216.341552734375
]
},
"angles": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
Response example:
{
"$image":"image in base64",
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.895225465297699,
"bbox": [
0.10445103857566766,
0.05966162065894924,
0.7008902077151336,
0.9243098842386465
],"fitter": {
"fitter_type": "fda",
"keypoints": [
174.70785522460938,
190.4671630859375,
0,
199.32272338867188,
182.19366455078125,
0,
227.97854614257812,
185.34304809570312,
0,
282.116455078125,
187.2840576171875,
0,
310.2395324707031,
185.80023193359375,
0,
334.5393371582031,
195.36509704589844,
0,
186.283935546875,
215.17318725585938,
0,
204.9008331298828,
213.86056518554688,
0,
223.6915283203125,
215.15101623535156,
0,
284.66351318359375,
216.84031677246094,
0,
303.9244689941406,
216.341552734375,
0,
322.2794189453125,
218.45713806152344,
0,
156.81298828125,
283.317626953125,
0,
229.1468963623047,
274.738525390625,
0,
253.1155242919922,
278.9350891113281,
0,
276.41357421875,
275.93316650390625,
0,
352.378662109375,
289.4745788574219,
0,
217.95767211914062,
318.26080322265625,
0,
252.4383087158203,
320.73089599609375,
0,
287.2714538574219,
319.39764404296875,
0,
252.5489501953125,
382.86297607421875,
0
],
"left_eye": [
204.9008331298828,
213.86056518554688
],
"right_eye": [
303.9244689941406,
216.341552734375
]
},
"angles": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
},
"$template": "template in base64",
"template_size": 74
}
]
}
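As the v1 request example shows, the template-extractor request simply reuses the image and the objects (bbox, fitter keypoints, angles) from the face-detector-face-fitter response. A minimal sketch of the chaining step, assuming a v1-format fitter response dict:

```python
def build_template_extractor_request(fitter_response: dict) -> dict:
    """Build a v1 template-extractor request from a v1
    face-detector-face-fitter response: the image and the detected
    objects are passed through unchanged."""
    return {
        "$image": fitter_response["$image"],
        "objects": fitter_response["objects"],
    }
```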
v2

API returns the following attributes with calculated values:

objects:

  • template:
    • _face_template_extractor_1000_12:
      • blob
      • format
      • dtype
      • shape
Request example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.970888078212738,
"bbox": [
0.267578125,
0.2109375,
0.763671875,
0.71484375
],
"keypoints": {
"left_eye_brow_left": {
"proj": [
0.3412262797355652,
0.3720061779022217
]
},
"left_eye_brow_up": {
"proj": [
0.38930219411849976,
0.35584700107574463
]
},
"left_eye_brow_right": {
"proj": [
0.4452705979347229,
0.36199814081192017
]
},
"right_eye_brow_left": {
"proj": [
0.5510087013244629,
0.36578917503356934
]
},
"right_eye_brow_up": {
"proj": [
0.605936586856842,
0.3628910779953003
]
},
"right_eye_brow_right": {
"proj": [
0.6533971428871155,
0.3815724551677704
]
},
"left_eye_left": {
"proj": [
0.36383581161499023,
0.42026013135910034
]
},
"left_eye": {
"proj": [
0.40019693970680237,
0.41769641637802124
]
},
"left_eye_right": {
"proj": [
0.43689751625061035,
0.420216828584671
]
},
"right_eye_left": {
"proj": [
0.5559834241867065,
0.42351624369621277
]
},
"right_eye": {
"proj": [
0.5936024785041809,
0.42254209518432617
]
},
"right_eye_right": {
"proj": [
0.6294519901275635,
0.42667409777641296
]
},
"left_ear_bottom": {
"proj": [
0.3062753677368164,
0.5533547401428223
]
},
"nose_left": {
"proj": [
0.44755253195762634,
0.5365986824035645
]
},
"nose": {
"proj": [
0.49436625838279724,
0.5447950959205627
]
},
"nose_right": {
"proj": [
0.5398702621459961,
0.5389319658279419
]
},
"right_ear_bottom": {
"proj": [
0.688239574432373,
0.5653800368309021
]
},
"mouth_left": {
"proj": [
0.42569857835769653,
0.6216031312942505
]
},
"mouth": {
"proj": [
0.49304357171058655,
0.6264275312423706
]
},
"mouth_right": {
"proj": [
0.5610770583152771,
0.6238235235214233
]
},
"chin": {
"proj": [
0.4932596683502197,
0.7477792501449585
]
},
"fitter_type": "fda"
},
"pose": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
Response example:
{
"_image": {
"blob": "image in base64",
"format": "IMAGE"
},
"objects": [
{
"template": {
"_face_template_extractor_1000_12": {
"blob": "template in base64",
"format": "NDARRAY",
"dtype": "uint8",
"shape": [
296
]
}
},
"confidence": 0.995682954788208,
"id": 0,
"class": "face",
"bbox": [
0.306640625,
0.361328125,
0.69921875,
0.748046875
],
"keypoints": {
"left_eye_brow_left": {
"proj": [
0.3412262797355652,
0.3720061779022217
]
},
"left_eye_brow_up": {
"proj": [
0.38930219411849976,
0.35584700107574463
]
},
"left_eye_brow_right": {
"proj": [
0.4452705979347229,
0.36199814081192017
]
},
"right_eye_brow_left": {
"proj": [
0.5510087013244629,
0.36578917503356934
]
},
"right_eye_brow_up": {
"proj": [
0.605936586856842,
0.3628910779953003
]
},
"right_eye_brow_right": {
"proj": [
0.6533971428871155,
0.3815724551677704
]
},
"left_eye_left": {
"proj": [
0.36383581161499023,
0.42026013135910034
]
},
"left_eye": {
"proj": [
0.40019693970680237,
0.41769641637802124
]
},
"left_eye_right": {
"proj": [
0.43689751625061035,
0.420216828584671
]
},
"right_eye_left": {
"proj": [
0.5559834241867065,
0.42351624369621277
]
},
"right_eye": {
"proj": [
0.5936024785041809,
0.42254209518432617
]
},
"right_eye_right": {
"proj": [
0.6294519901275635,
0.42667409777641296
]
},
"left_ear_bottom": {
"proj": [
0.3062753677368164,
0.5533547401428223
]
},
"nose_left": {
"proj": [
0.44755253195762634,
0.5365986824035645
]
},
"nose": {
"proj": [
0.49436625838279724,
0.5447950959205627
]
},
"nose_right": {
"proj": [
0.5398702621459961,
0.5389319658279419
]
},
"right_ear_bottom": {
"proj": [
0.688239574432373,
0.5653800368309021
]
},
"mouth_left": {
"proj": [
0.42569857835769653,
0.6216031312942505
]
},
"mouth": {
"proj": [
0.49304357171058655,
0.6264275312423706
]
},
"mouth_right": {
"proj": [
0.5610770583152771,
0.6238235235214233
]
},
"chin": {
"proj": [
0.4932596683502197,
0.7477792501449585
]
},
"fitter_type": "fda"
},
"pose": {
"yaw": -0.8864548802375793,
"roll": -0.08261164277791977,
"pitch": -16.430391311645508
}
}
]
}
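Note the relationship between the two keypoint formats: the v1 "keypoints" array holds absolute pixel coordinates, while the v2 "proj" values appear to be those same coordinates divided by the image dimensions (the numbers in the examples above are consistent with a 512×512 image). A minimal conversion sketch under that assumption:

```python
def pixel_to_proj(x: float, y: float, width: int, height: int) -> list:
    """Convert an absolute pixel keypoint (v1 format) to an
    image-relative "proj" pair (v2 format), assuming "proj" is
    pixel coordinates normalized by image width and height."""
    return [x / width, y / height]
```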

Face verification

The request is sent to the verify-matcher service, which verifies faces by comparing their biometric templates. In the request body, pass the face attribute values obtained after processing the image with face-detector-template-extractor.

v1

API returns the following attributes with calculated values:

verification:

  • distance
  • fa_r
  • fr_r
  • score
Request example:
{
"objects": [
{
"id": 0,
"class": "face",
"confidence": 0.9171707630157471,
"bbox": [
0.14427860696517414,
0.21912350597609562,
0.8656716417910447,
0.796812749003984
],
"$template": "template in base64",
"template_size": 74
},
{
"id": 0,
"class": "face",
"confidence": 0.8453116416931152,
"bbox": [
0.16477272727272727,
0.22272727272727272,
0.875,
0.7954545454545454
],
"$template": "template in base64",
"template_size": 74
}
]
}
Response example:
{
"objects": [
{
"bbox": [
0.14427860696517414,
0.21912350597609562,
0.8656716417910447,
0.796812749003984
],
"$template": "template in base64",
"id": 0,
"class": "face",
"confidence": 0.9171707630157471,
"template_size": 74
},
{
"bbox": [
0.16477272727272727,
0.22272727272727272,
0.875,
0.7954545454545454
],
"$template": "template in base64",
"id": 0,
"class": "face",
"confidence": 0.8453116416931152,
"template_size": 74
}
],
"verification": {
"distance": 4796,
"fa_r": 0,
"fr_r": 0.522820770740509,
"score": 0.9515298008918762
}
}
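The verification block reports the template distance, the false-accept rate (fa_r), the false-reject rate (fr_r), and a similarity score. A typical client decides "same person" by thresholding the score; the 0.9 threshold below is illustrative only, and in practice it should be chosen to meet your target false-accept rate:

```python
def is_match(verification: dict, score_threshold: float = 0.9) -> bool:
    """Decide whether the two compared faces belong to the same person
    by thresholding the similarity score. The default threshold is an
    illustrative placeholder, not a documented recommendation."""
    return verification["score"] >= score_threshold
```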
v2

API returns the following attributes with calculated values:

verification:

  • distance
  • fa_r
  • fr_r
  • score
Request example:
{
"objects": [
{
"class": "face",
"template": {
"_face_template_extractor_1000_12": {
"blob": "template in base64",
"format": "NDARRAY",
"dtype": "uint8",
"shape": [
296
]
}
}
},
{
"class": "face",
"template": {
"_face_template_extractor_1000_12": {
"blob": "template in base64",
"format": "NDARRAY",
"dtype": "uint8",
"shape": [
296
]
}
}
}
]
}
Response example:
{
"objects": [
{
"template": {
"_face_template_extractor_1000_12": {
"blob": "template in base64",
"format": "NDARRAY",
"dtype": "uint8",
"shape": [
296
]
}
},
"class": "face"
},
{
"template": {
"_face_template_extractor_1000_12": {
"blob": "template in base64",
"format": "NDARRAY",
"dtype": "uint8",
"shape": [
296
]
}
},
"class": "face"
}
],
"verification": {
"distance": 4796,
"fa_r": 0,
"fr_r": 0.522820770740509,
"score": 0.9515298008918762
}
}
Errors

This service returns the following set of errors:

Errors:
  1. The transmitted biometric template could not be decoded.
{
"detail": "Failed to decode base64 string"
}