Sensity is a suite of customizable verification tools including liveness detection and face matching, ID document verification, financial document anti-fraud, deepfake detection, and more. These tools can be combined to score the trust of a user's identity and the authenticity of their image, video, and PDF files.
The steps in this guide briefly describe how to integrate the Sensity API with your app and backend. To create verification tasks, receive notifications, and retrieve the results of the analysis, follow the steps below:
To get started with Sensity, please send your request for creating a developer account via the contact form on www.sensity.ai.
Once you have a Sensity developer account, request an API token from the /auth/tokens API endpoint using your credentials with HTTP Basic Auth.
You will receive a JSON object containing an authorization Bearer token that you can use in the SDKs to make subsequent calls to the API.
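As a minimal sketch (assuming the base URL https://api.sensity.ai and placeholder credentials; see the Auth endpoints in the API reference below for the exact response fields), the token request could look like this in JavaScript:

// Request a Bearer token from /auth/tokens using HTTP Basic Auth.
// YOUR_USER and YOUR_PASSWORD are placeholders for your account credentials.
const credentials = btoa("YOUR_USER:YOUR_PASSWORD");

fetch("https://api.sensity.ai/auth/tokens", {
  method: "POST",
  headers: { Authorization: `Basic ${credentials}` },
})
  .then((res) => res.json())
  .then((data) => console.log("Bearer token:", data.token));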
You can integrate Sensity via one of our SDKs:
Install and import one of the SDKs in your application
Create verification tasks depending on your needs or your users' needs
Complete the life cycle of a verification task by handling a completion event in your application flow. A verification can succeed or fail; you have to handle the completion event, retrieve the data, and act according to your application's purpose.
Sensity provides an additional client-side JavaScript SDK which offers significantly improved security guarantees compared to the passive liveness checks implemented in the Sensity API alone, and provides an additional active liveness check which can be enabled optionally.
How to integrate the Liveness detection SDK into your frontend is described in its own section.
We use webhooks to notify your backend when an event happens. Webhooks are useful to trigger actions for asynchronous events like when a verification task completes successfully or not. This is best described in the Webhooks section.
Use the Sensity API via our SDKs to retrieve specific data from completed verification tasks.
You can find more details on the endpoints and the response models in the API section of the documentation.
Sensity's API provides the following solutions, which cover standard use cases. Each solution is composed of different services, each with its own endpoint. API users can serve customized use cases by combining multiple services in their user flow.
Performs an analysis designed to recognize the latest AI-based image, video and audio manipulation and generation.
This solution is composed of:
The above services accept media files, such as images, videos, and audio, as well as URLs as input.
| Input Type | Supported Formats |
|---|---|
| Image | jpg, jpeg, png, tiff, gif, webp, jfif |
| Video | mkv, flv, mov, mp4, avi, webm |
| Audio | wav, mp3, m4a, ogg, aac, flac |

Limits for uploaded files:

| Limit | Max Value |
|---|---|
| File size | 32 MB |
| Video duration | 30 min |
| Video resolution | 2560p |
| Audio duration | 20 min |

Limits for URL inputs:

| Limit | Max Value |
|---|---|
| Video duration | 30 min |
| Audio duration | 20 min |
| URL source | Youtube, Instagram, Facebook, Tiktok, Twitter, Dailymotion |
The above services also accept URLs pointing to supported media files, for example https://www.example.com/image.jpg, https://www.example.com/video.mp4, or https://www.example.com/audio.wav.
Performs an analysis to prevent fraud, employing face matching, liveness detection, and verification of the authenticity of the identity card.
This solution is composed of:
The above services accept media files, such as images, videos, and PDFs, as input. Below are the supported input file formats and limitations for each service.
| Input Type | Supported Formats |
|---|---|
| Image | jpg, jpeg, png, tiff, webp, jfif |
| Video | mkv, flv, mov, mp4, avi, webm |
| Document | pdf |
| Limit | Max Value |
|---|---|
| File size | 32 MB |
| Video duration | 30 min |
| Video resolution | 2560p |
| PDF size | 20 MB |
Tested on an e2-standard-4 (GCP) instance with 4 Intel vCPUs and 16 GB RAM, with an image input for liveness and face matching:

| Solution | Latency ± stdev (s) | RPS (1/s) |
|---|---|---|
| Identity verification | 1.259 ± 0.388 | 0.794 |
Extract data from documents, such as utility bills, invoices, and payslips, in images or PDF files. This helps users automate verification and data entry, and supports anti-fraud operations via computer vision and file forensics.
This solution is composed of:
The above services accept media files, such as images and PDFs, as input. Below are the supported input file formats and limitations for each service.
| Input Type | Supported Formats |
|---|---|
| Image | jpg, jpeg, png, tiff |
| Document | pdf |
| Limit | Max Value |
|---|---|
| Image size | 32 MB |
| PDF size | 20 MB |
| PDF pages | 50 pages |
This section explains how to integrate the Sensity API into your web application using our JavaScript SDK. The guide assumes you have followed the steps in Initial setup and already have a developer account with an API token ready to use.
Create a .npmrc file in your project directory and add the following line, replacing <auth_token> with the token provided to you by Sensity:

//registry.npmjs.org/:_authToken=<auth_token>

Install the npm module:

npm install @sensityai/live-spoof-detection
Initialize the SDK and start the webcam to begin capturing video.
<!-- HTML for the webcam feed -->
<video
id="webcam"
autoplay
playsinline
style="width: 100%; height: auto;"
></video>
<!-- HTML for displaying liveness tasks -->
<div id="livenessTask"></div>
// Import the LiveSpoofDetection module
var LiveSpoofDetection = require("@sensityai/live-spoof-detection");
// Provide video HTML element to LiveSpoofDetection
const webcamElement = document.getElementById("webcam");
const spoof = new LiveSpoofDetection(webcamElement);
// Start webcam
spoof.startVideo().then((videoStarted) => {
if (videoStarted) {
console.log("Video started");
} else {
console.log("Video failed to load");
}
});
To access the Sensity API, set up authentication credentials and make calls to the different services. See the docs for more information about API usage.
// Initializing the SDK
const spoof = new LiveSpoofDetection(webcamElement);
// Accessing the Sensity API
var SensityApi = spoof.SensityApi;
var defaultClient = SensityApi.ApiClient.instance;
// Optionally, set the base path if you're using a different environment
defaultClient.basePath = "https://your.api.path";
// Set up authentication with your API token
var Bearer = defaultClient.authentications.Bearer;
Bearer.apiKey = "API_KEY";
// Make calls to different services
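// For example, instantiate a service client and call its methods
// (a hypothetical sketch; complete calls are shown in the passive
// liveness examples below):
var livenessApi = new SensityApi.LivenessDetectionApi();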
With the Liveness Detection SDK, an additional active liveness check can be enabled optionally. With active liveness, the verification process begins with a short series of challenges in which the user is asked to move their head in order to prove liveness. To enable it, add the following once the video has been started:
<!-- HTML for displaying liveness tasks -->
<div id="livenessTask"></div>
// Div element for displaying liveness tasks
const livenessTask = document.getElementById("livenessTask");
// Run active liveness
spoof.startActiveLiveness(livenessTask);
An active liveness session has a timeout period that will cancel the session if the challenges are not performed in time. Active liveness is complete once the message "Liveness is done" is shown in the livenessTask element.
Passive liveness check allows users to submit photos or videos for liveness verification through the API. Here's how to integrate passive liveness checks into your application:
Capture a photo using the webcam and send it to the liveness service for verification.
// Capture selfie photo and send it for liveness verification
spoof.takeSelfie().then((selfie) => {
// We set up the SensityApi above
var apiInstance = new SensityApi.LivenessDetectionApi();
var task_id = null;
var file = selfie["blob"];
let opts = {
sessionInfo: selfie["sessionInfo"],
};
var callback = function (error, data, response) {
if (error) {
console.error(error);
} else {
task_id = data.task_id;
console.log("Job successfully sent");
}
};
// We send taken photo as file parameter and expect to fetch that API task id
apiInstance.createLivenessDetectionTaskFromFile(file, opts, callback);
});
Record a video using the webcam and send it to the liveness service for verification. You can set the duration of the video by simply passing a parameter to the takeVideo method.
// Start video recording and send it for liveness verification
spoof.takeVideo(duration_in_ms).then((video) => {
// We set up the SensityApi above
var apiInstance = new SensityApi.LivenessDetectionApi();
var task_id = null;
var file = video["blob"];
let opts = {
sessionInfo: video["sessionInfo"],
};
var callback = function (error, data, response) {
if (error) {
console.error(error);
} else {
task_id = data.task_id;
console.log("Job successfully sent");
}
};
// We send recorded video as file parameter and expect to fetch that API task id
apiInstance.createLivenessDetectionTaskFromFile(file, opts, callback);
});
To stop the video recording before the allocated duration_in_ms, use:

spoof.stopTakeVideo();

Once the task is done, you can retrieve the results using the task_id provided in the response returned by the API when you created the task.
let apiInstance = new SensityApi.LivenessDetectionApi();
let callback = (error, data, response) => {
if (error) {
console.error(error);
} else {
console.log("results ", JSON.stringify(response));
}
};
apiInstance.getLivenessDetectionTask(task_id, callback);
After installation you can import the package within script tags.
<script>
import LiveSpoofDetection from '@sensityai/live-spoof-detection';
</script>
You should create a webcam element and another empty div to display liveness tasks, and give ref values to these elements so they can be used with the SDK.
It is important to wrap the camera element in a div with the class 'video-container'.
<template>
<div>
<div class="video-container">
<video ref="webcam" autoplay playsinline></video>
</div>
<div ref="livenessTask"></div>
</div>
</template>
Then you can initialize the SDK with the camera element and use its methods as described in the general JavaScript documentation above:
<script>
import LiveSpoofDetection from '@sensityai/live-spoof-detection';
export default {
mounted() {
const webcamElement = this.$refs.webcam;
const livenessTask = this.$refs.livenessTask;
const spoof = new LiveSpoofDetection(webcamElement);
...
},
}
</script>
After installation you can import the package at the beginning of the file.
import LiveSpoofDetection from "@sensityai/live-spoof-detection";
You should create a webcam element and another empty div to display liveness tasks, and give ref values to these elements so they can be used with the SDK.
It is important to wrap the camera element in a div with the class 'video-container'.
<div className="App">
<div class="video-container">
<video
ref="{webcam}"
className="webcamVideo"
autoplay
playsinline
width="{640}"
height="{480}"
></video>
</div>
<div ref="{liveness}"></div>
</div>
Then you can initialize the SDK with the camera element and use its methods as described in the general JavaScript documentation above:
import { useRef, useEffect } from "react";

function App() {
const webcam = useRef();
const liveness = useRef();
useEffect(() => {
const init = async () => {
const webcamElement = webcam.current;
const livenessTask = liveness.current;
const spoof = new LiveSpoofDetection(webcamElement);
...
}
init();
}, []);
return (
<div className="App">
<div class="video-container">
<video ref={webcam} className="webcamVideo" autoPlay playsInline width={640} height={480}></video>
</div>
<div ref={liveness}></div>
</div>
);
}
export default App;
You can use the sensity_api SDK with mobile application frameworks such as React Native and Flutter:
//registry.npmjs.org/:_authToken={your_auth_token}
npm install @sensityai/sensity_api
import * as SensityApi from "@sensityai/sensity_api";
or:
var SensityApi = require("@sensityai/sensity_api");
import { useEffect } from "react";

const App = () => {
useEffect(() => {
const client = SensityApi.ApiClient.instance;
client.basePath = 'https://api.sensity.ai/';
const Bearer = client.authentications.Bearer;
Bearer.apiKey = 'YOUR_API_KEY';
const apiInstance = new SensityApi.LivenessDetectionApi();
apiInstance.getLivenessDetectionTask(TASK_ID);
...
}, []);
};
Add the following dependencies to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  dart_sdk:
    path: './packages/dart_sdk'

Then run flutter pub get in your project folder and import the package:

import 'package:dart_sdk/api.dart';
import 'dart:typed_data';

import 'package:http/http.dart' as http;
import 'package:http_parser/http_parser.dart';
import 'package:dart_sdk/api.dart';

class SdkService {
  final AIGeneratedImageDetectionApi aigeneratedimageDetectionApi = AIGeneratedImageDetectionApi();

  Future<TasksCreateTaskResponse?> createAIGeneratedImageDetectionTaskFromFile(Uint8List bytes, [String fileName = 'file']) {
    final file = http.MultipartFile.fromBytes(
      'file', // Important: this is the field name
      bytes,
      filename: fileName,
      contentType: MediaType.parse('multipart/form-data'),
    );
    return aigeneratedimageDetectionApi.createAIGeneratedImageDetectionTaskFromFile(file);
  }
}
final imageBytes = YOUR_IMAGE_IN_BYTES_FORMAT;
final result = await SdkService().createAIGeneratedImageDetectionTaskFromFile(imageBytes);
Once a verification task is created by a user, Sensity starts processing the data asynchronously. Because the processing happens at a later time and not in the response to your code execution, we need a mechanism to notify your system about status changes of the verification task.
This is where webhooks come into play, as they can notify your system once a verification task is processed.
A webhook is nothing more than a simple user-provided POST endpoint on a server. The server should support POST requests with the application/json content type.
If the server runs behind a firewall you should whitelist our IP address to allow incoming traffic from our servers. Get in contact with Sensity support for more information.
You can use a service like webhook.site to test receiving webhook events. Or a tool such as ngrok to forward the requests to your local development server.
The Sensity webhook service supports the following events:
data_extraction_analysis_complete
face_manipulation_analysis_complete
face_matching_analysis_complete
forensic_analysis_complete
ai_generated_image_detection_analysis_complete
id_document_authentication_analysis_complete
liveness_detection_analysis_complete
voice_analysis_complete
When a verification task is done, your server will receive an event with the resulting data of the task. An explanation of each event response is described in the GET endpoint responses of the API reference and the Models reference.
To register a webhook, make a POST request to /webhooks/{event_name} using the name of the event and the URL you want to register.
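As a minimal sketch (assuming the liveness_detection_analysis_complete event, a Bearer token, and the form-encoded url parameter shown in the Webhooks API reference below), registering a URL from JavaScript could look like this:

// Register a webhook URL for the liveness detection completion event.
// "https://your.server/webhook" is a placeholder for your own endpoint.
fetch("https://api.sensity.ai/webhooks/liveness_detection_analysis_complete", {
  method: "POST",
  headers: { Authorization: "Bearer API_KEY" },
  body: new URLSearchParams({ url: "https://your.server/webhook" }),
})
  .then((res) => res.json())
  .then((data) => console.log("Webhook registered with id:", data.id));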
Once the analysis has finished, a POST request will be sent by the API to the registered URL with the analysis result.
Note: if no webhook URL is registered before the task analysis is requested, the Sensity API will retry sending the result message for a total of 10 retries. After the 10th consecutive failure, the result can only be retrieved through polling.
This section shows you how to create an express Node.js server and listen to incoming webhook events. This assumes you have already registered a webhook as described in the previous section.

npm install express cors body-parser

Add the following code to index.js. This creates a server listening to incoming requests at http://localhost:3000/webhook.
const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");
const app = express();
const port = 3000;
app.use(bodyParser.json());
app.use(cors());
app.post("/webhook", (req, res) => {
// do logic here
console.log(req.body);
res.status(200).send("Success");
});
app.listen(port, () => {
console.log(`Listening at http://localhost:${port}`);
});
Run the express server with:

node index.js

Then create a task. You can do this with cURL as explained in the API Reference examples or using one of the Client-side SDKs.
As soon as the verification task is completed you should see the results in the terminal output.
Listening at http://localhost:3000
{
id: '808264ae-bd79-457e-89cc-b643e4f55517',
status: 'completed',
result: { live: 'Live' }
}
This section shows how to use the Sensity API endpoints to create and retrieve the results of a verification task using the provided endpoints. The guide assumes you have followed the steps in Initial setup and already have a developer account with an API token ready to use.
To create a specific analysis task, send a POST request to /tasks/{task_name} with the task parameters.
You will receive a JSON response with the following fields:
{
"report_id": "123456789",
"success": true
}
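For example, a minimal sketch using fetch (assuming the liveness_detection task, a Bearer token, and a file input; each task's exact parameters are listed in the API reference):

// Create a liveness detection task by uploading an image or video file
const form = new FormData();
form.append("file", fileBlob); // fileBlob: an image/video Blob or File

fetch("https://api.sensity.ai/tasks/liveness_detection", {
  method: "POST",
  headers: { Authorization: "Bearer API_KEY" },
  body: form,
})
  .then((res) => res.json())
  .then((data) => console.log("Created task, report_id:", data.report_id));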
There are two ways to retrieve the results from the requested analysis:
Due to the asynchronous nature of the API, we strongly recommend using Webhooks.
To get the result of a task, make a GET request to /tasks/{task_name}/{report_id}.
You will receive a JSON object with a status field indicating the status of the task.
{
"status": "message"
// ... other task related fields
}
The possible values for status are the following (a polling sketch is shown after the list):
completed: the other fields in the response will be populated with the analysis result.
in_progress: the analysis is still running. Call the endpoint again later to get the result.
failed: something went wrong. Try to re-create the task and contact support if the issue persists.
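A minimal polling sketch (assuming the liveness_detection task, a Bearer token, and the report_id returned at creation; the polling interval is an arbitrary choice):

// Poll the task endpoint until the analysis completes or fails
async function pollTask(reportId, intervalMs = 3000) {
  while (true) {
    const res = await fetch(
      `https://api.sensity.ai/tasks/liveness_detection/${reportId}`,
      { headers: { Authorization: "Bearer API_KEY" } }
    );
    const data = await res.json();
    if (data.status === "completed") return data; // result fields are populated
    if (data.status === "failed") throw new Error(data.error);
    await new Promise((r) => setTimeout(r, intervalMs)); // still in_progress
  }
}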
In this approach, the user associates a custom user-provided endpoint with an API analysis event. Once the event occurs, the API takes care of sending the result of the event to that endpoint.
How to set up a webhook server is discussed in the Webhooks server side SDK section.
Delete all the previously created authorization tokens for the user.
curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{- "deleted_tokens": [
- "deleted_tokens",
- "deleted_tokens"
], - "success": true
}
Get all authorization JSON Web Tokens (JWT) created for the user.
curl --request GET \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{- "claims": {
- "key": {
- "audience": "audience",
- "endsWith": "endsWith",
- "expiresAt": 0,
- "issuedAt": 6,
- "issuer": "issuer",
- "subject": "subject"
}
}, - "success": true
}
Generate an authorization JSON Web Token (JWT) to access the API.
curl --request POST \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{- "claims": {
- "audience": "audience",
- "endsWith": "endsWith",
- "expiresAt": 0,
- "issuedAt": 6,
- "issuer": "issuer",
- "subject": "subject"
}, - "id": "id",
- "success": true,
- "token": "token"
}
Delete a specific authorization JSON Web Token (JWT).
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | Token ID |
curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{- "deleted_tokens": [
- "deleted_tokens",
- "deleted_tokens"
], - "success": true
}
Get a specific authorization JSON Web Token (JWT) to access the API.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | Token ID |
curl --request GET \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{- "claims": {
- "key": {
- "audience": "audience",
- "endsWith": "endsWith",
- "expiresAt": 0,
- "issuedAt": 6,
- "issuer": "issuer",
- "subject": "subject"
}
}, - "success": true
}
This analysis detects potential AI manipulation of faces present in images and videos, as in the case of face swaps, face reenactment and lipsync.
Supported input files: image / video
Create a task to analyze a media file for deepfake manipulations in the region of the face.
| Parameter | Type | Description |
|---|---|---|
| explain | boolean (default: false) | enable explanation of prediction; available only if input is predicted as fake |
| file | string <binary> | image / video file |
| url | string | (Experimental) media source url |
curl --request POST \
  --url https://api.sensity.ai/tasks/face_manipulation \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form explain=false \
  --form url=string
{- "report_id": "123456789",
- "success": true
}
Polling request for the result of a created face manipulation task.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | report id |
curl --request GET \
  --url https://api.sensity.ai/tasks/face_manipulation/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "error": "error",
- "event_type": "event_type",
- "id": "123456789",
- "no_faces": false,
- "preview": "preview",
- "result": {
- "class_name": "real",
- "class_probability": 1,
- "explanation": [
- {
- "bbox": [
- 0,
- 0
], - "bbox_average_size": [
- 6,
- 6
], - "class_name": "real",
- "class_probability": 1,
- "face": "face",
- "face_heatmap": "face_heatmap",
- "fake_bbox": [
- 1,
- 1
], - "fake_bbox_average_size": [
- 5,
- 5
], - "fake_face": "fake_face",
- "fake_frame_idx": 5,
- "fake_frame_ms": 2.3021358869347655,
- "fake_probability": 1,
- "frame_idx": 7,
- "frame_ms": 9.301444243932576,
- "id": 3,
- "tracks": [
- {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}, - {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}
]
}, - {
- "bbox": [
- 0,
- 0
], - "bbox_average_size": [
- 6,
- 6
], - "class_name": "real",
- "class_probability": 1,
- "face": "face",
- "face_heatmap": "face_heatmap",
- "fake_bbox": [
- 1,
- 1
], - "fake_bbox_average_size": [
- 5,
- 5
], - "fake_face": "fake_face",
- "fake_frame_idx": 5,
- "fake_frame_ms": 2.3021358869347655,
- "fake_probability": 1,
- "frame_idx": 7,
- "frame_ms": 9.301444243932576,
- "id": 3,
- "tracks": [
- {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}, - {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}
]
}
], - "manipulation_type": "lipsync"
}, - "status": "completed"
}
This analysis detects AI-generated photo-realistic images and video. Images may be created, for example, by Generative Adversarial Networks (GAN) and Diffusion Models (DM) like Stable Diffusion, MidJourney, Dalle-2, and others. Videos may be made with tools such as Runway, Sora, Luma, Pika, Kling, and others. An optional explanation of the prediction is provided for some classes of model generators, and only if the input is predicted as fake. Available explanation types:
GANDetectorExplanations: returns an overlay image. Images generated by this family of GANs can be recognized by the fixed position of the eyes and mouth.
Supported input files: image / video
The task analyses an image to check if it is generated with AI.
| Parameter | Type | Description |
|---|---|---|
| explain | boolean (default: false) | enable explanation of prediction; available only for some classes of model generators and if input is predicted as fake |
| file | string <binary> | image / video file |
| url | string | (Experimental) media source url |
curl --request POST \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form explain=false \
  --form url=string
{- "report_id": "123456789",
- "success": true
}
Polling request for the result of a created AI-generated image detection task.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | report id |
curl --request GET \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "error": "error",
- "event_type": "event_type",
- "id": "123456789",
- "result": {
- "class_name": "fake",
- "class_probability": 0.999,
- "explanation": {
- "details": {
- "eye_mouth_overlays_image": "eye_mouth_overlays_image",
- "heatmap": "heatmap",
- "heatmap_object": "heatmap_object"
}, - "eye_mouth_overlays_image": "eye_mouth_overlays_image",
- "heatmap": "heatmap",
- "heatmap_object": "heatmap_object",
- "type": "GANDetectorExplanations"
}, - "model_attribution": "stylegan",
- "model_attribution_probability": 0.9
}, - "status": "completed"
}
This analysis detects AI-generated voices and voice cloning in media. It also provides transcription and translation to English.
Supported input files: audio / video
| Parameter | Type | Description |
|---|---|---|
| explain | boolean (default: false) | enable explanation of prediction |
| file | string <binary> | audio file |
| transcribe | boolean (default: true) | enable speech-to-text |
| url | string | (Experimental) media source url |
curl --request POST \
  --url https://api.sensity.ai/tasks/voice_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data file=string \
  --data explain=false \
  --data transcribe=true \
  --data url=string
{- "report_id": "123456789",
- "success": true
}
Polling request for the result of a created voice analysis task.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | report id |
curl --request GET \
  --url https://api.sensity.ai/tasks/voice_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "error": "error",
- "event_type": "event_type",
- "id": "c445f38a-f404-48c8-9054-31b289baa685",
- "result": {
- "class_name": "real",
- "class_probability": 1,
- "transcript": "{}"
}, - "status": "completed"
}
This service uses forensic analysis techniques to determine whether the file was digitally created or manipulated. In particular, the test will find traces of suspect software editing, use of screenshot software, file dates mismatch, multiple versions of the files, and more.
Supported input files: image / video / pdf / audio
Create a Forensic analysis task to check the given media file or document.
| Parameter | Type | Description |
|---|---|---|
| additional_info | string | Additional information for report |
| file | string <binary> | image / video / pdf / audio |
| is_extract_images | boolean (default: false) | request to send extracted images from pdf |
| url | string | (Experimental) media source url |
curl --request POST \
  --url https://api.sensity.ai/tasks/forensic_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form is_extract_images=false \
  --form additional_info=string \
  --form url=string
{- "report_id": "123456789",
- "success": true
}
Polling request for the result of a created Forensic analysis task.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | report id |
curl --request GET \
  --url https://api.sensity.ai/tasks/forensic_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "additional_info": "additional_info",
- "error": "error",
- "event_type": "event_type",
- "id": "123456789",
- "result": {
- "document_details": {
- "GPS_location": "2 North Bullard St, New York, NY 10013, USA",
- "digital_signature_by": "Signer",
- "password_protected": false,
- "source": "Physical"
}, - "extracted_images": [
- "extracted_images",
- "extracted_images"
], - "red_flags": {
- "FILE_EDITED": "The file was edited after creation",
- "METADATA_ERROR": "The file metadata was edited or deleted",
- "SUSPECT_SOFTWARE": "Suspect software creator or editor"
}, - "trigger_values": {
- "FILE_EDITED": [
- "The file was edited after creation"
], - "METADATA_ERROR": [
- "The file metadata was edited or deleted"
], - "SUSPECT_SOFTWARE": [
- "Suspect software creator or editor"
]
}
}, - "status": "completed"
}
This task analyses an ID image and verifies its authenticity by performing different security checks. For this task, the visual parts of the image and zones like the Machine Readable Zone (MRZ) are parsed.
Supported input files: image
Create a task to analyse the given ID document.
| Parameter | Type | Description |
|---|---|---|
| additional_info | string | Additional information for report |
| back | string <binary> | ID back side |
| extra_file | string <binary> | [DEPRECATED] use the back parameter. ID back side |
| file | string <binary> | [DEPRECATED] use the front parameter. ID front side |
| front | string <binary> | ID front side |
| skip_forgery | boolean (default: false) | skip forgery detection |
curl --request POST \
  --url https://api.sensity.ai/tasks/id_document_authentication \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form front=string \
  --form back=string \
  --form file=string \
  --form extra_file=string \
  --form skip_forgery=false \
  --form additional_info=string
{- "report_id": "123456789",
- "success": true
}
Polling the result of an ID document authentication task.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | report id |
curl --request GET \
  --url https://api.sensity.ai/tasks/id_document_authentication/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "additional_info": "additional_info",
- "error": "error",
- "event_type": "event_type",
- "id": "c445f38a-f404-48c8-9054-31b289baa685",
- "result": {
- "authenticity_checks": [
- {
- "authenticity_check_type": "image pattern",
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "diagnose_check": "pass",
- "errorMessage": "Image is out of focus",
- "etalon_image": "base64 string",
- "image": "base64 string",
- "probability": 99,
- "result": "Passed",
- "security_check": "security_feature_type_photo"
}, - {
- "authenticity_check_type": "image pattern",
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "diagnose_check": "pass",
- "errorMessage": "Image is out of focus",
- "etalon_image": "base64 string",
- "image": "base64 string",
- "probability": 99,
- "result": "Passed",
- "security_check": "security_feature_type_photo"
}
], - "document": {
- "details": {
- "country_name": "Vietnam",
- "document_name": "Vietnam - Passport (2005)",
- "iso_code": "VNM",
- "type": "Passport",
- "year": "2005"
}, - "position": {
- "left_bottom": {
- "x": 0,
- "y": 6
}, - "left_top": {
- "x": 0,
- "y": 6
}, - "right_bottom": {
- "x": 0,
- "y": 6
}, - "rigth_top": {
- "x": 0,
- "y": 6
}
}
}, - "duplicate_detection": {
- "portraitHash": "916a7f30b5344cda",
- "result": "Passed"
}, - "face_detection": {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "orientation": 1
}, - "forgery_detection": {
- "mask": "base64 string for forgery mask",
- "portrait": {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "confidence_score": 0,
- "label": 1,
- "page_index": 0
}, - "status": "Passed"
}, - "image_quality_checks": [
- {
- "errorMessage": "Image is out of focus",
- "probability": 99,
- "result": "Passed"
}, - {
- "errorMessage": "Image is out of focus",
- "probability": 99,
- "result": "Passed"
}
], - "images": [
- {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "image": "base64 string",
- "name": "Portrait"
}, - {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "image": "base64 string",
- "name": "Portrait"
}
], - "status": {
- "details": {
- "doc_type": "Passed",
- "duplicate_detection": "Passed",
- "expiry": "Passed",
- "forgery_detection": "Passed",
- "image_qa": "Passed",
- "mrz": "Passed",
- "pages_count": "Passed",
- "security": "Passed",
- "text": "Passed"
}, - "result": "Passed"
}, - "text": {
- "comparison": "Passed",
- "extracted": [
- {
- "barcode": "{}",
- "comparison": "Failed",
- "field_type": "field_type",
- "lexical_analysis": "{}",
- "mrz": "{}",
- "name": "name",
- "validity": "Failed",
- "visual": "{}"
}, - {
- "barcode": "{}",
- "comparison": "Failed",
- "field_type": "field_type",
- "lexical_analysis": "{}",
- "mrz": "{}",
- "name": "name",
- "validity": "Failed",
- "visual": "{}"
}
], - "result": "Passed",
- "validity": "Passed"
}
}, - "status": "completed"
}
This analysis performs a series of passive liveness checks on the face found in the input media. Its goal is to verify whether the face belongs to a real person in front of the camera vs. a face from a photo print, a digital device, or 2D/3D masks. For detection to work properly: look straight ahead, remove sunglasses or hat, make sure the light is good and avoid flash photography.
Supported input files: image / video
Checks if an image is live or a spoof attempt.
| Parameter | Type | Description |
|---|---|---|
| additional_info | string | Additional information for report |
| file (required) | string <binary> | image / video |
| session_info | string | This is reserved for use by Sensity's Client-Side SDK. Do not fill it or your request will result in a failure |
curl --request POST \
  --url https://api.sensity.ai/tasks/liveness_detection \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form session_info=string \
  --form additional_info=string
{- "report_id": "123456789",
- "success": true
}
Polling request for the result of a Liveness detection task.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | report id |
curl --request GET \
  --url https://api.sensity.ai/tasks/liveness_detection/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "additional_info": "additional_info",
- "error": "error",
- "event_type": "event_type",
- "id": "123456789",
- "no_faces": false,
- "result": {
- "image_quality_checks": "{}",
- "live": "Live"
}, - "session_info": "session_info",
- "status": "completed"
}
Enrich data from the text in the documents provided to the analysis. In addition, the service returns the following results:
Classification of the document type into a set of known documents
Occurrences of a set of keywords provided to the endpoint
Possible overlapping text boxes, to warn about possible fraud
Font analysis, with a font list and its frequencies
Meaningful data extracted from the file according to the document type
Note: if you need to extract data from and verify the validity of ID documents, such as passports and national ID cards, please refer to the ID Document Authentication service. Data Extraction is meant for generic financial documents, such as invoices, bank statements, utility bills, etc.
Supported input files: image / pdf
Create a task to classify a document and find keywords.
| Parameter | Type | Description |
|---|---|---|
| date_order | string | Enum: "DMY", "MDY", "YMD", "YDM". Date order as it appears in the document, default: |
| document_type | string | Ground truth document type |
| file (required) | string <binary> | image / pdf |
| is_extract_images | boolean (default: false) | request to send extracted images from pdf |
| keywords | Array of strings | Keywords to search in the document |
curl --request POST \
  --url https://api.sensity.ai/tasks/data_extraction \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form 'keywords=["string"]' \
  --form document_type=string \
  --form is_extract_images=false \
  --form date_order=DMY
{- "report_id": "123456789",
- "success": true
}
Polling request for the result of a created Data extraction task.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | report id |
curl --request GET \
  --url https://api.sensity.ai/tasks/data_extraction/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "error": "error",
- "event_type": "event_type",
- "id": "c445f38a-f404-48c8-9054-31b289baa685",
- "result": {
- "document_type": "Unknown",
- "font_analysis": {
- "fonts": [
- null,
- null
], - "frequency": {
- "key": 0
}
}, - "keywords": [
- {
- "keyword": "key",
- "occurencies": [
- 0,
- 4,
- 7
]
}, - {
- "keyword": "key",
- "occurencies": [
- 0,
- 4,
- 7
]
}
], - "ocr": [
- {
- "corners": [
- [
- 6.027456183070403,
- 6.027456183070403
], - [
- 6.027456183070403,
- 6.027456183070403
]
], - "extracted_from": "ocr",
- "font_name": "Iosevka",
- "overlaps": [
- 3,
- 6,
- 7
], - "page": 1,
- "text": "words"
}, - {
- "corners": [
- [
- 6.027456183070403,
- 6.027456183070403
], - [
- 6.027456183070403,
- 6.027456183070403
]
], - "extracted_from": "ocr",
- "font_name": "Iosevka",
- "overlaps": [
- 3,
- 6,
- 7
], - "page": 1,
- "text": "words"
}
], - "overlapping_text": [
- [
- 1,
- 1
], - [
- 1,
- 1
]
]
}, - "status": "completed"
}
Delete all webhook URLs assigned to an event.
| Parameter | Type | Description |
|---|---|---|
| event (required) | string | Event Name. One of: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |
curl --request DELETE \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "success": true
}
Get a list of webhook URLs assigned to a particular event.
| Parameter | Type | Description |
|---|---|---|
| event (required) | string | Event Name. One of: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |
curl --request GET \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "success": true,
- "urls": [
- {
- "id": "id",
- "url": "url"
}, - {
- "id": "id",
- "url": "url"
}
]
}
Assign an event to a URL to which a request will be sent when that event occurs.
| Parameter | Type | Description |
|---|---|---|
| event (required) | string | Event Name. One of: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |
| url (required) | string | Webhook URL |
curl --request POST \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data url=string
{- "id": "id",
- "success": true
}
Delete a webhook URL that is assigned to an event.
| Parameter | Type | Description |
|---|---|---|
| id (required) | string | Webhook ID |
| event (required) | string | Event Name. One of: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |
curl --request DELETE \
  --url https://api.sensity.ai/webhooks/{event}/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{- "success": true
}
| Field | Type | Description |
|---|---|---|
| document_type | string | Enum: "Bank Statement", "Company Formation Document", "Invoice", "Payslip", "VAT". Document type identified by the service. |
| font_analysis | object (DataExtractionFontAnalysis) | Information extracted from the fonts found in the document: the list of fonts in the document and their frequencies. Only available if the document contains embedded text. |
| keywords | Array of objects (DataExtractionKeywords) | Information about the keywords provided by the user as they appear in the document. |
| ocr | Array of objects (DataExtractionOCR) | Information, such as the location, of the text detected in the document, whether embedded or detected by the Sensity OCR model. |
| overlapping_text | Array of integers[ items ] | List of indices of the text boxes which overlap in the document, given as a list of two-element lists indicating which boxes overlap. |
{- "document_type": "Unknown",
- "font_analysis": {
- "fonts": [
- null,
- null
], - "frequency": {
- "key": 0
}
}, - "keywords": [
- {
- "keyword": "key",
- "occurencies": [
- 0,
- 4,
- 7
]
}, - {
- "keyword": "key",
- "occurencies": [
- 0,
- 4,
- 7
]
}
], - "ocr": [
- {
- "corners": [
- [
- 6.027456183070403,
- 6.027456183070403
], - [
- 6.027456183070403,
- 6.027456183070403
]
], - "extracted_from": "ocr",
- "font_name": "Iosevka",
- "overlaps": [
- 3,
- 6,
- 7
], - "page": 1,
- "text": "words"
}, - {
- "corners": [
- [
- 6.027456183070403,
- 6.027456183070403
], - [
- 6.027456183070403,
- 6.027456183070403
]
], - "extracted_from": "ocr",
- "font_name": "Iosevka",
- "overlaps": [
- 3,
- 6,
- 7
], - "page": 1,
- "text": "words"
}
], - "overlapping_text": [
- [
- 1,
- 1
], - [
- 1,
- 1
]
]
}
| Field | Type | Description |
|---|---|---|
| class_name | string | Enum: "real", "fake", "no_faces". Label attributed to the image / video |
| class_probability | number [0..1] | Confidence score of the label attribution |
| explanation | Array of objects (entities.FaceManipulationResultExplanation) | Face manipulation explanation |
| manipulation_type | string | Enum: "faceswap", "face_reenactment", "lipsync". Manipulation type label attributed to the image / video |
{- "class_name": "real",
- "class_probability": 1,
- "explanation": [
- {
- "bbox": [
- 0,
- 0
], - "bbox_average_size": [
- 6,
- 6
], - "class_name": "real",
- "class_probability": 1,
- "face": "face",
- "face_heatmap": "face_heatmap",
- "fake_bbox": [
- 1,
- 1
], - "fake_bbox_average_size": [
- 5,
- 5
], - "fake_face": "fake_face",
- "fake_frame_idx": 5,
- "fake_frame_ms": 2.3021358869347655,
- "fake_probability": 1,
- "frame_idx": 7,
- "frame_ms": 9.301444243932576,
- "id": 3,
- "tracks": [
- {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}, - {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}
]
}, - {
- "bbox": [
- 0,
- 0
], - "bbox_average_size": [
- 6,
- 6
], - "class_name": "real",
- "class_probability": 1,
- "face": "face",
- "face_heatmap": "face_heatmap",
- "fake_bbox": [
- 1,
- 1
], - "fake_bbox_average_size": [
- 5,
- 5
], - "fake_face": "fake_face",
- "fake_frame_idx": 5,
- "fake_frame_ms": 2.3021358869347655,
- "fake_probability": 1,
- "frame_idx": 7,
- "frame_ms": 9.301444243932576,
- "id": 3,
- "tracks": [
- {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}, - {
- "bbox": [
- 2,
- 2
], - "frame_idx": 4,
- "frame_ms": 7.386281948385884,
- "frame_rate": 1.2315135367772556
}
]
}
], - "manipulation_type": "lipsync"
}
| Field | Type |
|---|---|
| identification | object |
| match | boolean |
| score | number |
{- "identification": "{}",
- "match": true,
- "score": 0.8008281904610115
}
| Field | Type | Description |
|---|---|---|
| document_details | object (DocumentDetails) | |
| extracted_images | Array of strings <byte> | Contains the extracted images found in a PDF. Note: only available for PDF files and if is_extract_images is enabled |
| red_flags | object (RedFlags) | Contains different warning messages that inform you if the file was modified or corrupted. Some of the fields are only available for some types of files and won't appear if the warning is not triggered. |
| trigger_values | object (TriggerValues) | |
{- "document_details": {
- "GPS_location": "2 North Bullard St, New York, NY 10013, USA",
- "digital_signature_by": "Signer",
- "password_protected": false,
- "source": "Physical"
}, - "extracted_images": [
- "extracted_images",
- "extracted_images"
], - "red_flags": {
- "FILE_EDITED": "The file was edited after creation",
- "METADATA_ERROR": "The file metadata was edited or deleted",
- "SUSPECT_SOFTWARE": "Suspect software creator or editor"
}, - "trigger_values": {
- "FILE_EDITED": [
- "The file was edited after creation"
], - "METADATA_ERROR": [
- "The file metadata was edited or deleted"
], - "SUSPECT_SOFTWARE": [
- "Suspect software creator or editor"
]
}
}
| Field | Type | Description |
|---|---|---|
| class_name | string | Enum: "real", "fake". Image label (real or fake) |
| class_probability | number [0..1] | Confidence score of the label attributed to the image |
| explanation | object (entities.AIGeneratedImageDetectionExplanation) | |
| model_attribution | string | Enum: "stylegan", "stylegan2", "stylegan3", "generated_photos", "midjourney", "dalle-2", "stable-diffusion", "glide", "firefly", "blue-willow", "unstable-diffusion", "stable-dream". Prediction of the model used to generate the image, if classified as fake |
| model_attribution_probability | number [0..1] | Confidence score of the model attribution |
{- "class_name": "fake",
- "class_probability": 0.999,
- "explanation": {
- "details": {
- "eye_mouth_overlays_image": "eye_mouth_overlays_image",
- "heatmap": "heatmap",
- "heatmap_object": "heatmap_object"
}, - "eye_mouth_overlays_image": "eye_mouth_overlays_image",
- "heatmap": "heatmap",
- "heatmap_object": "heatmap_object",
- "type": "GANDetectorExplanations"
}, - "model_attribution": "stylegan",
- "model_attribution_probability": 0.9
}
| Field | Type | Description |
|---|---|---|
| additional_info | string | |
| error | string | Error message |
| event_type | string | |
| id | string | Identifier of the task |
| result | object (Result) | |
| status | string | Enum: "in_progress", "completed", "failed". Status of the current task |
{- "additional_info": "additional_info",
- "error": "error",
- "event_type": "event_type",
- "id": "c445f38a-f404-48c8-9054-31b289baa685",
- "result": {
- "authenticity_checks": [
- {
- "authenticity_check_type": "image pattern",
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "diagnose_check": "pass",
- "errorMessage": "Image is out of focus",
- "etalon_image": "base64 string",
- "image": "base64 string",
- "probability": 99,
- "result": "Passed",
- "security_check": "security_feature_type_photo"
}, - {
- "authenticity_check_type": "image pattern",
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "diagnose_check": "pass",
- "errorMessage": "Image is out of focus",
- "etalon_image": "base64 string",
- "image": "base64 string",
- "probability": 99,
- "result": "Passed",
- "security_check": "security_feature_type_photo"
}
], - "document": {
- "details": {
- "country_name": "Vietnam",
- "document_name": "Vietnam - Passport (2005)",
- "iso_code": "VNM",
- "type": "Passport",
- "year": "2005"
}, - "position": {
- "left_bottom": {
- "x": 0,
- "y": 6
}, - "left_top": {
- "x": 0,
- "y": 6
}, - "right_bottom": {
- "x": 0,
- "y": 6
}, - "rigth_top": {
- "x": 0,
- "y": 6
}
}
}, - "duplicate_detection": {
- "portraitHash": "916a7f30b5344cda",
- "result": "Passed"
}, - "face_detection": {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "orientation": 1
}, - "forgery_detection": {
- "mask": "base64 string for forgery mask",
- "portrait": {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "confidence_score": 0,
- "label": 1,
- "page_index": 0
}, - "status": "Passed"
}, - "image_quality_checks": [
- {
- "errorMessage": "Image is out of focus",
- "probability": 99,
- "result": "Passed"
}, - {
- "errorMessage": "Image is out of focus",
- "probability": 99,
- "result": "Passed"
}
], - "images": [
- {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "image": "base64 string",
- "name": "Portrait"
}, - {
- "bb": {
- "bottom": 10,
- "left": 0,
- "right": 10,
- "top": 0
}, - "image": "base64 string",
- "name": "Portrait"
}
], - "status": {
- "details": {
- "doc_type": "Passed",
- "duplicate_detection": "Passed",
- "expiry": "Passed",
- "forgery_detection": "Passed",
- "image_qa": "Passed",
- "mrz": "Passed",
- "pages_count": "Passed",
- "security": "Passed",
- "text": "Passed"
}, - "result": "Passed"
}, - "text": {
- "comparison": "Passed",
- "extracted": [
- {
- "barcode": "{}",
- "comparison": "Failed",
- "field_type": "field_type",
- "lexical_analysis": "{}",
- "mrz": "{}",
- "name": "name",
- "validity": "Failed",
- "visual": "{}"
}, - {
- "barcode": "{}",
- "comparison": "Failed",
- "field_type": "field_type",
- "lexical_analysis": "{}",
- "mrz": "{}",
- "name": "name",
- "validity": "Failed",
- "visual": "{}"
}
], - "result": "Passed",
- "validity": "Passed"
}
}, - "status": "completed"
}
| Field | Type | Description |
|---|---|---|
| image_quality_checks | object | |
| live | string | Enum: "Live", "Spoof" |
{- "image_quality_checks": "{}",
- "live": "Live"
}
| Field | Type | Description |
|---|---|---|
| error | string | Error message |
| event_type | string | |
| id | string | Identifier of the task |
| result | object (entities.VoiceAnalysisResult) | |
| status | string | Enum: "in_progress", "completed", "failed". Status of the current task |
{- "error": "error",
- "event_type": "event_type",
- "id": "c445f38a-f404-48c8-9054-31b289baa685",
- "result": {
- "class_name": "real",
- "class_probability": 1,
- "transcript": "{}"
}, - "status": "completed"
}