Sensity API (v2.29.0)


Sensity API Support: support@sensity.ai

Introduction

What is Sensity?

Sensity is a suite of customizable verification tools including liveness detection and face matching, ID documents verification, financial documents anti-fraud, deepfake detection, and more. These tools can be combined to score the trust of a user's identity and the authenticity of their image, video and PDF files.

Getting started

The steps in this guide briefly describe how to integrate the Sensity API with your app and backend. To create verification tasks, receive notifications and retrieve the results of the analysis, follow these steps:

1. Create a developer account

To get started with Sensity, please send your request for creating a developer account via the www.sensity.ai website or by email to support@sensity.ai.

Once you have a Sensity developer account, request an API token from the /auth/tokens API endpoint with your credentials using HTTP BasicAuth.

You will receive a JSON object containing an authorization Bearer token that you can use in the SDKs to make subsequent calls to the API.
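
For example, you can request a token with curl, authenticating with your developer account credentials (your_email:your_password is a placeholder; the full endpoint description is in the Authorization section of the API reference):

curl --request POST \
  --url https://api.sensity.ai/auth/tokens \
  --user 'your_email:your_password'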

2. Integrate Sensity's API via SDKs

You can integrate Sensity via one of our SDKs:

  1. Install and import one of the SDKs in your application

  2. Create verification tasks depending on your needs or those of your users

  3. Complete the life cycle of a verification task by handling the completion event in your application flow. A verification can succeed or fail; handle the completion event, retrieve the data, and act according to your application's purpose.

3. (Optional) Integrate the Liveness detection SDK

Sensity provides an additional client-side JavaScript SDK which offers much stronger security guarantees than the passive liveness checks implemented in the Sensity API alone, and provides an additional active liveness check which can be enabled optionally.

How to integrate the Liveness detection SDK into your frontend is described in its own section.

4. (Optional) Set up the Webhooks

We use webhooks to notify your backend when an event happens. Webhooks are useful for triggering actions on asynchronous events, such as when a verification task completes successfully or fails. This is best described in the Webhooks section.

5. Retrieve data from the API

Use the Sensity API via our SDKs to retrieve specific data from already completed verification tasks.

You can find more details on the endpoints and the response models in the API section of the documentation.

Solutions

Sensity's API provides the following solutions, which cover standard use cases. Each solution is composed of different services, each with its own endpoint. API users can serve customized use cases by combining multiple services in their user flow.

Identity verification

Performs an analysis to prevent fraud using face matching, liveness detection, and identity card authenticity checks.


Supported input file formats

Input Type | Supported Formats
Image      | jpg, png, tiff
Video      | webp, mkv, flv, mov, mp4, avi
Document   | pdf

Input file limitations

Limit            | Max Value
File size        | 32 MB
Video duration   | 30 min
Video resolution | 1080p
PDF size         | 20 MB

Runtime performance

Tested on an e2-standard-4 (GCP) instance with 4 Intel vCPUs and 16 GB RAM, using an image input for liveness and face matching.

Solution              | Latency ± stdev (s) | RPS (1/s)
Identity verification | 1.259 ± 0.388       | 0.794

Document verification

Extracts data from documents, such as utility bills, invoices and payslips, in images or PDF files. Helps users automate verification and data entry, and supports anti-fraud operations via computer vision and file forensics.


Supported input formats

Input Type | Supported Formats
Image      | jpg, png, tiff
Document   | pdf

Input file limitations

Limit      | Max Value
Image size | 32 MB
PDF size   | 20 MB
PDF pages  | 50 pages

Deepfake detection

Performs an analysis designed to recognize the latest AI-based image, video and audio manipulation and generation.


Supported input formats

Input Type | Supported Formats
Image      | jpg, png, tiff, gif
Video      | webp, mkv, flv, mov, mp4, avi
Audio      | wav, mp3, m4a, ogg, aac

Input file limitations

Limit            | Max Value
File size        | 32 MB
Video duration   | 30 min
Video resolution | 1080p
Audio duration   | 20 min

Client-side SDKs

Web

JavaScript

This section explains how to integrate the Sensity API into your web application using our JavaScript SDK. The guide assumes you have followed the steps in Initial setup and already have a developer account with an API token ready to use.

1. Install

Create a .npmrc file in your project directory. Add the following line to the .npmrc file, replacing <auth_token> with the npm token provided by Sensity:

//registry.npmjs.org/:_authToken=<auth_token>

Install the npm module:

npm install @sensityai/live-spoof-detection

2. Setup

Initialize the SDK and start the webcam to begin capturing video.

<!-- HTML for the webcam feed -->
<video id="webcam" autoplay playsinline style="width: 100%; height: auto;"></video>

<!-- HTML for displaying liveness tasks -->
<div id="livenessTask"></div>
// Import the LiveSpoofDetection module
var LiveSpoofDetection = require('@sensityai/live-spoof-detection');

// Provide video HTML element to LiveSpoofDetection
const webcamElement = document.getElementById('webcam');
const spoof = new LiveSpoofDetection(webcamElement);

// Start webcam
spoof.startVideo().then(videoStarted => {
  if(videoStarted) {
    console.log("Video started");
  } else {
    console.log("Video failed to load");
  }
});

To access the Sensity API, set up the authentication credentials and make calls to the different services. See the docs for more information about API usage.

// Initializing the SDK
const spoof = new LiveSpoofDetection(webcamElement);

// Accessing the Sensity API
var SensityApi = spoof.SensityApi;
var defaultClient = SensityApi.ApiClient.instance;

// Optionally, set the base path if you're using a different environment
defaultClient.basePath = 'https://your.api.path';

// Set up authentication with your API token
var Bearer = defaultClient.authentications.Bearer;
Bearer.apiKey = 'API_KEY';

// Make calls to different services

3. Run Active Liveness

With the Liveness Detection SDK, additional active liveness checks can be enabled optionally. With active liveness, the verification process begins with a short series of challenges in which the user is asked to move their head to prove liveness. To enable it, add the following once the video has started:

<!-- HTML for displaying liveness tasks -->
<div id="livenessTask"></div>

// Div element for displaying liveness tasks
const livenessTask = document.getElementById("livenessTask");

// Run active liveness
spoof.startActiveLiveness(livenessTask);

The active liveness session has a timeout period that cancels the session if the challenges are not performed in time. Active liveness is complete once the message Liveness is done is shown in the livenessTask element.
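
If you need to react programmatically when the challenges finish, one option is to watch the livenessTask element for that message. Below is a minimal sketch using the standard MutationObserver DOM API; this is not part of the SDK, just one possible way to detect the completion message:

// Watch the livenessTask element for the completion message
const livenessObserver = new MutationObserver(() => {
  if (livenessTask.textContent.includes("Liveness is done")) {
    livenessObserver.disconnect();
    console.log("Active liveness completed");
    // e.g. continue with a passive liveness check here
  }
});
livenessObserver.observe(livenessTask, { childList: true, subtree: true, characterData: true });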

4. Passive Liveness

The passive liveness check allows users to submit photos or videos for liveness verification through the API. Here's how to integrate passive liveness checks into your application:

Take a liveness photo

Capture a photo using the webcam and send it to the liveness service for verification.

// Capture selfie photo and send it for liveness verification
spoof.takeSelfie().then((selfie) => {
  // We set up the SensityApi above
  var apiInstance = new SensityApi.LivenessDetectionApi();

  var task_id = null;
  var file = selfie["blob"];
  let opts = {
    sessionInfo: selfie["sessionInfo"],
  };

  var callback = function (error, data, response) {
    if (error) {
      console.error(error);
    } else {
      task_id = data.task_id;
      console.log("Job successfully sent");
    }
  };

  // We send taken photo as file parameter and expect to fetch that API task id
  apiInstance.createLivenessDetectionTaskFromFile(file, opts, callback);
});

Take a liveness video

Record a video using the webcam and send it to the liveness service for verification. You can set the video duration by passing a duration in milliseconds to the takeVideo method.

// Start video recording and send it for liveness verification
spoof.takeVideo(duration_in_ms).then((video) => {
  // We set up the SensityApi above
  var apiInstance = new SensityApi.LivenessDetectionApi();

  var task_id = null;
  var file = video["blob"];
  let opts = {
    sessionInfo: video["sessionInfo"],
  };

  var callback = function (error, data, response) {
    if (error) {
      console.error(error);
    } else {
      task_id = data.task_id;
      console.log("Job successfully sent");
    }
  };

  // We send recorded video as file parameter and expect to fetch that API task id
  apiInstance.createLivenessDetectionTaskFromFile(file, opts, callback);
});

To stop the video recording before the allocated duration_in_ms has elapsed, use:

spoof.stopTakeVideo();

5. Retrieve the results

Once the task is done, you can retrieve the results using the task_id provided in the response returned by the API when you create the task.

  let apiInstance = new SensityApi.LivenessDetectionApi();

  let callback = (error, data, response) => {
    if (error) {
      console.error(error);
    } else {
      console.log("results ", JSON.stringify(response));
    }
  };

  apiInstance.getLivenessDetectionTask(task_id, callback);
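
Because the analysis runs asynchronously, the task may still be in progress when you first query it. Below is a minimal polling sketch built on the same call, assuming the data object exposes the status field shown in the response samples; the 2-second retry interval is an arbitrary choice:

// Poll the task until it leaves the in_progress state
function pollLivenessResult(task_id) {
  const apiInstance = new SensityApi.LivenessDetectionApi();
  apiInstance.getLivenessDetectionTask(task_id, (error, data, response) => {
    if (error) {
      console.error(error);
    } else if (data.status === "in_progress") {
      // Not done yet: try again in 2 seconds
      setTimeout(() => pollLivenessResult(task_id), 2000);
    } else {
      console.log("final status:", data.status);
    }
  });
}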

Vue

After installation you can import the package within script tags.

<script>
  import LiveSpoofDetection from '@sensityai/live-spoof-detection';
</script>

You should create a webcam element and another empty div to display liveness tasks. Give these elements ref values so you can use them with the SDK.

It is important to wrap the camera element in a div with the class 'video-container'.

<template>
  <div>
    <div class="video-container">
      <video ref="webcam" autoplay playsinline></video>
    </div>
    <div ref="livenessTask"></div>
  </div>
</template>

Then you can initialize the SDK with the camera element and use its methods as described in the general JavaScript documentation above.

<script>
import LiveSpoofDetection from '@sensityai/live-spoof-detection';

export default {
  mounted() {
    const webcamElement = this.$refs.webcam;
    const livenessTask = this.$refs.livenessTask;
    const spoof = new LiveSpoofDetection(webcamElement);
    ...
  },
}
</script>

React

After installation you can import the package at the beginning of the file.

import LiveSpoofDetection from "@sensityai/live-spoof-detection";

You should create a webcam element and another empty div to display liveness tasks. Give these elements ref values so you can use them with the SDK.

It is important to wrap the camera element in a div with the class 'video-container'.

<div className="App">
  <div className="video-container">
    <video
      ref={webcam}
      className="webcamVideo"
      autoPlay
      playsInline
      width={640}
      height={480}
    ></video>
  </div>
  <div ref={liveness}></div>
</div>

Then you can initialize the SDK with the camera element and use its methods as described in the general JavaScript documentation above.

function App() {
  const webcam = useRef();
  const liveness = useRef();

  useEffect(() => {
    const init = async () => {
      const webcamElement = webcam.current;
      const livenessTask = liveness.current;
      const spoof = new LiveSpoofDetection(webcamElement);
      ...
    }
    init();
  }, []);

  return (
    <div className="App">
      <div className="video-container">
        <video ref={webcam} className="webcamVideo" autoPlay playsInline width={640} height={480}></video>
      </div>
      <div ref={liveness}></div>
    </div>
  );
}

export default App;

Mobile

You can use the sensity_api SDK with mobile application frameworks such as:

  • React Native (Javascript)
  • Flutter (Dart)

React Native

  1. In the root of the folder where you want to install the npm module, create an .npmrc file and add the following:
 //registry.npmjs.org/:_authToken={your_auth_token}
  2. Run this command to install the package with npm:
  npm install @sensityai/sensity_api
  3. Import the package:
import * as SensityApi from "@sensityai/sensity_api";
or:
var SensityApi = require("@sensityai/sensity_api");
  4. Initialize the SDK:
const App = () => {
  useEffect(() => {
    const client = SensityApi.ApiClient.instance;
    client.basePath = 'https://api.sensity.ai/';

    const Bearer = client.authentications.Bearer;
    Bearer.apiKey = 'YOUR_API_KEY';

    const apiInstance = new SensityApi.LivenessDetectionApi();
    apiInstance.getLivenessDetectionTask(TASK_ID);
    ...
  }, []);
};
  5. Detailed documentation about the services is provided inside the npm package under the docs folder.

Flutter/Dart

  1. First, add the Dart SDK provided by the Sensity team to your Flutter project folder.
  2. Add the Dart SDK path to your pubspec.yaml file:
dependencies:
  flutter:
    sdk: flutter
  dart_sdk:
    path: './packages/dart_sdk'
  3. Run flutter pub get in your project folder.
  4. Import the SDK package in your Flutter project:
import 'package:dart_sdk/api.dart';
  5. Then create a class for using the SDK services, for example the AI Generated Image detection service:
import 'dart:typed_data';

import 'package:http/http.dart' as http;
import 'package:http_parser/http_parser.dart';
import 'package:dart_sdk/api.dart';

class SdkService {
  final AIGeneratedImageDetectionApi aigeneratedimageDetectionApi = AIGeneratedImageDetectionApi();

  Future<TasksCreateTaskResponse?> createAIGeneratedImageDetectionTaskFromFile(Uint8List bytes, [String fileName = 'file']) {
    final file = http.MultipartFile.fromBytes(
      'file', // Important, this is the field name
      bytes,
      filename: fileName,
      contentType: MediaType.parse('multipart/form-data'),
    );

    return aigeneratedimageDetectionApi.createAIGeneratedImageDetectionTaskFromFile(file);
  }
}
  6. Then it is possible to call the SDK services inside your functions via the SdkService class:
  final imageBytes = YOUR_IMAGE_IN_BYTES_FORMAT;
  final result = await SdkService().createAIGeneratedImageDetectionTaskFromFile(imageBytes);
  7. Detailed documentation about the services is provided inside the dart package under the docs folder.

Server-side SDKs

Webhooks

Once a verification task is created by a user, Sensity starts processing the data asynchronously. Because the processing happens at a later time, and not in the response to your request, we need a mechanism to notify your system about status changes of the verification task.

This is where webhooks come into play: they can notify your system once a verification task has been processed.

1. Webhook configuration

A webhook is nothing more than a simple user-provided POST endpoint on a server. The server should support:

  • POST requests with the application/json content type.

  • If the server runs behind a firewall, you should whitelist our IP address to allow incoming traffic from our servers. Get in contact with support@sensity.ai for more information.

You can use a service like webhook.site to test receiving webhook events, or a tool such as ngrok to forward the requests to your local development server.

2. Event types

The Sensity webhook service supports the following events:

  • data_extraction_analysis_complete

  • face_manipulation_analysis_complete

  • face_matching_analysis_complete

  • forensic_analysis_complete

  • ai_generated_image_detection_analysis_complete

  • id_detection_analysis_complete

  • id_document_authentication_analysis_complete

  • liveness_detection_analysis_complete

  • voice_analysis_complete

When a verification task is done, your server will receive an event with the resulting data of the task. An explanation of each event response is given in the GET endpoint responses of the API reference and in the Models reference.

3. Register a webhook URL

To register a webhook, make a POST request to /webhooks/{event_name} using the name of the event and the URL you want to register.

Once the analysis has finished, a POST request will be sent by the API to the registered URL with the analysis result.

Note: if no webhook URL is registered before the task analysis is requested, the Sensity API will retry sending the result message up to 10 times. After the 10th consecutive failure, the result can only be retrieved through polling.
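
For example, you can register a URL for the liveness_detection_analysis_complete event with curl (https://example.com/webhook is a placeholder for your endpoint; the full endpoint description is in the Webhooks section of the API reference):

curl --request POST \
  --url https://api.sensity.ai/webhooks/liveness_detection_analysis_complete \
  --header 'Authorization: Bearer YOUR_API_TOKEN' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data url=https://example.com/webhook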

NodeJS

This section shows you how to create an Express Node.js server and listen for incoming webhook events. It assumes you have already registered a webhook as described in the previous section.

1. Install the required dependencies

npm install express cors body-parser

2. Create the server and the webhook endpoint

Add the following code to index.js. This creates a server listening for incoming requests at http://localhost:3000/webhook.

const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");

const app = express();
const port = 3000;

app.use(bodyParser.json());
app.use(cors());

app.post("/webhook", (req, res) => {
  // do logic here
  console.log(req.body);

  res.status(200).send("Success");
});

app.listen(port, () => {
  console.log(`Listening at http://localhost:${port}`);
});

3. Run the server & create a task

Run the express server with:

node index.js

And then create a task. You can do this with curl, as explained in the API Reference examples, or using one of the Client-side SDKs.

As soon as the verification task is completed you should see the results in the terminal output.

Listening at http://localhost:3000
{
  id: '808264ae-bd79-457e-89cc-b643e4f55517',
  status: 'completed',
  result: { live: 'Live' }
}
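
In a real application you would branch on the payload fields instead of only logging them. Below is a sketch based on the liveness payload shown above; the handler logic is illustrative, not prescribed by the API:

app.post("/webhook", (req, res) => {
  const { id, status, result } = req.body;

  if (status === "completed") {
    // e.g. persist the verification outcome for this task
    console.log(`task ${id} completed, live: ${result.live}`);
  } else if (status === "failed") {
    console.log(`task ${id} failed; consider re-creating it`);
  }

  res.status(200).send("Success");
});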

API Reference

This section shows how to use the Sensity API endpoints to create and retrieve the results of a verification task using the provided endpoints. The guide assumes you have followed the steps in Initial setup and already have a developer account with an API token ready to use.

Create an analysis task

To create a specific analysis task, send a POST request to /tasks/{task_name} with the task parameters.

You will receive a JSON response with the following fields:

{
  "report_id": "123456789",
  "success": true
}
  • report_id identifies the requested analysis
  • success indicates whether the task was created successfully
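
For example, you can create a Liveness detection task by uploading a local image with curl (selfie.jpg is a placeholder file name; see the Liveness detection endpoint below for all parameters):

curl --request POST \
  --url https://api.sensity.ai/tasks/liveness_detection \
  --header 'Authorization: Bearer YOUR_API_TOKEN' \
  --form file=@selfie.jpg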

Retrieve an analysis result

There are two ways to retrieve the results from the requested analysis:

  1. Polling
  2. Webhooks

Due to the asynchronous nature of the API, we strongly recommend using Webhooks.

Polling

To get the result of a task, make a GET request to /tasks/{task_name}/{report_id}.

You will receive a JSON object with a status field indicating the status of the task.

{
  "status": "message"
  // ... other task related fields
}

The possible values for status are:

  • completed: the other fields in the response will be populated with the analysis result.

  • in_progress: the analysis is still running. Try to call the endpoint later to get the result.

  • failed: something went wrong. Try re-creating the task and contact support if the issue persists.
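
Below is a minimal polling sketch in JavaScript using fetch, assuming the status field shown above (taskName, reportId and the token are placeholders; the 2-second delay between attempts is an arbitrary choice):

// Query a task until it leaves the in_progress state
async function pollTask(taskName, reportId, token) {
  while (true) {
    const res = await fetch(`https://api.sensity.ai/tasks/${taskName}/${reportId}`, {
      headers: { Authorization: `Bearer ${token}` },
    });
    const data = await res.json();
    if (data.status !== "in_progress") return data; // completed or failed
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}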

Webhooks

In this approach, the user associates a custom user-provided endpoint with an API analysis event. Once the event occurs, the API takes care of sending the result of the event to the endpoint.

How to set up a webhook server is discussed in the Webhooks server-side SDK section.

URL input for the services

We support URL input for some services via the url parameter. Check each endpoint of the API to see whether the url parameter is available. At the moment we support the following:

  • A URL pointing to a media file
  • YouTube URLs
  • Instagram (only videos)
  • Facebook (only videos)
  • TikTok
  • Twitter or X (images and videos)
  • Dailymotion

Authorization

Management for the user's API tokens

Delete all authorization tokens

Delete all the previously created authorization tokens for the user.

Authorizations:
BasicAuth

Responses

Request samples

curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "deleted_tokens": [],
  "success": true
}

Get all authorization tokens

Get all authorization JSON Web Tokens (JWT) created for the user.

Authorizations:
BasicAuth

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "claims": {},
  "success": true
}

Generate an authorization token

Generate an authorization JSON Web Token (JWT) to access the API.

Authorizations:
BasicAuth

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "claims": {},
  "id": "id",
  "success": true,
  "token": "token"
}

Delete a specific authorization token by ID

Delete a specific authorization JSON Web Token (JWT).

Authorizations:
BasicAuth
path Parameters
id
required
string

Token ID

Responses

Request samples

curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "deleted_tokens": [],
  "success": true
}

Get a specific authorization token by ID

Get a specific authorization JSON Web Token (JWT) to access the API.

Authorizations:
BasicAuth
path Parameters
id
required
string

Token ID

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "claims": {},
  "success": true
}

Data extraction

Enriches data from the text in the documents provided for analysis. In particular, the service returns the following results:

  • Classify the document type into a set of known documents

  • Find a set of keywords provided to the endpoint

  • Find possible overlapping text boxes, to warn about possible fraud

  • Analyze fonts, with a font list and their frequencies

  • Extract meaningful data from the file according to the document type

Note: if you need to extract data from and verify the validity of ID documents, such as passports and national ID cards, please refer to the ID Document Authentication service. Data Extraction is meant for generic financial documents, such as invoices, bank statements, utility bills, etc.

Supported input files: image / pdf

Create a Data extraction task

Create a task to classify a document and find keywords.

Authorizations:
Bearer
Request Body schema: multipart/form-data
date_order
string
Enum: "DMY" "MDY" "YMD" "YDM"

Date order as it appears in the document; default: DMY

document_type
string

Ground truth document type

file
required
string <binary>

image / pdf

is_extract_images
boolean
Default: false

Request to return the images extracted from the PDF

keywords
Array of strings

Keywords to search in the document

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/data_extraction \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form 'keywords=["string"]' \
  --form document_type=string \
  --form is_extract_images=false \
  --form date_order=DMY

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Data extraction task

Polling request for the result of a created Data extraction task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/data_extraction/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": {},
  "status": "completed"
}

Face manipulation

This analysis detects potential AI manipulation of faces present in images and videos, as in the case of face swaps, face reenactment and lipsync.

Supported input files: image / video

Create a Face manipulation detection task

Create a task to analyze a media file for deepfake manipulations in the region of the face.

Authorizations:
Bearer
Request Body schema: multipart/form-data
explain
boolean
Default: false

enable explanation of prediction; available only if input is predicted as fake

file
string <binary>

image / video file

url
string

(Experimental) URL pointing to a media file or to a webpage containing the media. Check the list of supported pages above in the documentation.

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/face_manipulation \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form url=string \
  --form file=string \
  --form explain=false

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Face manipulation detection task

Polling request for the result of a created face manipulation task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/face_manipulation/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "no_faces": false,
  "preview": "preview",
  "result": {},
  "status": "completed"
}

Face matching

Perform an analysis to authenticate a user by matching the person's face with the identity document uploaded.

Supported input files: image / pdf / video

Create a Face matching task

Create a task to check if the faces from two images belong to the same person.

Authorizations:
Bearer
Request Body schema: multipart/form-data
additional_info
string

Additional information for report

file
required
string <binary>

image / video

id_file
string <binary>

image / pdf

run_identification
boolean
Default: false

enable 1:N face recognition

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/face_matching \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form id_file=string \
  --form run_identification=false \
  --form additional_info=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Face matching task

Polling request for the result of a created Face matching task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/face_matching/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "additional_info": "additional_info",
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "no_faces": false,
  "result": {},
  "status": "completed"
}

Forensic analysis

This service uses forensic analysis techniques to determine whether the file was digitally created or manipulated. In particular, the test will find traces of suspicious software editing, use of screenshot software, file date mismatches, multiple versions of the file, and more.

Supported input files: image / video / pdf / audio

Create a Forensic analysis task

Create a Forensic analysis task to check the given document.

Authorizations:
Bearer
Request Body schema: multipart/form-data
additional_info
string

Additional information for report

file
string <binary>

image / video / pdf / audio

is_extract_images
boolean
Default: false

Request to return the images extracted from the PDF

url
string

(Experimental) URL pointing to a media file or to a webpage containing the media. Check the list of supported pages above in the documentation.

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/forensic_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form is_extract_images=false \
  --form additional_info=string \
  --form url=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Forensic analysis task

Polling request for the result of a created Forensic analysis task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/forensic_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "additional_info": "additional_info",
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "result": {},
  "status": "completed"
}

AI Generated Image detection

This analysis detects AI-generated photo-realistic images, created for example by Generative Adversarial Networks (GANs) and Diffusion Models (DMs) like Stable Diffusion, MidJourney, Dalle-2 and others. An optional explanation of the prediction is provided for some classes of model generators when the input is predicted as fake. Available explanation types:

  • GANDetectorExplanations: Returns an overlay image. Images generated by this family of GANs can be recognized by the fixed position of the eyes and mouth.
  • SignatureDBExplanations: If the image is found in Sensity's AI-Signatures Database, returns its original source, generation text prompt and generation timestamp.

Supported input files: image

Create an AI Generated Image detection task

This task analyses an image to check whether it was generated with AI.

Authorizations:
Bearer
Request Body schema: multipart/form-data
explain
boolean
Default: false

enable explanation of prediction; available only for some classes of model generators and if input is predicted as fake

file
string <binary>

image file

url
string

(Experimental) URL pointing to a media file or to a webpage containing the media. Check the list of supported pages above in the documentation.

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form explain=false \
  --form url=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of AI Generated Image detection task

Polling request for the result of a created AI Generated Image detection task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "result": {},
  "status": "completed"
}

ID document authentication

This task analyses an ID image and verifies its authenticity by performing different security checks. For this task, the visual parts of the image and zones like the Machine Readable Zone (MRZ) are parsed.

Supported input files: image

Create an ID document authentication task

Create a task to analyse the given ID document.

Authorizations:
Bearer
Request Body schema: multipart/form-data
additional_info
string

Additional information for report

back
string <binary>

ID back side

extra_file
string <binary>

DEPRECATED use back parameter. ID back side

file
string <binary>

DEPRECATED use front parameter. ID front side

front
string <binary>

ID front side

skip_forgery
boolean
Default: false

skip forgery detection

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/id_document_authentication \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form front=string \
  --form back=string \
  --form file=string \
  --form extra_file=string \
  --form skip_forgery=false \
  --form additional_info=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of ID document authentication task

Polling the result of an ID document authentication task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/id_document_authentication/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "additional_info": "additional_info",
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": {},
  "status": "completed"
}

Liveness detection

This analysis performs a series of passive liveness checks on the face found in the input media. Its goal is to verify whether the face belongs to a real person in front of the camera, as opposed to a face from a photo print, a digital device, or a 2D/3D mask. For detection to work properly: look straight ahead, remove sunglasses or hats, make sure the lighting is good, and avoid flash photography.

Supported input files: image / video

Create a Liveness detection task

Checks if an image is live or a spoof attempt.

Authorizations:
Bearer
Request Body schema: multipart/form-data
additional_info
string

Additional information for report

file
required
string <binary>

image / video

session_info
string

This is reserved for use by Sensity's Client-Side SDK. Do not fill it, or your request will result in a failure

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/liveness_detection \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form session_info=string \
  --form additional_info=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Liveness detection task

Polling request for the result of a Liveness detection task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/liveness_detection/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "additional_info": "additional_info",
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "no_faces": false,
  "result": {},
  "session_info": "session_info",
  "status": "completed"
}

Voice analysis

This analysis detects AI-generated voices and voice cloning in audio.

Supported input files: audio

Create a Voice analysis task

Authorizations:
Bearer
Request Body schema:
explain
boolean
Default: false

enable explanation of prediction

file
string <binary>

audio file

url
string

(Experimental) URL pointing to a media file or to a webpage containing the media. Check the list of supported pages above in the documentation.

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/voice_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data file=string \
  --data explain=false \
  --data url=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Voice analysis task

Polling request for the result of a created voice analysis task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/voice_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": "{}",
  "status": "completed"
}

Webhooks

Webhook management

Delete a webhook URL

Delete a webhook URL that is assigned to an event.

Authorizations:
Bearer
path Parameters
event
required
string

Event Name

Responses

Request samples

curl --request DELETE \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "success": true
}

Get a list of webhook URLs

Get a list of webhook URLs assigned to a particular event.

Authorizations:
Bearer
path Parameters
event
required
string

Event Name

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "success": true,
  "url": "url"
}

Assign an event to a webhook URL

Assign an event to a URL to which a request will be sent when that event occurs.

Authorizations:
Bearer
path Parameters
event
required
string
Enum: "data_extraction_analysis_complete" "face_manipulation_analysis_complete" "face_matching_analysis_complete" "forensic_analysis_complete" "ai_generated_image_detection_analysis_complete" "id_detection_analysis_complete" "id_document_authentication_analysis_complete" "liveness_detection_analysis_complete" "voice_analysis_complete"

Event Name

Request Body schema: application/x-www-form-urlencoded
url
required
string

Webhook URL

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data url=string

Response samples

Content type
application/json
{
  "success": true
}

Models

Data extraction

document_type
string
Enum: "Bank Statement" "Company Formation Document" "Invoice" "Payslip" "VAT"

Document type identified by the service. Possible values are listed above.

object (DataExtractionFontAnalysis)

Object with information extracted from the fonts found in the document. The object contains the list of fonts in the document and their frequencies. This information is only available if the document contains embedded text.

Array of objects (DataExtractionKeywords)

List with information about the keywords provided by the user as they appear in the document.

Array of objects (DataExtractionOCR)

List with information, such as the location, about the text detected in the document, whether embedded or detected by the Sensity OCR model.

overlapping_text
Array of integers[ items ]

List of indices of the text boxes which overlap in the document. This includes:

  • Overlap between embedded text boxes
  • Overlap between embedded and OCR-detected boxes
  • Overlap between OCR-detected boxes is not included

The format is a list of two-element lists indicating which boxes overlap:

Ex: [[0, 4], [3, 4]] indicates that box 4 overlaps with boxes 0 and 3.

{
  "document_type": "Unknown",
  "font_analysis": {},
  "keywords": [],
  "ocr": [],
  "overlapping_text": []
}

Face manipulation

class_name
string
Enum: "real" "fake"

label attributed to image / video (real or fake)

class_probability
number [ 0 .. 1 ]

confidence score of the label attribution

Array of objects (entities.FaceManipulationResultExplanation)

face manipulation explanation

manipulation_type
string
Enum: "faceswap" "face_reenactment" "face_animation" "lipsync"

manipulation type label attributed to the image / video

{
  "class_name": "real",
  "class_probability": 1,
  "explanation": [],
  "manipulation_type": "lipsync"
}

Face matching

identification
object
match
boolean
score
number
{
  "identification": "{}",
  "match": true,
  "score": 0.8008281904610115
}

Forensic analysis

object (DocumentDetails)
extracted_images
Array of strings <byte>

Contains the extracted images found in a PDF.

Note: only available for PDF files and if is_extract_images is set to true in the task creation request.

object (RedFlags)

Contains different warning messages that inform you if the file was modified or corrupted. Some of the fields are only available for certain file types and won't appear if the warning is not triggered.

object (TriggerValues)
{
  "document_details": {},
  "extracted_images": [],
  "red_flags": {},
  "trigger_values": {}
}

AI Generated Image detection

class_name
string
Enum: "real" "fake"

image label (real or fake)

class_probability
number [ 0 .. 1 ]

confidence score of the label attributed to the image

object (entities.AIGeneratedImageDetectionExplanation)
model_attribution
string
Enum: "stylegan" "stylegan2" "stylegan3" "generated_photos" "midjourney" "dalle-2" "stable-diffusion" "glide" "firefly" "blue-willow" "unstable-diffusion" "stable-dream"

prediction of the model used to generate the image, if classified as fake

model_attribution_probability
number [ 0 .. 1 ]

confidence score of the model attribution

{
  "class_name": "fake",
  "class_probability": 0.999,
  "explanation": {},
  "model_attribution": "stylegan",
  "model_attribution_probability": 0.9
}

ID document authentication

additional_info
string
error
string

Error message

event_type
string
id
string

Identifier of the task

object (Result)
status
string
Enum: "in_progress" "completed" "failed"

Status of the current task

  • in_progress - Analysis in progress
  • completed - Task finished successfully
  • failed - Task finished with errors
{
  "additional_info": "additional_info",
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": {},
  "status": "completed"
}

Liveness detection

image_quality_checks
object
live
string
Enum: "Live" "Spoof"
{
  "image_quality_checks": "{}",
  "live": "Live"
}

Voice analysis

error
string

Error message

event_type
string
id
string

Identifier of the task

result
object
status
string
Enum: "in_progress" "completed" "failed"

Status of the current task

  • in_progress - Analysis in progress
  • completed - Task finished successfully
  • failed - Task finished with errors
{
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": "{}",
  "status": "completed"
}