Sensity API (v2.38.1)

Introduction

What is Sensity?

Sensity is a customizable suite of tools for deepfake detection, offering cloud-based or on-premise analysis of videos, images, and audio through both UI and API access. Users can analyze files or URLs for multilayer assessment within seconds.

Getting started

The steps in this guide briefly describe how to integrate the Sensity API with your app and backend. To create verification tasks, receive notifications, and retrieve analysis results, follow these steps:

1. Create a developer account

To get started with Sensity, please send your request for creating a developer account via the contact form on www.sensity.ai.

Once you have a Sensity developer account, request an API token from the /auth/tokens API endpoint with your credentials using HTTP BasicAuth.

You will receive a JSON object containing an authorization Bearer token that you can use in the SDKs to make subsequent calls to the API.
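
For illustration only, a minimal token request using Node.js 18+ and its built-in fetch might look like the sketch below; the credential placeholders are assumptions, not values provided by the API:

// Sketch: request a Bearer token from /auth/tokens using HTTP Basic Auth (Node.js 18+).
const username = "YOUR_USERNAME"; // placeholder credentials
const password = "YOUR_PASSWORD";

const basicAuth = Buffer.from(`${username}:${password}`).toString("base64");

fetch("https://api.sensity.ai/auth/tokens", {
  method: "POST",
  headers: { Authorization: `Basic ${basicAuth}` },
})
  .then((res) => res.json())
  .then(({ token, success }) => {
    // Store the token and send it as "Authorization: Bearer <token>" in later calls.
    console.log(success, token);
  });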

2. (Optional) Setup the Webhooks

We use webhooks to notify your backend when an event happens. Webhooks are useful for triggering actions on asynchronous events, such as when a verification task completes successfully or fails. This is described in detail in the Webhooks section.

3. Retrieve data from the API

Use the Sensity API via our SDKs to retrieve specific data from already completed verification tasks.

You can find more details on the endpoints and the response models in the API section of the documentation.

Solutions

Sensity's API provides the following solutions, which cover standard use cases. Each solution is composed of different services, each with its own endpoint. API users can address customized use cases by combining multiple services in their own flow.

Deepfakes Detection

Performs an analysis designed to recognize the latest AI-based image, video and audio manipulation and generation.

This solution is composed of the following services:

  • Face manipulation
  • AI Generated Image detection
  • Voice analysis
  • Forensic analysis

These services accept media files, such as images, videos, and audio, as well as URLs as input.

Supported Input Formats

Input Type   Supported Formats
Image        jpg, jpeg, png, tiff, gif, webp, jfif
Video        mkv, flv, mov, mp4, avi, webm, 3gpp
Audio        wav, mp3, m4a, ogg, aac, flac

Input File Limitations

Limit             Max Value
File size         32 MB
Video duration    30 min
Video resolution  2560p
Audio duration    20 min

Input URL Limitations

Limit            Max Value
Video duration   30 min
Audio duration   20 min
URL source       YouTube, Instagram, Facebook, TikTok, Twitter, Dailymotion

The above services also accept URLs pointing to supported media files. For example, https://www.example.com/image.jpg, https://www.example.com/video.mp4 or https://www.example.com/audio.wav.

API Reference

This section shows how to use the Sensity API endpoints to create a verification task and retrieve its results. The guide assumes you have followed the steps in Getting started and already have a developer account with an API token ready to use.

Create an analysis task

To create a specific analysis task, send a POST request to /tasks/{task_name} with the task parameters.

You will receive a JSON response with the following fields:

{
  "report_id": "123456789",
  "success": true
}
  • report_id identifies the requested analysis; use it to retrieve the result
  • success indicates whether the task was created successfully
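
As a sketch (not an official SDK call), creating a face_manipulation task from Node.js 18+ could look like the following; the file path and the SENSITY_TOKEN environment variable are placeholders:

// Sketch: create a face_manipulation task by uploading a local image (Node.js 18+).
const fs = require("fs");

const form = new FormData(); // FormData and Blob are global in Node.js 18+
form.append("file", new Blob([fs.readFileSync("input.jpg")]), "input.jpg");

fetch("https://api.sensity.ai/tasks/face_manipulation", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.SENSITY_TOKEN}` }, // placeholder token variable
  body: form, // fetch sets the multipart/form-data boundary automatically
})
  .then((res) => res.json())
  .then(({ report_id, success }) => {
    // Keep report_id to poll for the result or to match the webhook event later.
    console.log(success, report_id);
  });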

Retrieve an analysis result

There are two ways to retrieve the results from the requested analysis:

  1. Polling
  2. Webhooks

Due to the asynchronous nature of the API, we strongly recommend using Webhooks.

Polling

To get the result of a task, make a GET request to /tasks/{task_name}/{report_id}.

You will receive a JSON object with a status field indicating the status of the task.

{
  "status": "message"
  // ... other task related fields
}

The possible values for status are:

  • completed: the other fields in the response will be populated with the analysis result.

  • in_progress: the analysis is still running. Call the endpoint again later to get the result.

  • failed: something went wrong. Re-create the task and contact support if the problem persists.
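
These statuses translate into a simple polling loop. A minimal sketch in Node.js 18+, again assuming the Bearer token is available in a SENSITY_TOKEN environment variable:

// Sketch: poll GET /tasks/{task_name}/{report_id} until the task completes or fails (Node.js 18+).
async function pollResult(taskName, reportId, intervalMs = 5000) {
  for (;;) {
    const res = await fetch(`https://api.sensity.ai/tasks/${taskName}/${reportId}`, {
      headers: { Authorization: `Bearer ${process.env.SENSITY_TOKEN}` }, // placeholder token variable
    });
    const body = await res.json();
    if (body.status !== "in_progress") return body; // "completed" or "failed"
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait before retrying
  }
}

// Example usage with a placeholder report_id:
pollResult("face_manipulation", "123456789").then(console.log);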

Webhooks

In this approach, you associate a custom user-provided endpoint with an API analysis event. Once the event occurs, the API sends the result of the event to that endpoint.

How to set up a webhook server is discussed in the Webhooks part of the Server-side SDKs section.

Server-side SDKs

Webhooks

Once a verification task is created by a user, Sensity starts processing the data asynchronously. Because the processing happens at a later time and not in the response to your request, a mechanism is needed to notify your system about status changes of the verification task.

This is where webhooks come into play: they notify your system once a verification task has been processed.

1. Webhook configuration

A webhook is simply a user-provided POST endpoint on your server. The server should support:

  • POST requests with the application/json content type.

  • If the server runs behind a firewall, you should whitelist our IP address to allow incoming traffic from our servers. Contact Sensity support for more information.

You can use a service like webhook.site to test receiving webhook events, or a tool such as ngrok to forward the requests to your local development server.

2. Event types

The Sensity webhook service supports the following events:

  • face_manipulation_analysis_complete
  • ai_generated_image_detection_analysis_complete
  • voice_analysis_complete
  • forensic_analysis_complete

When a verification task is done, your server will receive an event with the resulting data of the task. Each event payload is described in the corresponding GET endpoint response of the API reference and in the Models reference.

3. Register a webhook URL

To register a webhook, make a POST request to /webhooks/{event_name} using the name of the event and the URL you want to register.

Once the analysis has finished, a POST request will be sent by the API to the registered URL with the analysis result.

Note: if no webhook URL is registered before the task analysis is requested, the Sensity API will retry sending the result message up to 10 times. After the 10th consecutive failure, the result can only be retrieved through polling.
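
For illustration, registering a webhook URL for the face_manipulation_analysis_complete event from Node.js 18+ could look like this sketch; the webhook URL and the SENSITY_TOKEN environment variable are placeholders:

// Sketch: assign a webhook URL to an event via POST /webhooks/{event_name} (Node.js 18+).
const params = new URLSearchParams({ url: "https://example.com/webhook" }); // placeholder URL

fetch("https://api.sensity.ai/webhooks/face_manipulation_analysis_complete", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.SENSITY_TOKEN}`, // placeholder token variable
    "content-type": "application/x-www-form-urlencoded",
  },
  body: params,
})
  .then((res) => res.json())
  .then(({ id, success }) => console.log(success, id));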

NodeJS

This section shows you how to create an Express Node.js server and listen for incoming webhook events. It assumes you have already registered a webhook as described in the previous section.

1. Install the required dependencies

npm install express cors body-parser

2. Create the server and the webhook endpoint

Add the following code to index.js. This creates a server listening for incoming requests at http://localhost:3000/webhook.

const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");

const app = express();
const port = 3000;

app.use(bodyParser.json());
app.use(cors());

app.post("/webhook", (req, res) => {
  // do logic here
  console.log(req.body);

  res.status(200).send("Success");
});

app.listen(port, () => {
  console.log(`Listening at http://localhost:${port}`);
});

3. Run the server & create a task

Run the express server with:

node index.js

And then create a task. You can do this with cURL, as explained in the API Reference examples, or using one of the Client-side SDKs.

As soon as the verification task is completed you should see the results in the terminal output.

Listening at http://localhost:3000
{
  id: '808264ae-bd79-457e-89cc-b643e4f55517',
  status: 'completed',
  result: { live: 'Live' }
}

Authorization

Management for the user's API tokens

Delete all authorization tokens

Delete all the previously created authorization tokens for the user.

Authorizations:
BasicAuth

Responses

Request samples

curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "deleted_tokens": [],
  "success": true
}

Get all authorization tokens

Get all authorization JSON Web Tokens (JWT) created for the user.

Authorizations:
BasicAuth

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "claims": {},
  "success": true
}

Generate an authorization token

Generate an authorization JSON Web Token (JWT) to access the API.

Authorizations:
BasicAuth

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "claims": {},
  "id": "id",
  "success": true,
  "token": "token"
}

Delete a specific authorization token by ID

Delete a specific authorization JSON Web Token (JWT).

Authorizations:
BasicAuth
path Parameters
id
required
string

Token ID

Responses

Request samples

curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "deleted_tokens": [],
  "success": true
}

Get a specific authorization token by ID

Get a specific authorization JSON Web Token (JWT) to access the API.

Authorizations:
BasicAuth
path Parameters
id
required
string

Token ID

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'

Response samples

Content type
application/json
{
  "claims": {},
  "success": true
}

Face manipulation

This analysis detects potential AI manipulation of faces present in images and videos, such as face swaps, face reenactment, and lipsync.

Supported input files: image / video

Create a Face manipulation detection task

Create a task to analyze a media file for deepfake manipulations in the region of the face.

Authorizations:
Bearer
Request Body schema: multipart/form-data
explain
boolean
Default: false

enable explanation of prediction; available only if input is predicted as fake

file
string <binary>

image / video file

url
string

(Experimental) media source url

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/face_manipulation \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form explain=false \
  --form url=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Face manipulation detection task

Polling request for the result of a created face manipulation task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/face_manipulation/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "preview": "preview",
  "result": {},
  "status": "completed"
}

AI Generated Image detection

This analysis detects AI-generated photo-realistic images and video. Images may be created, for example, by Generative Adversarial Networks (GAN) and Diffusion Models (DM) such as Stable Diffusion, MidJourney, Dalle-2, and others. Videos may be made with tools such as Runway, Sora, Luma, Pika, Kling, and others. An optional explanation of the prediction is provided for some classes of generator models, and only if the input is predicted as fake.

Supported input files: image / video

Create an AI Generated Image detection task

The task analyzes an image to check whether it was generated with AI.

Authorizations:
Bearer
Request Body schema: multipart/form-data
explain
boolean
Default: false

enable explanation of prediction; available only for some classes of model generators and if input is predicted as fake

file
string <binary>

image / video file

url
string

(Experimental) media source url

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form explain=false \
  --form url=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of AI Generated Image detection task

Polling request for the result of a created AI Generated Image detection task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "result": {},
  "status": "completed"
}

Voice analysis

This analysis detects AI-generated voices and voice cloning in media. It also provides transcription and translation to English.

Supported input files: audio / video

Create a Voice analysis task

Authorizations:
Bearer
Request Body schema:
explain
boolean
Default: false

enable explanation of prediction

file
string <binary>

audio file

transcribe
boolean
Default: true

enable speech-to-text

url
string

(Experimental) media source url

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/voice_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data file=string \
  --data explain=false \
  --data transcribe=true \
  --data url=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Voice analysis task

Polling request for the result of a created voice analysis task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/voice_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": {},
  "status": "completed"
}

Forensic analysis

This service uses forensic analysis techniques to determine whether a file was digitally created or manipulated. In particular, the test will find traces of suspicious editing software, use of screenshot software, mismatched file dates, multiple versions of the file, and more.

Supported input files: image / video / pdf / audio

Create a Forensic analysis task

Create a Forensic analysis task to check the given media file or document.

Authorizations:
Bearer
Request Body schema: multipart/form-data
additional_info
string

Additional information for report

file
string <binary>

image / video / pdf / audio

is_extract_images
boolean
Default: false

request to send extracted images from pdf

url
string

(Experimental) media source url

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/tasks/forensic_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form is_extract_images=false \
  --form additional_info=string \
  --form url=string

Response samples

Content type
application/json
{
  "report_id": "123456789",
  "success": true
}

Get result of Forensic analysis task

Polling request for the result of a created Forensic analysis task.

Authorizations:
Bearer
path Parameters
id
required
string

report id

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/tasks/forensic_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "additional_info": "additional_info",
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "result": {},
  "status": "completed"
}

Webhooks

Webhook management

Delete all webhook URLs

Delete all webhook URLs assigned to an event.

Authorizations:
Bearer
path Parameters
event
required
string
Enum: "ai_generated_image_detection_analysis_complete" "data_extraction_analysis_complete" "face_manipulation_analysis_complete" "face_matching_analysis_complete" "forensic_analysis_complete" "id_document_authentication_analysis_complete" "liveness_detection_analysis_complete" "voice_analysis_complete"

Event Name

Responses

Request samples

curl --request DELETE \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "success": true
}

Get a list of webhook URLs

Get a list of webhook URLs assigned to a particular event.

Authorizations:
Bearer
path Parameters
event
required
string
Enum: "ai_generated_image_detection_analysis_complete" "data_extraction_analysis_complete" "face_manipulation_analysis_complete" "face_matching_analysis_complete" "forensic_analysis_complete" "id_document_authentication_analysis_complete" "liveness_detection_analysis_complete" "voice_analysis_complete"

Event Name

Responses

Request samples

curl --request GET \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "success": true,
  "urls": []
}

Assign an event to a webhook URL

Assign an event to a URL to which a request will be sent when that event occurs.

Authorizations:
Bearer
path Parameters
event
required
string
Enum: "ai_generated_image_detection_analysis_complete" "data_extraction_analysis_complete" "face_manipulation_analysis_complete" "face_matching_analysis_complete" "forensic_analysis_complete" "id_document_authentication_analysis_complete" "liveness_detection_analysis_complete" "voice_analysis_complete"

Event Name

Request Body schema: application/x-www-form-urlencoded
url
required
string

Webhook URL

Responses

Request samples

curl --request POST \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data url=string

Response samples

Content type
application/json
{
  "id": "id",
  "success": true
}

Delete a webhook URL

Delete a webhook URL that is assigned to an event.

Authorizations:
Bearer
path Parameters
id
required
string

Webhook ID

event
required
string
Enum: "ai_generated_image_detection_analysis_complete" "data_extraction_analysis_complete" "face_manipulation_analysis_complete" "face_matching_analysis_complete" "forensic_analysis_complete" "id_document_authentication_analysis_complete" "liveness_detection_analysis_complete" "voice_analysis_complete"

Event Name

Responses

Request samples

curl --request DELETE \
  --url https://api.sensity.ai/webhooks/{event}/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'

Response samples

Content type
application/json
{
  "success": true
}

Models

Face manipulation

class_name
string
Enum: "real" "fake" "no_faces"

label attributed to image / video

class_probability
number [ 0 .. 1 ]

confidence score of the label attribution

explanation
Array of objects (entities.FaceManipulationResultExplanation)

face manipulation explanation

manipulation_type
string
Enum: "faceswap" "face_reenactment" "lipsync"

manipulation type label attributed to image / video

{
  "class_name": "real",
  "class_probability": 1,
  "explanation": [],
  "manipulation_type": "lipsync"
}
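
As an illustrative sketch only (not part of the API or its SDKs), a consumer could summarize this model in Node.js like so:

// Sketch: summarize a Face manipulation result object (field names taken from the model above).
function summarizeFaceManipulation(result) {
  const { class_name, class_probability, manipulation_type } = result;
  if (class_name === "no_faces") return "No faces detected in the input";
  const confidence = (class_probability * 100).toFixed(1);
  return class_name === "fake"
    ? `Likely manipulated (${manipulation_type}) with ${confidence}% confidence`
    : `Likely authentic with ${confidence}% confidence`;
}

console.log(summarizeFaceManipulation({ class_name: "real", class_probability: 1 }));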

AI Generated Image detection

class_name
string
Enum: "real" "fake"

image label (real or fake)

class_probability
number [ 0 .. 1 ]

confidence score of the label attributed to the image

explanation
object (entities.AIGeneratedImageDetectionExplanation)
model_attribution
string
Enum: "stylegan" "stylegan2" "stylegan3" "generated_photos" "midjourney" "dalle-2" "stable-diffusion" "glide" "firefly" "blue-willow" "unstable-diffusion" "stable-dream"

prediction of the model used to generate the image, if classified as fake

model_attribution_probability
number [ 0 .. 1 ]

confidence score of the model attribution

{
  "class_name": "fake",
  "class_probability": 0.999,
  "explanation": {},
  "model_attribution": "stylegan",
  "model_attribution_probability": 0.9
}

Voice analysis

error
string

Error message

event_type
string
id
string

Identifier of the task

result
object (entities.VoiceAnalysisResult)
status
string
Enum: "in_progress" "completed" "failed"

Status of the current task

  • in_progress - Analysis in progress
  • completed - Task finished successfully
  • failed - Task finished with errors
{
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": {},
  "status": "completed"
}

Forensic analysis

document_details
object (DocumentDetails)
extracted_images
Array of strings <byte>

Contains the extracted images found in a PDF.

Note: only available for PDF files and if is_extract_images is set to true in the previous request.

red_flags
object (RedFlags)

Contains warning messages that inform you whether the file was modified or corrupted. Some of the fields are only available for certain file types and will not appear if the warning is not triggered.

trigger_values
object (TriggerValues)
{
  "document_details": {},
  "extracted_images": [],
  "red_flags": {},
  "trigger_values": {}
}