The steps in this guide briefly describe how to integrate the Sensity API with your app and backend. To create verification tasks, receive notifications, and retrieve the results of an analysis, follow the steps below:
To get started with Sensity, please send your request for creating a developer account via the contact form on www.sensity.ai.
Once you have a Sensity developer account, request an API token from the /auth/tokens endpoint using your credentials with HTTP Basic Auth.
You will receive a JSON object containing an authorization Bearer token that you can use in the SDKs to make subsequent calls to the API.
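The token exchange can be sketched with Node's built-in fetch (Node 18+); the credentials here are placeholders, and the exact response shape beyond the `token` field is an assumption:

```javascript
// Sketch: exchange Basic Auth credentials for an API token.
// Assumes Node 18+ (global fetch); credentials are placeholders.
function buildBasicAuth(username, password) {
  // HTTP Basic Auth: base64-encode "username:password"
  return "Basic " + Buffer.from(`${username}:${password}`).toString("base64");
}

async function requestToken(username, password) {
  const res = await fetch("https://api.sensity.ai/auth/tokens", {
    method: "POST",
    headers: { Authorization: buildBasicAuth(username, password) },
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const body = await res.json();
  return body.token; // Bearer token for subsequent API calls
}
```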
We use webhooks to notify your backend when an event happens. Webhooks are useful to trigger actions for asynchronous events, such as a verification task completing (successfully or not). This is best described in the Webhooks section.
Use the Sensity API via our SDKs to retrieve specific data from completed verification tasks.
You can find more details on the endpoints and the response models in the API section of the documentation.
Sensity's API provides the following solutions covering standard use cases. Each solution is composed of different services, each with its own endpoint. API users can serve customized use cases by combining multiple services in their user flow.
Performs an analysis designed to recognize the latest AI-based image, video and audio manipulation and generation.
This solution is composed of:
The above services accept media files, such as images, videos, and audio, as well as URLs as input.
Input Type | Supported Formats |
---|---|
Image | jpg, jpeg, png, tiff, gif, webp, jfif |
Video | mkv, flv, mov, mp4, avi, webm, 3gpp |
Audio | wav, mp3, m4a, ogg, aac, flac |
Limit | Max Value |
---|---|
File size | 32 MB |
Video duration | 30 min |
Video resolution | 2560p |
Audio duration | 20 min |
Limit | Max Value |
---|---|
Video duration | 30 min |
Audio duration | 20 min |
URL source | Youtube, Instagram, Facebook, Tiktok, Twitter, Dailymotion |
The above services also accept URLs pointing to supported media files, for example https://www.example.com/image.jpg, https://www.example.com/video.mp4, or https://www.example.com/audio.wav.
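Before uploading, a client can pre-check a filename against the supported formats listed above, so obviously unsupported files fail fast; a minimal sketch:

```javascript
// Sketch: map a filename to its media type using the supported-format
// tables above, or return null if the format is not supported.
const SUPPORTED = {
  image: ["jpg", "jpeg", "png", "tiff", "gif", "webp", "jfif"],
  video: ["mkv", "flv", "mov", "mp4", "avi", "webm", "3gpp"],
  audio: ["wav", "mp3", "m4a", "ogg", "aac", "flac"],
};

function mediaType(filename) {
  const ext = filename.split(".").pop().toLowerCase();
  for (const [type, exts] of Object.entries(SUPPORTED)) {
    if (exts.includes(ext)) return type;
  }
  return null; // unsupported format
}
```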
This section shows how to use the Sensity API endpoints to create and retrieve the results of a verification task using the provided endpoints. The guide assumes you have followed the steps in Initial setup and already have a developer account with an API token ready to use.
To create a specific analysis task, send a POST request to /tasks/{task_name} with the task parameters.
You will receive a JSON response with the following fields:
{
"report_id": "123456789",
"success": true
}
There are two ways to retrieve the results from the requested analysis:
Due to the asynchronous nature of the API, we strongly recommend using Webhooks.
To get the result of a task, make a GET request to /tasks/{task_name}/{report_id}.
You will receive a JSON object with a status field indicating the status of the task.
{
"status": "message"
// ... other task related fields
}
The possible values for status are:
completed: the other fields in the response will be populated with the analysis result.
in_progress: the analysis is still running. Try to call the endpoint later to get the result.
failed: something went wrong. Try to re-create the task and ask support if there are subsequent issues.
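The three statuses above suggest a simple polling loop. In this sketch the request function is injected so the loop is easy to test; in real use you would pass a function that calls GET /tasks/{task_name}/{report_id} with your token:

```javascript
// Sketch: poll a task until it leaves "in_progress". `fetchJson` is a
// user-supplied function returning the parsed JSON status response.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function pollTask(fetchJson, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let i = 0; i < maxAttempts; i++) {
    const body = await fetchJson();
    if (body.status === "completed") return body;
    if (body.status === "failed") throw new Error("Task failed; re-create it");
    await sleep(intervalMs); // status === "in_progress": wait and retry
  }
  throw new Error("Timed out waiting for task to complete");
}
```

For asynchronous workloads, webhooks (below) avoid this busy-waiting entirely.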
In this approach, you associate a custom user-provided endpoint with an API analysis event. Once the event occurs, the API sends the result of the event to that endpoint.
How to set up a webhook server is discussed in the Webhooks server side SDK section.
Once a verification task is created by a user, Sensity starts processing the data asynchronously. Because the processing happens at a later time, not in the response to your request, a mechanism is needed to notify your system about status changes of the verification task.
This is where webhooks come into play: a webhook notifies your system once a verification task is processed.
A webhook is nothing more than a simple user-provided POST endpoint on a server. The server should accept POST requests with the application/json content type.
If the server runs behind a firewall you should whitelist our IP address to allow incoming traffic from our servers. Get in contact with Sensity support for more information.
You can use a service like webhook.site to test receiving webhook events, or a tool such as ngrok to forward the requests to your local development server.
Sensity's webhook service supports the following events:
face_manipulation_analysis_complete
ai_generated_image_detection_analysis_complete
voice_analysis_complete
forensic_analysis_complete
When a verification task is done, your server will receive an event with the resulting data of the task. Each event response is described in the GET endpoint responses of the API reference and the Models reference.
To register a webhook, make a POST request to /webhooks/{event_name} using the name of the event and the URL you want to register.
Once the analysis has finished, the API will send a POST request to the registered URL with the analysis result.
Note: the webhook URL should be registered before the task analysis is requested. If delivery of the result message fails, the Sensity API will retry sending it up to 10 times. After the 10th consecutive failure, the result can only be retrieved through polling.
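Registration can be sketched with Node 18+ fetch; the token and webhook URL are placeholders, and the small form-encoding helper is illustrative:

```javascript
// Sketch: register a webhook URL for an event via POST /webhooks/{event_name}.
// Assumes Node 18+ (global fetch); `token` and `webhookUrl` are placeholders.
function encodeForm(params) {
  // application/x-www-form-urlencoded body
  return new URLSearchParams(params).toString();
}

async function registerWebhook(eventName, webhookUrl, token) {
  const res = await fetch(`https://api.sensity.ai/webhooks/${eventName}`, {
    method: "POST",
    headers: {
      Authorization: token,
      "content-type": "application/x-www-form-urlencoded",
    },
    body: encodeForm({ url: webhookUrl }),
  });
  const body = await res.json(); // { id: "...", success: true }
  return body.id;
}
```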
This section shows you how to create an Express Node.js server and listen for incoming webhook events. It assumes you have already registered a webhook as described in the previous section.
npm install express cors body-parser
Add the following code to index.js. This creates a server listening for incoming requests at http://localhost:3000/webhook.
const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");
const app = express();
const port = 3000;
app.use(bodyParser.json());
app.use(cors());
app.post("/webhook", (req, res) => {
// do logic here
console.log(req.body);
res.status(200).send("Success");
});
app.listen(port, () => {
console.log(`Listening at http://localhost:${port}`);
});
Run the Express server with:
node index.js
Then create a task. You can do this with curl as explained in the API Reference examples, or using one of the Client-side SDKs.
As soon as the verification task is completed you should see the results in the terminal output.
Listening at http://localhost:3000
{
id: '808264ae-bd79-457e-89cc-b643e4f55517',
status: 'completed',
result: { live: 'Live' }
}
Delete all the previously created authorization tokens for the user.
curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{
  "deleted_tokens": [
    "deleted_tokens",
    "deleted_tokens"
  ],
  "success": true
}
Get all authorization JSON Web Tokens (JWT) created for the user.
curl --request GET \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{
  "claims": {
    "key": {
      "audience": "audience",
      "endsWith": "endsWith",
      "expiresAt": 0,
      "issuedAt": 6,
      "issuer": "issuer",
      "subject": "subject"
    }
  },
  "success": true
}
Generate an authorization JSON Web Token (JWT) to access the API.
curl --request POST \
  --url https://api.sensity.ai/auth/tokens \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{
  "claims": {
    "audience": "audience",
    "endsWith": "endsWith",
    "expiresAt": 0,
    "issuedAt": 6,
    "issuer": "issuer",
    "subject": "subject"
  },
  "id": "id",
  "success": true,
  "token": "token"
}
Delete a specific authorization JSON Web Token (JWT).
Parameter | Type | Description |
---|---|---|
id (required) | string | Token ID |

curl --request DELETE \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{
  "deleted_tokens": [
    "deleted_tokens",
    "deleted_tokens"
  ],
  "success": true
}
Get a specific authorization JSON Web Token (JWT) to access the API.
Parameter | Type | Description |
---|---|---|
id (required) | string | Token ID |

curl --request GET \
  --url https://api.sensity.ai/auth/tokens/{id} \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
{
  "claims": {
    "key": {
      "audience": "audience",
      "endsWith": "endsWith",
      "expiresAt": 0,
      "issuedAt": 6,
      "issuer": "issuer",
      "subject": "subject"
    }
  },
  "success": true
}
This analysis detects potential AI manipulation of faces present in images and videos, as in the case of face swaps, face reenactment and lipsync.
Supported input files: image / video
Create a task to analyze a media file for deepfake manipulations in the region of the face.
Parameter | Type | Description |
---|---|---|
explain | boolean | Default: false. Enable explanation of prediction; available only if input is predicted as fake |
file | string <binary> | image / video file |
url | string | (Experimental) media source url |

curl --request POST \
  --url https://api.sensity.ai/tasks/face_manipulation \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form explain=false \
  --form url=string
{
  "report_id": "123456789",
  "success": true
}
Polling request for the result of a created face manipulation task.
Parameter | Type | Description |
---|---|---|
id (required) | string | report id |

curl --request GET \
  --url https://api.sensity.ai/tasks/face_manipulation/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "preview": "preview",
  "result": {
    "class_name": "real",
    "class_probability": 1,
    "explanation": [
      {
        "bbox": [0, 0],
        "bbox_average_size": [6, 6],
        "class_name": "real",
        "class_probability": 1,
        "face": "face",
        "face_heatmap": "face_heatmap",
        "frame_idx": 1,
        "frame_ms": 5.962133916683182,
        "id": 5,
        "tracks": [
          {
            "bbox": [2, 2],
            "frame_idx": 7,
            "frame_ms": 9.301444243932576,
            "frame_rate": 3.616076749251911
          }
        ]
      }
    ],
    "manipulation_type": "lipsync"
  },
  "status": "completed"
}
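A result payload like the one above can be summarized client-side; a minimal sketch using the field names from the response above (the output phrasing is illustrative):

```javascript
// Sketch: produce a one-line summary of a face manipulation task result.
function summarizeFaceResult(task) {
  if (task.status !== "completed") return `Task is ${task.status}`;
  const { class_name, class_probability, manipulation_type } = task.result;
  if (class_name === "fake") {
    // manipulation_type identifies the kind of detected manipulation
    return `fake (${manipulation_type}), confidence ${class_probability}`;
  }
  return `${class_name}, confidence ${class_probability}`;
}
```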
This analysis detects AI-generated photo-realistic images and video. Images may be created, for example, by Generative Adversarial Networks (GAN) and Diffusion Models (DM) like Stable Diffusion, MidJourney, Dalle-2 and others. Videos may be made with tools such as Runway, Sora, Luma, Pika, Kling and others. An optional explanation of prediction is provided for some classes of model generators and if the input is predicted as fake.
Supported input files: image / video
The task analyses an image or video to check whether it was generated with AI.
Parameter | Type | Description |
---|---|---|
explain | boolean | Default: false. Enable explanation of prediction; available only for some classes of model generators and if input is predicted as fake |
file | string <binary> | image / video file |
url | string | (Experimental) media source url |

curl --request POST \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form explain=false \
  --form url=string
{
  "report_id": "123456789",
  "success": true
}
Polling request for the result of a created AIGeneratedImage task.
Parameter | Type | Description |
---|---|---|
id (required) | string | report id |

curl --request GET \
  --url https://api.sensity.ai/tasks/ai_generated_image_detection/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "result": {
    "class_name": "fake",
    "class_probability": 0.999,
    "explanation": {
      "eye_mouth_overlays_image": "eye_mouth_overlays_image",
      "heatmap": "heatmap",
      "heatmap_object": "heatmap_object"
    },
    "model_attribution": "stylegan",
    "model_attribution_probability": 0.9
  },
  "status": "completed"
}
This analysis detects AI-generated voices and voice cloning in media. It also provides transcription and translation to English.
Supported input files: audio / video
Parameter | Type | Description |
---|---|---|
explain | boolean | Default: false. Enable explanation of prediction |
file | string <binary> | audio file |
transcribe | boolean | Default: true. Enable speech-to-text |
url | string | (Experimental) media source url |

curl --request POST \
  --url https://api.sensity.ai/tasks/voice_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data file=string \
  --data explain=false \
  --data transcribe=true \
  --data url=string
{
  "report_id": "123456789",
  "success": true
}
Polling request for the result of a created voice analysis task.
Parameter | Type | Description |
---|---|---|
id (required) | string | report id |

curl --request GET \
  --url https://api.sensity.ai/tasks/voice_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": {
    "class_name": "real",
    "class_probability": 1,
    "explanation": [
      {
        "class_name": "real",
        "class_probability": 1,
        "segments": [
          {
            "end": 0.8008281904610115,
            "start": 6.027456183070403
          }
        ],
        "speaker": "speaker"
      }
    ],
    "transcript": "{}"
  },
  "status": "completed"
}
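The per-speaker explanation entries above can be filtered to surface only flagged audio ranges; a minimal sketch using the field names from the response above:

```javascript
// Sketch: collect the segments flagged as fake from a voice analysis
// result's `explanation` array, keyed by speaker.
function fakeSegments(result) {
  const flagged = [];
  for (const entry of result.explanation || []) {
    if (entry.class_name !== "fake") continue;
    for (const seg of entry.segments || []) {
      flagged.push({ speaker: entry.speaker, start: seg.start, end: seg.end });
    }
  }
  return flagged;
}
```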
This service uses forensic analysis techniques to determine whether the file was digitally created or manipulated. In particular, the test will find traces of suspect software editing, use of screenshot software, file dates mismatch, multiple versions of the files, and more.
Supported input files: image / video / pdf / audio
Create a Forensic analysis task to check the given media file or document.
Parameter | Type | Description |
---|---|---|
additional_info | string | Additional information for report |
file | string <binary> | image / video / pdf / audio |
is_extract_images | boolean | Default: false. Request to send extracted images from pdf |
url | string | (Experimental) media source url |

curl --request POST \
  --url https://api.sensity.ai/tasks/forensic_analysis \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: multipart/form-data' \
  --form file=string \
  --form is_extract_images=false \
  --form additional_info=string \
  --form url=string
{
  "report_id": "123456789",
  "success": true
}
Polling request for the result of a created Forensic analysis task.
Parameter | Type | Description |
---|---|---|
id (required) | string | report id |

curl --request GET \
  --url https://api.sensity.ai/tasks/forensic_analysis/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{
  "additional_info": "additional_info",
  "error": "error",
  "event_type": "event_type",
  "id": "123456789",
  "result": {
    "document_details": {
      "GPS_location": "2 North Bullard St, New York, NY 10013, USA",
      "digital_signature_by": "Signer",
      "password_protected": false,
      "source": "Physical"
    },
    "extracted_images": [
      "extracted_images"
    ],
    "red_flags": {
      "AI_GENERATED": "AI generated content detected",
      "FILE_EDITED": "The file was edited after creation",
      "METADATA_ERROR": "The file metadata was edited or deleted",
      "SCREENSHOT_SOFTWARE": "Suspect software used for screenshot",
      "SUSPECT_SOFTWARE": "Critical software creator or editor",
      "SUSPECT_SOFTWARE_": "Suspect software creator or editor"
    },
    "trigger_values": {
      "AI_GENERATED": ["AI generated content detected"],
      "FILE_EDITED": ["The file was edited after creation"],
      "METADATA_ERROR": ["The file metadata was edited or deleted"],
      "SCREENSHOT_SOFTWARE": ["Suspect software used for screenshot"],
      "SUSPECT_SOFTWARE": ["Critical software creator or editor"],
      "SUSPECT_SOFTWARE_": ["Suspect software creator or editor"]
    }
  },
  "status": "completed"
}
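Since `red_flags` only contains triggered warnings, listing them is a simple object traversal; a minimal sketch using the field names from the response above:

```javascript
// Sketch: list the red flags triggered in a forensic analysis result
// as "CODE: message" strings.
function triggeredFlags(result) {
  return Object.entries(result.red_flags || {}).map(
    ([code, message]) => `${code}: ${message}`
  );
}
```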
Delete all webhook URLs assigned to an event.
Parameter | Type | Description |
---|---|---|
event (required) | string | Event name. Enum: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |

curl --request DELETE \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{
  "success": true
}
Get a list of webhook URLs assigned to a particular event.
Parameter | Type | Description |
---|---|---|
event (required) | string | Event name. Enum: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |

curl --request GET \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{
  "success": true,
  "urls": [
    {
      "id": "id",
      "url": "url"
    }
  ]
}
Assign an event to a URL to which a request will be sent when that event occurs.
Parameter | Type | Description |
---|---|---|
event (required) | string | Event name. Enum: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |
url (required) | string | Webhook URL |

curl --request POST \
  --url https://api.sensity.ai/webhooks/{event} \
  --header 'Authorization: REPLACE_KEY_VALUE' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data url=string
{
  "id": "id",
  "success": true
}
Delete a webhook URL that is assigned to an event.
Parameter | Type | Description |
---|---|---|
id (required) | string | Webhook ID |
event (required) | string | Event name. Enum: "ai_generated_image_detection_analysis_complete", "data_extraction_analysis_complete", "face_manipulation_analysis_complete", "face_matching_analysis_complete", "forensic_analysis_complete", "id_document_authentication_analysis_complete", "liveness_detection_analysis_complete", "voice_analysis_complete" |

curl --request DELETE \
  --url https://api.sensity.ai/webhooks/{event}/{id} \
  --header 'Authorization: REPLACE_KEY_VALUE'
{
  "success": true
}
Field | Type | Description |
---|---|---|
class_name | string | Enum: "real", "fake", "no_faces". Label attributed to the image / video |
class_probability | number [0..1] | Confidence score of the label attribution |
explanation | Array of objects (entities.FaceManipulationResultExplanation) | Face manipulation explanation |
manipulation_type | string | Enum: "faceswap", "face_reenactment", "lipsync". Manipulation type label attributed to the image / video |
{
  "class_name": "real",
  "class_probability": 1,
  "explanation": [
    {
      "bbox": [0, 0],
      "bbox_average_size": [6, 6],
      "class_name": "real",
      "class_probability": 1,
      "face": "face",
      "face_heatmap": "face_heatmap",
      "frame_idx": 1,
      "frame_ms": 5.962133916683182,
      "id": 5,
      "tracks": [
        {
          "bbox": [2, 2],
          "frame_idx": 7,
          "frame_ms": 9.301444243932576,
          "frame_rate": 3.616076749251911
        }
      ]
    }
  ],
  "manipulation_type": "lipsync"
}
Field | Type | Description |
---|---|---|
class_name | string | Enum: "real", "fake". Image label (real or fake) |
class_probability | number [0..1] | Confidence score of the label attributed to the image |
explanation | object (entities.AIGeneratedImageDetectionExplanation) | |
model_attribution | string | Enum: "stylegan", "stylegan2", "stylegan3", "generated_photos", "midjourney", "dalle-2", "stable-diffusion", "glide", "firefly", "blue-willow", "unstable-diffusion", "stable-dream". Prediction of the model used to generate the image, if classified as fake |
model_attribution_probability | number [0..1] | Confidence score of the model attribution |
{
  "class_name": "fake",
  "class_probability": 0.999,
  "explanation": {
    "eye_mouth_overlays_image": "eye_mouth_overlays_image",
    "heatmap": "heatmap",
    "heatmap_object": "heatmap_object"
  },
  "model_attribution": "stylegan",
  "model_attribution_probability": 0.9
}
Field | Type | Description |
---|---|---|
error | string | Error message |
event_type | string | |
id | string | Identifier of the task |
result | object (entities.VoiceAnalysisResult) | |
status | string | Enum: "in_progress", "completed", "failed". Status of the current task |
{
  "error": "error",
  "event_type": "event_type",
  "id": "c445f38a-f404-48c8-9054-31b289baa685",
  "result": {
    "class_name": "real",
    "class_probability": 1,
    "explanation": [
      {
        "class_name": "real",
        "class_probability": 1,
        "segments": [
          {
            "end": 0.8008281904610115,
            "start": 6.027456183070403
          }
        ],
        "speaker": "speaker"
      }
    ],
    "transcript": "{}"
  },
  "status": "completed"
}
Field | Type | Description |
---|---|---|
document_details | object (DocumentDetails) | |
extracted_images | Array of strings <byte> | Contains the extracted images found in a PDF. Note: only available for PDF files and if is_extract_images is enabled |
red_flags | object (RedFlags) | Contains different warning messages that inform you if the file was modified or corrupted. Some fields are only available for certain file types and won't appear if the warning is not triggered |
trigger_values | object (TriggerValues) | |
{
  "document_details": {
    "GPS_location": "2 North Bullard St, New York, NY 10013, USA",
    "digital_signature_by": "Signer",
    "password_protected": false,
    "source": "Physical"
  },
  "extracted_images": [
    "extracted_images"
  ],
  "red_flags": {
    "AI_GENERATED": "AI generated content detected",
    "FILE_EDITED": "The file was edited after creation",
    "METADATA_ERROR": "The file metadata was edited or deleted",
    "SCREENSHOT_SOFTWARE": "Suspect software used for screenshot",
    "SUSPECT_SOFTWARE": "Critical software creator or editor",
    "SUSPECT_SOFTWARE_": "Suspect software creator or editor"
  },
  "trigger_values": {
    "AI_GENERATED": ["AI generated content detected"],
    "FILE_EDITED": ["The file was edited after creation"],
    "METADATA_ERROR": ["The file metadata was edited or deleted"],
    "SCREENSHOT_SOFTWARE": ["Suspect software used for screenshot"],
    "SUSPECT_SOFTWARE": ["Critical software creator or editor"],
    "SUSPECT_SOFTWARE_": ["Suspect software creator or editor"]
  }
}