Model Inferences

Inferences are how you request model-predicted annotations on your source files.

All model inference requests are asynchronous: you submit the request, then poll for its status.

Request Inference

POST https://api.annolab.ai/v1/infer/batch

Creates a model inference job to run on one or more source files.

Headers

| Name | Type | Description |
| --- | --- | --- |
| Authorization* | string | Your API key. Requesting inferences requires the "Model Run" permission on the project where the sources exist. Example: {"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"} |

Request Body

| Name | Type | Description |
| --- | --- | --- |
| projectIdentifier* | string | Identifier of the project that contains the source files. Either the id or the unique name of the project. |
| modelIdentifier* | string | Identifier of the model to run. Either the id or the unique name of the model. |
| sourceIds* | array | Array of source ids identifying the files the model will run on. |
| outputLayerIdentifier* | string\|integer | Layer in which predictions will be generated. |
| groupName* | string | Name of the group the user belongs to. |
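
For illustration, a complete request body combining these fields might look like the following (the group, project, model, and layer names are taken from the Python example later on this page):

{
    "groupName": "Company Name",
    "projectIdentifier": "My Project",
    "modelIdentifier": "Staple + Classify Documents",
    "sourceIds": [4024, 5853],
    "outputLayerIdentifier": "Gold Set"
}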

A successful request returns the created inference job:

{
    "inferenceJobId": 12,
    "status": "Queued",
    "projectName": "Sample Project",
    "projectId": 1,
    "outputLayerName": "Gold Set",
    "outputLayerId": 12,
    "sourceIds": [3240, 4414]
}
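
The inferenceJobId in this response is the value to pass as the jobId path parameter when polling the status endpoint below.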

Request Inference Status

GET https://api.annolab.ai/v1/infer/batch/{jobId}

Returns the status of the inference job.

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| jobId* | integer | Id of the inference job returned by the batch request. |

Headers

| Name | Type | Description |
| --- | --- | --- |
| Authorization* | string | Your API key. Requesting inferences requires the "Model Run" permission on the project where the sources exist. Example: {"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"} |

A successful request returns the current job status:

{
    "inferenceJobId": 12,
    "status": "Queued",
    "projectId": 1,
    "sourceIds": [3240, 4414]
}
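
To check a job's status in isolation, a minimal call might look like this (the job id and API key below are placeholders):

import requests

headers = {'Authorization': 'Api-Key XXXXXXX-XXXXXXX-XXXXXXX'}
job_id = 12  # inferenceJobId returned by the batch request

status = requests.get(f'https://api.annolab.ai/v1/infer/batch/{job_id}', headers=headers).json()
print(status['status'])  # e.g. "Queued" or "Finished"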

The following example calls a specific model on two sources and polls the job status until the inference completes or a timeout is reached.

import requests
import time

ANNO_LAB_API_KEY = 'XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX'

inferenceBody = {
  'groupName': 'Company Name',
  'projectIdentifier': 'My Project',
  'sourceIds': [4024, 5853],
  'modelIdentifier': 'Staple + Classify Documents',
  'outputLayerIdentifier': 'Gold Set'
}

headers = {
  'Authorization': 'Api-Key '+ANNO_LAB_API_KEY,
}

url = 'https://api.annolab.ai/v1/infer/batch'

response = requests.post(url, headers=headers, json=inferenceBody)

print(response.json())

# inferenceJobId is an integer, so convert it before building the status URL
get_url = 'https://api.annolab.ai/v1/infer/batch/' + str(response.json()['inferenceJobId'])
maximum_timeout_seconds = 1800
time_taken = 0 
inference_is_finished = False

start_time = time.time()
while not inference_is_finished and time_taken < maximum_timeout_seconds:
  # Status requests need only the Authorization header; no request body is required
  status_response = requests.get(get_url, headers=headers).json()
  if status_response['status'] in ['Finished', 'Errored']:
    print("Inference Finished")
    print(status_response)
    inference_is_finished = True
  else:
    time.sleep(5)  # pause between polls instead of busy-looping
  time_taken = time.time() - start_time
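
Once the job reports Finished, the model's predictions are available in the output layer named in the request ("Gold Set" in this example). If the loop exits without reaching a terminal status, the job is still running and can be polled again later.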
