Model Inferences
Inferences are how you request a model's predicted annotations on your source files.
All model inference requests are asynchronous, meaning you must make the request and then poll for the status.
Request Inference
POST
https://api.annolab.ai/v1/infer/batch
Creates a model inference job to be run on one or more source files.
Headers
Authorization*
string
Your API key. Requesting inferences requires the "Model Run" permission on the project where the sources exist.
{"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"}
Request Body
projectIdentifier*
string
Identifier of the project that contains the source files. Either the id or the unique name of the project.
modelIdentifier*
string
Identifier of the model that will be run. Either the id or the unique name of the model.
sourceIds*
array
Array of source ids identifying the sources the model will be run on.
outputLayerIdentifier*
string|integer
Layer in which predictions will be generated.
groupName*
string
Name of the group the user belongs to.
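As a minimal sketch of the request above, the Python code below assembles a batch inference request with the requests library. The project, model, layer, and group identifiers and the source ids are placeholder values, and the response handling assumes the API returns JSON.

import requests

API_URL = "https://api.annolab.ai/v1/infer/batch"
headers = {"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"}

# Placeholder identifiers -- substitute your own project, model, layer, and group.
payload = {
    "projectIdentifier": "my-project",
    "modelIdentifier": "my-model",
    "sourceIds": [101, 102],
    "outputLayerIdentifier": "predictions",
    "groupName": "my-group",
}

response = requests.post(API_URL, json=payload, headers=headers)
response.raise_for_status()
print(response.json())  # Assumed to include the id of the spawned inference job.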
Request Inference Status
GET
https://api.annolab.ai/v1/infer/batch/{job_id}
Returns the status of the inference job.
Path Parameters
jobId*
integer
Id of the inference job that was spawned by the batch inference request.
Headers
Authorization*
string
Your API key. Requesting inferences requires the "Model Run" permission on the project where the sources exist.
{"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"}
The code below shows how to call a specific model on two sources, poll the job status until the inference is complete, and then retrieve the results.
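The following Python sketch illustrates that workflow. The identifiers are placeholders, and the "jobId" and "status" field names, the completion values, and the final retrieval step are assumptions about the response shape, not documented behavior.

import time
import requests

API_BASE = "https://api.annolab.ai/v1"
headers = {"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"}

# 1. Request inference on two sources (identifiers here are placeholders).
payload = {
    "projectIdentifier": "my-project",
    "modelIdentifier": "my-model",
    "sourceIds": [101, 102],
    "outputLayerIdentifier": "predictions",
    "groupName": "my-group",
}
job = requests.post(f"{API_BASE}/infer/batch", json=payload, headers=headers)
job.raise_for_status()
job_id = job.json()["jobId"]  # Assumed field name for the spawned job's id.

# 2. Poll the status endpoint until the job finishes.
while True:
    status = requests.get(f"{API_BASE}/infer/batch/{job_id}", headers=headers)
    status.raise_for_status()
    body = status.json()
    if body.get("status") in ("Completed", "Failed"):  # Assumed status values.
        break
    time.sleep(5)  # Wait between polls to avoid hammering the API.

# 3. Retrieve the results. The docs do not specify a separate retrieval call here,
#    so this assumes the final status payload carries the predicted annotations
#    (or a link to them) for the output layer.
print(body)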