Model Inferences
Inferences are how you request model predictions of annotations on your source files
All model inference requests are asynchronous: you submit the request and then poll for the job's status (see the example at the end of this page).
POST
https://api.annolab.ai/v1/infer/batch
Creates a model inference job to be run on one or more source files
Headers

Name | Type | Description |
---|---|---|
Authorization* | string | Your API key. Requesting inferences requires the "Model Run" permission on the project where the sources exist, e.g. {"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"} |

Request Body

Name | Type | Description |
---|---|---|
projectIdentifier* | string | Identifier of the project that contains the source files. Either the id or the unique name of the project. |
modelIdentifier* | string | Identifier of the model to run. Either the id or the unique name of the model. |
sourceIds* | array | Array of source ids identifying the sources the model will be run on. |
outputLayerIdentifier* | string or integer | Layer in which the predictions will be generated. |
groupName* | string | Name of the group the user belongs to. |
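For illustration, here is a minimal sketch of creating an inference job using Python and the requests library. The project, model, layer, group, and source id values are placeholders, and the assumption that the response body includes the spawned jobId is not documented on this page.

```python
import requests

API_KEY = "XXXXXXX-XXXXXXX-XXXXXXX"  # your AnnoLab API key

# Placeholder identifiers -- substitute your own project, model, layer, group, and sources.
payload = {
    "projectIdentifier": "My Project",
    "modelIdentifier": "my-model",
    "sourceIds": [101, 102],
    "outputLayerIdentifier": "predictions",
    "groupName": "my-group",
}

response = requests.post(
    "https://api.annolab.ai/v1/infer/batch",
    json=payload,
    headers={"Authorization": f"Api-Key {API_KEY}"},
)
response.raise_for_status()

# Assumption: the response body contains the id of the spawned inference job.
job_id = response.json()["jobId"]
print(f"Spawned inference job {job_id}")
```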
GET
https://api.annolab.ai/v1/infer/batch/{job_id}
Returns the status of the inference job
Path Parameters

Name | Type | Description |
---|---|---|
jobId* | integer | Integer id of the inference job that was spawned. |

Headers

Name | Type | Description |
---|---|---|
Authorization* | string | Your API key. Requesting inferences requires the "Model Run" permission on the project where the sources exist, e.g. {"Authorization": "Api-Key XXXXXXX-XXXXXXX-XXXXXXX"} |
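A corresponding sketch for checking a job's status. The shape of the status response is not documented on this page, so the code below simply prints the JSON body.

```python
import requests

API_KEY = "XXXXXXX-XXXXXXX-XXXXXXX"  # your AnnoLab API key
job_id = 12345  # jobId of the inference job that was spawned

status_response = requests.get(
    f"https://api.annolab.ai/v1/infer/batch/{job_id}",
    headers={"Authorization": f"Api-Key {API_KEY}"},
)
status_response.raise_for_status()

# The response fields are not documented here, so just inspect the raw JSON.
print(status_response.json())
```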
Example

This code shows how to call a specific model on two sources, poll the status until the inference job is complete, and then retrieve the results.