Bring Your Own Model
Guide to connecting your own model to V7
V7 allows you to integrate your own models into the platform. These can then be included in a Model stage or equipped in Auto-Annotate to pre-label your items. This can significantly reduce human annotation time and let you focus on correcting your model's inaccuracies.
These models behave the same as those trained using Darwin's own AutoML. Most importantly, they let you customize your workflow stages: you can register a model that is exposed via HTTP and manage it the same way you manage models trained in Darwin.
Minimum requirements
In order for the integration to be successful, your model needs to conform to some specific requirements. It has to:
- Expect application/json request payloads
- Respond with application/json responses
- Conform to the specific JSON schemas in both request and response payloads
- Handle POST /infer requests, accepting images as input and responding with the list of results
- Handle GET /classes requests, responding with the list of label types (along with class names) encoded as JSON
BYOM Credit Usage
BYOM uses the same number of credits as other automated actions, like webhooks. There is no server cost.
Registering the model via the REST API
Check Server is Running
You need to make sure that the server hosting your model is running and able to listen for requests before the model is registered via the API. For example, with the example model integration shown at the bottom of this page, that code needs to be running before registration.
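A minimal sketch of such a readiness check, using only the standard library (the helper name is ours, and it assumes the mandatory /classes endpoint is mounted directly under your base URL):

```python
import json
import urllib.request


def server_is_ready(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if GET {base_url}/classes answers with a JSON list,
    i.e. the model server is up and serving its mandatory endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/classes", timeout=timeout) as resp:
            return isinstance(json.load(resp), list)
    except (OSError, ValueError):
        return False
```

If this returns False, fix the deployment before running the registration snippet below.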
You can register the model via the API. The following shell snippet shows how to do it:
APIKEY="<your-key-here>"
# the following assumes that you have `jq` installed:
TEAM_ID=$(
curl \
-XGET \
-s \
-H "content-type: application/json" \
-H "authorization: ApiKey $APIKEY" \
"https://darwin.v7labs.com/api/users/token_info" \
| jq ".selected_team.id"
)
# the rest you'll need to specify yourself:
MODEL_NAME="your-model-name"
EXTERNAL_URL="http://your.externalmodel.ai/api"
# the following are optional, feel free to omit them in the
# payload below if not needed:
BASIC_AUTH_USERNAME="some-user-1"
BASIC_AUTH_PASSWORD="some-password-1"
AUTH_SECRET="some-secret-1"
AUTH_SECRET_HEADER="X-MY-AUTH"
# you'll need to have your model deployment respond to the
# mandatory /classes endpoint. here, you'll need to get the
# classes response and pass it into Darwin.
#
# again, feel free to leave out parts of authentication that
# don't apply in your case:
CLASSES=$(
curl \
-XGET \
-H "content-type: application/json" \
-H "authorization: Basic $(echo -n "$BASIC_AUTH_USERNAME:$BASIC_AUTH_PASSWORD" | base64)" \
-H "$AUTH_SECRET_HEADER: $AUTH_SECRET" \
-s "$EXTERNAL_URL/classes"
)
# now that we have all the parameters, let's actually register
# the model in Darwin:
API_URL="https://darwin.v7labs.com/ai"
curl --request POST \
--header "Content-Type: application/json" \
--header "authorization: ApiKey $APIKEY" \
--url "$API_URL/trained_models" \
--data "{
\"name\": \"$MODEL_NAME\",
\"team_id\": $TEAM_ID,
\"external_url\": \"$EXTERNAL_URL\",
\"basic_auth_username\": \"$BASIC_AUTH_USERNAME\",
\"basic_auth_password\": \"$BASIC_AUTH_PASSWORD\",
\"auth_secret\": \"$AUTH_SECRET\",
\"auth_secret_header\": \"$AUTH_SECRET_HEADER\",
\"is_external\": true,
\"classes\": $CLASSES
}"
API Key Permissions
You need to make sure that the API key used to register the model has permission to create models.
Security
The form allows you to choose between the two forms of authentication currently supported:
- HTTP Basic Authentication
- Secret Key
Consider using SSL!
You should always use SSL; otherwise, a man-in-the-middle attack could sniff the username and password or the secret key.
The "secret key" authentication scheme allows you to specify the HTTP header name and its value. For example, your exposed model could expect requests with an X-Auth header carrying some specific, secret value.
The credentials that you provide at this step are always securely stored in Darwin's database and encrypted at rest.
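On the model side, you should verify that incoming requests actually carry the configured secret before serving them. A minimal sketch (the header name and value reuse the example registration variables from the snippet above; compare_digest avoids timing leaks):

```python
import hmac


def check_secret(headers: dict, header_name: str = "X-MY-AUTH",
                 expected: str = "some-secret-1") -> bool:
    """Constant-time check of the secret header Darwin attaches to requests."""
    supplied = headers.get(header_name, "")
    return hmac.compare_digest(supplied, expected)
```

Reject the request (e.g. with a 401) whenever this returns False.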
Communication schema
GET {base-url}/classes
When your model is being registered, a GET {base-url}/classes request is made from Darwin. The response determines the type of annotations and class names that will be available to link in the workflows.
Classes response example
Here’s an example response for some main annotation types:
[
  {
    "name": "Car",
    "type": "bounding_box"
  },
  {
    "name": "Rust",
    "type": "polygon"
  },
  {
    "name": "Blurry Image",
    "type": "tag"
  }
]
The list of supported main annotation types is as follows:
- bounding_box
- cuboid
- ellipse
- line
- keypoint
- polygon
- skeleton
- tag
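Before registering, it can help to sanity-check your /classes payload against this list; a small sketch (the helper is illustrative, not part of the V7 API):

```python
SUPPORTED_TYPES = {
    "bounding_box", "cuboid", "ellipse", "line",
    "keypoint", "polygon", "skeleton", "tag",
}


def validate_classes(classes: list) -> bool:
    """Raise ValueError if any class entry is malformed or uses an
    annotation type not in the supported list above."""
    for entry in classes:
        if not {"name", "type"} <= set(entry):
            raise ValueError(f"entry missing name/type: {entry!r}")
        if entry["type"] not in SUPPORTED_TYPES:
            raise ValueError(f"unsupported annotation type: {entry['type']!r}")
    return True
```

Running this over the exact JSON your endpoint returns catches typos before Darwin rejects the registration.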
POST {base-url}/infer
Whenever Darwin needs to run inference using your model, it makes a POST {base-url}/infer request. This happens when, for example:
- You run an inference from the model’s playground in the UI
- Your model is used in an "AI Model stage" of the annotation workflow
In the case of the AI Model stage, each eligible image is sent for inference through this POST request. Darwin encapsulates the image or video within the request payload and expects a response that conforms to a known schema derived from Darwin JSON.
If the request times out or fails for any reason, inference moves on to the next item in the dataset. In the case of the AI Model stage, such items remain in the same stage, so it is easy to filter for the items the model failed to respond to.
Request and response schemas
Requests and responses are validated against the JSON Schemas specified below.
Inference request JSON schema
$id: https://darwin.v7labs.com/schemas/external-models/inference-request.schema.json
$schema: https://json-schema.org/draft/2020-12/schema
title: Inference request
description: Provides an image or a video to run the inference on
type: object
oneOf:
  - required:
      - image
  - required:
      - video
properties:
  image:
    $schema: https://json-schema.org/draft/2020-12/schema
    anyOf:
      - required:
          - base64
      - required:
          - url
    description: An image to run inference on
    properties:
      base64:
        type: string
      url:
        type: string
    type: object
  video:
    $schema: https://json-schema.org/draft/2020-12/schema
    description: A video to run inference on
    oneOf:
      - required:
          - url
      - required:
          - frame_urls
    properties:
      frame_urls:
        items:
          oneOf:
            - required:
                - base64
            - required:
                - url
          properties:
            base64:
              type: string
            url:
              type: string
          type: object
        type: array
      url:
        type: string
    type: object
Inference Request Example
Below is an example inference request. If the request is part of the Auto-Annotate feature, it will include bounding box coordinates as shown; otherwise, params will be an empty dictionary.
{
  "message": "infer request",
  "details": {
    "image": {
      "url": "<image url>"
    },
    "params": {
      "bbox": {
        "h": 400,
        "w": 300,
        "x": 550,
        "y": 120
      },
      "id": "d0dca498-dd0e-4150-b94e-253f06b9caf2",
      "image": {
        "url": "<image url>"
      }
    }
  }
}
Additionally, images are sent as a base64 string when submitted through the model test page:
{
  "image": {
    "base64": "{base64_image_representation}"
  },
  "external_model": {
    "url": "<model_url>",
    "auth": {
      "method": "auth_method",
      "auth_secret_header": "auth_header",
      "basic_auth_username": "auth_username",
      "auth_secret": "auth_secret",
      "basic_auth_password": "auth_password"
    },
    "name": "model_name",
    "transform": "none"
  }
}
Note: the request.json payload will only include the image and params objects.
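Since every request your model sees carries just those two keys, a handler can read them uniformly; a minimal sketch (the function name is ours):

```python
def parse_infer_request(payload: dict):
    """Extract the image spec and the optional Auto-Annotate bounding box
    from an inference request body. The image spec carries either a
    "url" or a "base64" key; bbox is None outside Auto-Annotate."""
    image = payload.get("image")
    if not image or not ({"url", "base64"} & set(image)):
        raise ValueError("request must include an image with a url or base64 key")
    bbox = payload.get("params", {}).get("bbox")
    return image, bbox
```

Your model can then crop to the bbox when present and run on the full image otherwise.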
Inference response JSON schema
$id: https://darwin.v7labs.com/schemas/external-models/inference-response.schema.json
$schema: https://json-schema.org/draft/2020-12/schema
title: Inference response
description: JSON response to an inference request. Encapsulates a list of results
type: object
properties:
  results:
    items:
      properties:
        attributes:
          $ref: '#/$defs/attributes'
        bounding_box:
          $ref: '#/$defs/bounding_box'
        confidence:
          $ref: '#/$defs/confidence'
        cuboid:
          $ref: '#/$defs/cuboid'
        directional_vector:
          $ref: '#/$defs/directional_vector'
        ellipse:
          $ref: '#/$defs/ellipse'
        keypoint:
          $ref: '#/$defs/keypoint'
        line:
          $ref: '#/$defs/line'
        label:
          type: string
        name:
          type: string
        polygon:
          $ref: '#/$defs/polygon'
        skeleton:
          $ref: '#/$defs/skeleton'
        tag:
          type: object
        text:
          $ref: '#/$defs/text'
      oneOf:
        - required:
            - bounding_box
        - required:
            - cuboid
        - required:
            - directional_vector
        - required:
            - ellipse
        - required:
            - line
        - required:
            - keypoint
        - required:
            - polygon
        - required:
            - skeleton
        - required:
            - tag
      required:
        - name
        - label
        - confidence
      type: object
    type: array
  status:
    enum:
      - succeeded
      - failed
    type: string
required:
  - status
$defs:
  attributes:
    properties:
      attributes:
        items:
          type: string
    required:
      - attributes
    type: array
  bounding_box:
    properties:
      h:
        type: number
        minimum: 0
      w:
        type: number
        minimum: 0
      x:
        type: number
      y:
        type: number
    required:
      - h
      - w
      - x
      - y
    type: object
  confidence:
    maximum: 1
    minimum: 0
    type: number
  cuboid:
    properties:
      back:
        $ref: '#/$defs/bounding_box'
      front:
        $ref: '#/$defs/bounding_box'
    required:
      - back
      - front
    type: object
  directional_vector:
    properties:
      angle:
        minimum: 0
        type: number
      length:
        minimum: 0
        type: number
    required:
      - angle
      - length
    type: object
  ellipse:
    properties:
      angle:
        minimum: 0
        type: number
      center:
        $ref: '#/$defs/keypoint'
      radius:
        $ref: '#/$defs/keypoint'
    required:
      - angle
      - center
      - radius
    type: object
  keypoint:
    properties:
      x:
        type: number
      y:
        type: number
    required:
      - x
      - y
    type: object
  line:
    properties:
      path:
        items:
          $ref: '#/$defs/keypoint'
    required:
      - path
    type: object
  polygon:
    $ref: '#/$defs/line'
  skeleton:
    properties:
      nodes:
        items:
          properties:
            occluded:
              type: boolean
            x:
              minimum: 0
              type: number
            y:
              minimum: 0
              type: number
          required:
            - occluded
            - x
            - y
          type: object
        type: array
    required:
      - nodes
    type: object
  text:
    properties:
      text:
        type: string
    required:
      - text
    type: object
Inference Response Example
Below is an example inference response for a complex polygon annotation produced by an external model:
{
  "results": [
    {
      "confidence": 1,
      "label": "background",
      "name": "background",
      "polygon": {
        "path": [
          { "x": 677.0, "y": 896.0 },
          { "x": 676.0, "y": 896.0 },
          { "x": 675.0, "y": 897.0 },
          { "x": 674.0, "y": 897.0 },
          { "x": 675.0, "y": 897.0 },
          { "x": 676.0, "y": 898.0 },
          { "x": 675.0, "y": 899.0 },
          { "x": 674.0, "y": 899.0 },
          { "x": 671.0, "y": 902.0 }
        ],
        "additional_paths": [
          [
            { "x": 1, "y": 1 },
            { "x": 2, "y": 2 },
            { "x": 3, "y": 3 }
          ]
        ]
      }
    }
  ],
  "status": "succeeded"
}
Note: additional_paths contains the other paths of a complex polygon (such as the hole in a donut). Note also that path is a list, whereas additional_paths is a list of lists.
The example above is indicative of the JSON structure and not a typical annotation.
Only One Main Annotation Type per Inference Object
Please include only one main annotation type per inference object in your response; more than one will result in an error.
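One way to enforce this constraint in your own code is to build each result through a small helper; a sketch (the names are ours, and it sets label equal to name for simplicity):

```python
MAIN_TYPES = {
    "bounding_box", "cuboid", "directional_vector", "ellipse",
    "line", "keypoint", "polygon", "skeleton", "tag",
}


def make_result(name: str, confidence: float, **annotation) -> dict:
    """Build one result entry, enforcing exactly one main annotation type
    plus the required name/label/confidence fields."""
    present = MAIN_TYPES & set(annotation)
    if len(present) != 1:
        raise ValueError(f"expected exactly one main annotation type, got {sorted(present)}")
    return {"name": name, "label": name, "confidence": confidence, **annotation}


def make_response(results: list) -> dict:
    """Wrap results in the response envelope Darwin expects."""
    return {"status": "succeeded", "results": results}
```

Detections that would need two types (say, a box and a polygon) should be emitted as two separate result entries instead.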
Model Setup and Mapping Classes
Once the above is complete, you should enable the model and map your external classes to those in V7.
For Auto-Annotate, the instructions are here.
For the AI Model stage, the instructions are here.
Example model integration
Here’s a Python example of a “Bring Your Own Model” integration that handles inference requests via HTTP using Flask (your_model stands in for your own model code):
import base64
import tempfile
import traceback
import urllib.request

from flask import Flask, jsonify, request
from PIL import Image

from your_model import Model  # your own model wrapper

app = Flask(__name__)
model = Model()


def resolve_image(image_spec: dict[str, str]) -> Image.Image:
    # The image arrives either as a URL or as an inline base64 string.
    if "url" in image_spec:
        with tempfile.NamedTemporaryFile() as file:
            urllib.request.urlretrieve(image_spec["url"], file.name)
            return Image.open(file.name).convert("RGB")
    elif "base64" in image_spec:
        with tempfile.NamedTemporaryFile() as file:
            file.write(base64.decodebytes(image_spec["base64"].encode("utf-8")))
            file.flush()
            return Image.open(file.name).convert("RGB")
    else:
        raise ValueError("Invalid image spec")


@app.route("/api/classes", methods=["GET"])
def classes():
    return jsonify(model.classes)


@app.route("/api/infer", methods=["POST"])
def infer():
    payload = request.json
    try:
        image = resolve_image(payload["image"])
        class_name, confidence = model(image)
        return jsonify(
            status="succeeded",
            results=[
                {
                    "name": class_name,
                    "label": class_name,
                    "confidence": confidence,
                    "tag": {},
                }
            ],
        )
    except Exception:
        print(traceback.format_exc())
        return jsonify(status="failed", results=[])


if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0")
Use with Consensus
Just as with V7 hosted models, you can compare human annotators with your externally hosted model using the Consensus stage.
External Model Latency Requirements
To avoid timeout errors, your model should respond in under 30 seconds.
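If your model's runtime can vary, you can cap it yourself and return a failed status instead of letting Darwin's request time out. A sketch using a thread pool (the helper and the 25-second margin are our choices, leaving headroom under the ~30-second window):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

_executor = ThreadPoolExecutor(max_workers=4)


def run_with_deadline(fn, *args, deadline_s: float = 25.0):
    """Run fn(*args) in a worker thread; return its result, or None if it
    does not finish before the deadline (the worker thread keeps running,
    so fn should be safe to abandon)."""
    future = _executor.submit(fn, *args)
    try:
        return future.result(timeout=deadline_s)
    except FutureTimeout:
        return None
```

In the infer handler, a None result would then map to a {"status": "failed", "results": []} response.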
Support with External Frameworks
Please note that with BYOM we do not support frameworks such as SageMaker or Vertex AI.