Google Cloud Storage configuration

Using V7's external GCP integration, you can keep your data stored within a private Google Cloud Storage bucket. Check out the diagram here to see how it works, and if you're ready to get started, follow our step-by-step instructions to create the integration.


The GCP integration is available on V7's Business and Enterprise plans. You can find out more about what each plan includes on our pricing page.

Read / Write access

To set up an external GCP bucket, you first need to give V7's GCP Service Account ([email protected]) access:

  • Read via storage.objects.get
  • Write via storage.objects.create and storage.objects.delete (optional)

# gsutil command for read/write
gsutil iam ch serviceAccount:[email protected]:objectAdmin gs://external-bucket-name

If you don't need Darwin to process images after they are uploaded (e.g. generating thumbnails, splitting videos into frames), you can leave out the Write access (storage.objects.create and storage.objects.delete).

# gsutil command for read only
gsutil iam ch serviceAccount:[email protected]:objectViewer gs://external-bucket-name
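
The gsutil commands above use the objectAdmin / objectViewer shorthands, which expand to full Cloud Storage IAM roles. As a minimal sketch (assuming the service account address given above), this is the IAM binding being added to the bucket's policy, expressed as the JSON you would merge in via the Cloud Storage JSON API's buckets.setIamPolicy method:

```python
import json

# The service account address, as given above.
SERVICE_ACCOUNT = "[email protected]"

def build_binding(read_only: bool) -> dict:
    """Return the IAM binding granting V7 read-only or read/write access.

    gsutil's objectViewer / objectAdmin shorthands expand to these roles.
    """
    role = "roles/storage.objectViewer" if read_only else "roles/storage.objectAdmin"
    return {"role": role, "members": [f"serviceAccount:{SERVICE_ACCOUNT}"]}

# Read/write binding, equivalent to the objectAdmin command above.
print(json.dumps(build_binding(read_only=False), indent=2))
```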

CORS access

When annotators are requesting images to annotate, they will load them directly from your GCP bucket via a presigned URL. However, since that GCP bucket sits on a different domain than the V7 app, a CORS policy needs to be configured on the bucket.

This can only be configured with gsutil or the GCP APIs:

cat > gcp_cors.json << EOF
[
  {
    "origin": [""],
    "method": ["GET", "PUT"],
    "responseHeader": ["*"],
    "maxAgeSeconds": 3600
  }
]
EOF

gsutil cors set gcp_cors.json gs://external-storage-bucket
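
If you prefer to generate the CORS file from a script, here is a minimal sketch writing the same configuration (the origin list is left empty here, as in the example above; fill in the origin your annotators load Darwin from):

```python
import json

# CORS rules for the bucket, matching the heredoc above.
cors_config = [
    {
        "origin": [""],            # your Darwin app origin
        "method": ["GET", "PUT"],  # GET to load images, PUT for uploads
        "responseHeader": ["*"],
        "maxAgeSeconds": 3600,     # browsers may cache the preflight for 1h
    }
]

with open("gcp_cors.json", "w") as f:
    json.dump(cors_config, f, indent=2)

# Apply it with: gsutil cors set gcp_cors.json gs://external-storage-bucket
```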


When this is all set up, please message [email protected] with the following details:

  • GCP bucket name
  • an optional prefix where we can upload thumbnails if needed (often /darwin/)
  • your team name

We will then turn on the external access for your team.

If you encounter any issues or have any questions, feel free to contact us at [email protected].