Wasabi S3 Configuration
Using V7's external Wasabi S3 integration, you can keep your data stored within a private bucket. Check out the diagram here to see how it works, and if you're ready to get started, follow our step-by-step instructions below:
The Wasabi integration is available on V7's Business plan and above. You can find out more about what each plan includes on our pricing page.
Read-write & Read-only access
Bucket Name Restrictions
Bucket names containing dots (.) will not work due to how Wasabi handles virtual-host-style HTTPS. More details are available here.
You have the choice of integrating your bucket in either a read-write or read-only fashion. At a high level the differences are:
- Read-write allows V7 read & write access to your bucket. This is necessary to generate image thumbnails and extract frames from video files. Thumbnails and frames are written back to your bucket in a predictable structure at a location of your choice.
- Read-only restricts V7 to reading data from your bucket. In this scenario, you'll have to pre-extract thumbnails and/or video frames as necessary and make them available in your S3 bucket. More details about this are available here.
1: S3 Bucket Policy
The first step is to grant bucket access to V7's Wasabi user:
arn:aws:iam::100000274636:user/darwin
Access can be granted as read-write or read-only. In both cases, you'll need to replace your-s3-bucket-name with the exact name of your S3 bucket. If you already have a policy for your bucket, then you only need to add the section under Statement.
Wasabi policy type
Please make sure that the policies below are added directly to your S3 bucket (resource-based policy), not via IAM (role-based policy). If in doubt, we recommend following this Wasabi guide.
Read-write policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyForExternalAccess",
  "Statement": [
    {
      "Sid": "DarwinAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::100000274636:user/darwin"
      },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-s3-bucket-name/*"
    }
  ]
}
Read-only policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyForExternalAccess",
  "Statement": [
    {
      "Sid": "DarwinAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::100000274636:user/darwin"
      },
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::your-s3-bucket-name/*"
    }
  ]
}
Partial Bucket Access
You can grant V7 access to specific sub-folders in your bucket. To do this, simply adjust Resource so that it covers only the subfolder of your choice. For example: your-s3-bucket-name/my_subfolder/* (a full policy sketch is shown below).
Note that if you're using a read-write configuration, you'll need to grant this access to the location you choose for V7 to write image thumbnails and video frames back to.
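As a concrete illustration, here is the read-write policy from above with its Resource scoped to a hypothetical my_subfolder directory (bucket and folder names are placeholders; adjust them to your own setup):
{
  "Version": "2012-10-17",
  "Id": "PolicyForExternalAccess",
  "Statement": [
    {
      "Sid": "DarwinAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::100000274636:user/darwin"
      },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-s3-bucket-name/my_subfolder/*"
    }
  ]
}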
Required Permissions for MIRAX Files
To register MIRAX files (.mrxs), you will also need to add the s3:ListBucket permission to the policy outlined above. This is because .mrxs files come with a folder of the same name that contains additional image data across multiple files, and our platform needs to be able to list all of those files to pull them correctly before processing.
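As a sketch only, a read-write policy extended for MIRAX support could add a second statement granting s3:ListBucket. Note that s3:ListBucket applies to the bucket itself, so its Resource is the bucket ARN without the trailing /*; the DarwinListAccess Sid is simply an illustrative name, and you should fold the statement into your own existing policy as appropriate:
{
  "Version": "2012-10-17",
  "Id": "PolicyForExternalAccess",
  "Statement": [
    {
      "Sid": "DarwinAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::100000274636:user/darwin"
      },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-s3-bucket-name/*"
    },
    {
      "Sid": "DarwinListAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::100000274636:user/darwin"
      },
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-s3-bucket-name"
    }
  ]
}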
2: Activation
Finally, to activate your external storage, log into Darwin and navigate to Settings > Storage > New Storage Integration. Populate all relevant fields and select Save; an illustrative example follows the list:
- Storage provider: Wasabi
- Name: The name you will refer to your S3 bucket connection as. This will be the name you use when registering external items. We strongly recommend setting it to the same name as your S3 bucket
- S3 Bucket: The exact name of your S3 bucket. For example, if your bucket is s3://bucket-name, this should be set to bucket-name
- S3 Prefix: If using read-write storage, an optional directory in your bucket where image thumbnails and video frames will be written. If left blank, they will be written to the base of your bucket under /data
- S3 Region: The Wasabi region that your S3 bucket sits in
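As a purely illustrative example, a bucket named my-wasabi-bucket hosted in Wasabi's eu-central-1 region (all values here are placeholders) might be configured as:
- Storage provider: Wasabi
- Name: my-wasabi-bucket
- S3 Bucket: my-wasabi-bucket
- S3 Prefix: darwin_data
- S3 Region: eu-central-1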
Additional Storage Integrations
If your subscription includes additional storage integrations, these can be added by going to your Settings > Storage and adding the details above to a New Storage Integration.
These can be added without speaking to our Support team, although we encourage you to get in touch if you have any questions.
FAQs
- What is the expiration time on a pre-signed URL?
Any signed URL, whether it is for a read or a write request, expires after at most 4 hours. However, we cache URLs on our backend, so our API can sometimes return cached URLs with a shorter remaining expiration time.
- Is V7 doing automatic renewal on its side to keep the URLs valid?
No. Renewal is not automatic: the user has to call the API again to renew the signature so that the URL becomes accessible once more. Some intervention from the user is therefore required if they'd like to access the URL after it has expired.
- When users register images with V7 to view and annotate them on the platform (not a direct upload to V7's S3, but using pre-signed URLs), can they see and annotate the images for as long as they want, until they delete them, rather than just for 4 hours?
Yes. Each time a URL expires, a refresh is invoked by the user to create a new signature through our backend services. Our UI calls these APIs on virtually every page load: when entering the workview, for example, our backend sends the browser a fresh signature that works for 4 hours, and on the next page load the user receives another set of freshly signed URLs. With that in mind, team members can access files registered via external storage inside the platform without issue.
If you encounter any issues or have any questions feel free to contact us at [email protected]