# Object Storage

S3-compatible object storage for files, images, and documents. Declare buckets in `catalog-info.yaml` — credentials and SDKs are provisioned automatically.
## How it works

When you declare storage buckets in `catalog-info.yaml`, the builder automatically provisions them during each deploy. Here's what happens step by step:

- The builder reads `spec.storage` from your `catalog-info.yaml`
- For each storage entry, it provisions a MinIO bucket and generates dynamic credentials via HashiCorp Vault
- A Vault Agent sidecar is injected into your pod. It writes S3 credentials to `/vault/secrets/storage` in env-file format and auto-rotates them on a schedule
- The Docker entrypoint sources the Vault file at startup, making the credentials available as environment variables
- The `@insureco/storage` SDK reads directly from the Vault file before each operation, so it automatically picks up rotated credentials without a restart
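The entrypoint's sourcing step can be sketched roughly as follows. This is illustrative, not the platform's actual entrypoint — the `load_vault_env` helper name and the `set -a` technique are assumptions:

```shell
# Hypothetical sketch: load a Vault-rendered env file (plain KEY=value
# lines) into the environment before starting the app.
load_vault_env() {
  env_file="${1:-/vault/secrets/storage}"
  if [ -f "$env_file" ]; then
    set -a               # auto-export every variable assigned while sourcing
    . "$env_file"        # e.g. S3_HOST=..., S3_ACCESS_KEY_ID=...
    set +a
  fi
}

# A real entrypoint would then hand off to the app process:
#   load_vault_env && exec "$@"
```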
## YAML configuration

Add a `storage` array under `spec` in your `catalog-info.yaml`:

```yaml
spec:
  storage:
    - name: default
      tier: s3-sm
```

The bucket name follows the pattern `<service>-<env>-<name>`. For example, a service called `my-api` deployed to production with a storage entry named `default` creates a bucket called `my-api-prod-default`.
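The naming pattern is simple concatenation; a tiny helper (hypothetical, for illustration only) makes it concrete:

```typescript
// Hypothetical helper illustrating the <service>-<env>-<name>
// bucket naming pattern described above.
function bucketName(service: string, env: string, name: string): string {
  return `${service}-${env}-${name}`;
}

console.log(bucketName("my-api", "prod", "default")); // "my-api-prod-default"
```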
## Multiple buckets

You can declare multiple storage entries to separate concerns:

```yaml
spec:
  storage:
    - name: uploads
      tier: s3-md
    - name: exports
      tier: s3-sm
```

Each bucket gets its own environment variable. The default bucket uses `S3_BUCKET`, while named buckets use the pattern `S3_<NAME>_BUCKET` (e.g., `S3_UPLOADS_BUCKET`, `S3_EXPORTS_BUCKET`).
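The `S3_<NAME>_BUCKET` convention can be sketched as a small function — hypothetical code, shown only to pin down the mapping:

```typescript
// Hypothetical illustration of the env var naming convention:
// the default bucket maps to S3_BUCKET; named buckets upper-case
// their name and place it between S3_ and _BUCKET.
function bucketEnvVar(name: string): string {
  return name === "default" ? "S3_BUCKET" : `S3_${name.toUpperCase()}_BUCKET`;
}

console.log(bucketEnvVar("default")); // "S3_BUCKET"
console.log(bucketEnvVar("uploads")); // "S3_UPLOADS_BUCKET"
```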
## Storage tiers

Each bucket is assigned a tier that determines its capacity and monthly gas cost:

| Tier | Capacity | Gas/Month | USD/Month |
|---|---|---|---|
| `s3-sm` | 1 GB | 200 | $2 |
| `s3-md` | 5 GB | 800 | $8 |
| `s3-lg` | 25 GB | 3,000 | $30 |
| `s3-xl` | 100 GB | 10,000 | $100 |
You can change a bucket's tier by editing `catalog-info.yaml` and redeploying. Data is preserved across tier changes.

## Credential management

Storage credentials are managed by Vault Agent, which writes them to `/vault/secrets/storage` inside your pod. The file contains these variables in env-file format:
| Variable | Description | Example |
|---|---|---|
| `S3_HOST` | MinIO server host | `64.23.181.20` |
| `S3_PORT` | MinIO server port | `9000` |
| `S3_ACCESS_KEY_ID` | Dynamic credential (auto-rotated) | (auto-generated) |
| `S3_SECRET_ACCESS_KEY` | Dynamic credential (auto-rotated) | (auto-generated) |
| `S3_BUCKET` | Default bucket name | `my-api-prod-default` |
| `S3_UPLOADS_BUCKET` | Named bucket (example) | `my-api-prod-uploads` |
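For reference, the rendered file is plain `KEY=value` lines. An illustrative snapshot, using the example values above with the auto-generated credentials elided:

```
S3_HOST=64.23.181.20
S3_PORT=9000
S3_ACCESS_KEY_ID=<auto-generated>
S3_SECRET_ACCESS_KEY=<auto-generated>
S3_BUCKET=my-api-prod-default
S3_UPLOADS_BUCKET=my-api-prod-uploads
```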
Credentials are scoped to only the buckets declared by your service. They cannot access other services' buckets.
If you use the `@insureco/storage` SDK, this is handled transparently — the SDK reads from the Vault file before each operation and detects rotated credentials. If you use a raw S3 client, you are responsible for re-reading the Vault file when you get an access-denied error.

## Using the SDK
The `@insureco/storage` SDK (v0.2.0+) is Vault-native. In production, it reads credentials directly from the Vault secrets file and automatically handles credential rotation. In local development, it falls back to environment variables.
### Install

```shell
npm install @insureco/storage
```

### Default bucket

```typescript
import { createStorage } from '@insureco/storage'

// Default bucket — reads from Vault file in production, env vars locally
const storage = createStorage()

// Upload a file
await storage.upload('photos/avatar.jpg', imageBuffer)

// Download a file
const data = await storage.download('photos/avatar.jpg')

// Presigned URL (share without exposing credentials)
const url = await storage.getSignedUrl('photos/avatar.jpg', { expiresIn: 3600 })
```

### Named buckets
Pass the bucket name to `createStorage` to target a specific bucket:

```typescript
const uploads = createStorage('uploads') // reads S3_UPLOADS_BUCKET
const exports = createStorage('exports') // reads S3_EXPORTS_BUCKET

await uploads.upload('invoices/inv-001.pdf', pdfBuffer)
const report = await exports.download('reports/monthly.csv')
```

### Credential resolution order
The SDK resolves credentials in this order:

- Vault file (`/vault/secrets/storage`) — used in production pods, supports auto-rotation
- Environment variables (`S3_HOST`, `S3_ACCESS_KEY_ID`, etc.) — used for local development

Do not use `tawa config set` for S3 variables. The builder provisions them via Vault automatically. Setting them manually bypasses rotation and will cause access-denied errors when Vault rotates the credentials.

### SDK API reference
| Method | Returns | Description |
|---|---|---|
| `upload(key, data, opts?)` | `void` | Upload Buffer, string, or stream |
| `download(key)` | `Buffer` | Download as Buffer |
| `downloadStream(key)` | `Readable` | Download as stream (large files) |
| `delete(keys)` | `void` | Delete one key or an array of keys |
| `list(prefix?)` | `ListItem[]` | List objects |
| `exists(key)` | `boolean` | Check if an object exists |
| `stat(key)` | `ObjectStat` | Get size, etag, lastModified, contentType |
| `getSignedUrl(key, opts?)` | `string` | Presigned download URL |
| `getUploadUrl(key, opts?)` | `string` | Presigned upload URL |
## Local development

For local development, set the env vars in a `.env` file and run MinIO in Docker:

```
# .env (local development)
S3_HOST=127.0.0.1
S3_PORT=9000
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=my-app-dev
```

### Running MinIO locally

```shell
docker run -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"
```

The MinIO console is available at http://localhost:9001, where you can browse buckets, upload files, and manage access policies. The default credentials are `minioadmin` / `minioadmin`.
In local development, the SDK falls back to these `S3_*` environment variables. Your code works identically in local dev and production — only the credential source changes.

## Scaffold support
The `tawa sample` command supports a `--with-storage` flag that pre-configures storage for your new project:

```shell
tawa sample my-api --api --with-storage
```

This automatically:

- Adds `@insureco/storage` as a dependency in `package.json`
- Adds a `spec.storage` entry in `catalog-info.yaml`
- Creates a `src/storage.ts` helper file with a pre-configured `createStorage()` instance
## Related
- catalog-info.yaml Reference — full field reference including storage section
- Databases — database provisioning for MongoDB, Redis, and Neo4j
- Environment Variables — how env vars are injected into your pods
- Config & Secrets — managing secrets and config vars via the CLI