Object Storage

S3-compatible object storage for files, images, and documents. Declare buckets in catalog-info.yaml — credentials and SDKs are provisioned automatically.

How it works

When you declare storage buckets in catalog-info.yaml, the builder automatically provisions them during each deploy. Here's what happens step by step:

  1. Builder reads spec.storage from your catalog-info.yaml
  2. For each storage entry, it provisions a MinIO bucket and generates dynamic credentials via HashiCorp Vault
  3. A Vault Agent sidecar is injected into your pod. It writes S3 credentials to /vault/secrets/storage in env-file format and auto-rotates them on a schedule
  4. The Docker entrypoint sources the Vault file at startup, making the credentials available as environment variables
  5. The @insureco/storage SDK reads directly from the Vault file before each operation, so it automatically picks up rotated credentials without a restart

Zero config: You don't need to create buckets, manage credentials, or handle credential rotation manually. The builder and Vault handle everything automatically.
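The Vault file in step 4 uses plain KEY=VALUE lines. As a sketch of what sourcing it amounts to (illustrative only — the real entrypoint uses shell `source`, and `parseVaultEnv`/`loadVaultEnv` are hypothetical names, not SDK exports):

```typescript
import { readFileSync } from 'node:fs'

// Parse Vault's env-file format: one KEY=VALUE pair per line,
// blank lines and #-comments ignored.
export function parseVaultEnv(text: string): Record<string, string> {
  const vars: Record<string, string> = {}
  for (const line of text.split('\n')) {
    const trimmed = line.trim()
    if (!trimmed || trimmed.startsWith('#')) continue
    const eq = trimmed.indexOf('=')
    if (eq === -1) continue
    vars[trimmed.slice(0, eq)] = trimmed.slice(eq + 1)
  }
  return vars
}

// Rough equivalent of `source /vault/secrets/storage` at startup:
// load the parsed pairs into process.env.
export function loadVaultEnv(path = '/vault/secrets/storage'): Record<string, string> {
  const vars = parseVaultEnv(readFileSync(path, 'utf8'))
  Object.assign(process.env, vars)
  return vars
}
```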

YAML configuration

Add a storage array under spec in your catalog-info.yaml:

spec:
  storage:
    - name: default
      tier: s3-sm

The bucket name follows the pattern <service>-<env>-<name>. For example, a service called my-api deployed to production with a storage entry named default creates a bucket called my-api-prod-default.

Multiple buckets

You can declare multiple storage entries to separate concerns:

spec:
  storage:
    - name: uploads
      tier: s3-md
    - name: exports
      tier: s3-sm

Each bucket gets its own environment variable. The default bucket uses S3_BUCKET, while named buckets use the pattern S3_<NAME>_BUCKET (e.g., S3_UPLOADS_BUCKET, S3_EXPORTS_BUCKET).
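Both naming conventions are mechanical, so they can be expressed as two small helpers (hypothetical functions for illustration, not SDK exports):

```typescript
// Bucket naming: <service>-<env>-<name>, per the pattern above.
export function bucketName(service: string, env: string, name: string): string {
  return `${service}-${env}-${name}`
}

// Env var naming: the default bucket uses S3_BUCKET,
// named buckets use S3_<NAME>_BUCKET.
export function bucketEnvVar(name: string): string {
  return name === 'default' ? 'S3_BUCKET' : `S3_${name.toUpperCase()}_BUCKET`
}
```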

Storage tiers

Each bucket is assigned a tier that determines its capacity and monthly gas cost:

| Tier  | Capacity | Gas/Month | USD/Month |
|-------|----------|-----------|-----------|
| s3-sm | 1 GB     | 200       | $2        |
| s3-md | 5 GB     | 800       | $8        |
| s3-lg | 25 GB    | 3,000     | $30       |
| s3-xl | 100 GB   | 10,000    | $100      |

Start small: You can upgrade a bucket's tier later by changing the tier value in catalog-info.yaml and redeploying. Data is preserved across tier changes.
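The gas-to-USD rate in the tier table is uniform ($2 for 200 gas, i.e. one cent per gas), so tier cost is a direct lookup. A sketch (the `Tier` type and `TIERS` table are illustrative, not part of the SDK):

```typescript
type Tier = 's3-sm' | 's3-md' | 's3-lg' | 's3-xl'

// Capacities and gas rates copied from the tier table above.
export const TIERS: Record<Tier, { capacityGB: number; gasPerMonth: number }> = {
  's3-sm': { capacityGB: 1, gasPerMonth: 200 },
  's3-md': { capacityGB: 5, gasPerMonth: 800 },
  's3-lg': { capacityGB: 25, gasPerMonth: 3_000 },
  's3-xl': { capacityGB: 100, gasPerMonth: 10_000 },
}

// Monthly USD cost at the table's rate of 100 gas per dollar.
export function monthlyUsd(tier: Tier): number {
  return TIERS[tier].gasPerMonth / 100
}
```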

Credential management

Storage credentials are managed by Vault Agent, which writes them to /vault/secrets/storage inside your pod. The file contains these variables in env-file format:

| Variable             | Description                       | Example             |
|----------------------|-----------------------------------|---------------------|
| S3_HOST              | MinIO server host                 | 64.23.181.20        |
| S3_PORT              | MinIO server port                 | 9000                |
| S3_ACCESS_KEY_ID     | Dynamic credential (auto-rotated) | (auto-generated)    |
| S3_SECRET_ACCESS_KEY | Dynamic credential (auto-rotated) | (auto-generated)    |
| S3_BUCKET            | Default bucket name               | my-api-prod-default |
| S3_UPLOADS_BUCKET    | Named bucket (example)            | my-api-prod-uploads |

Credentials are scoped to only the buckets declared by your service. They cannot access other services' buckets.

Auto-rotation: Vault rotates S3 credentials periodically. The Vault Agent sidecar updates the file with new credentials automatically. If you use the @insureco/storage SDK, this is handled transparently — the SDK reads from the Vault file before each operation and detects rotated credentials. If you use a raw S3 client, you are responsible for re-reading the Vault file when you get an access-denied error.
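For raw S3 clients, the re-read-on-access-denied advice can be wrapped in a small retry helper. This is a sketch, not part of any SDK; the `AccessDenied` error code matches what S3-compatible servers typically return, but check your client's actual error shape:

```typescript
// Run an S3 operation with freshly read credentials; if it fails with an
// access-denied error (i.e. the credentials were rotated underneath us),
// re-read the Vault file once and retry.
export async function withFreshCredentials<T>(
  readCreds: () => Promise<Record<string, string>>, // e.g. parse /vault/secrets/storage
  run: (creds: Record<string, string>) => Promise<T>,
): Promise<T> {
  try {
    return await run(await readCreds())
  } catch (err) {
    const code = (err as { code?: string }).code
    if (code !== 'AccessDenied') throw err
    // Vault Agent has already written new keys to the file by now.
    return await run(await readCreds())
  }
}
```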

Using the SDK

The @insureco/storage SDK (v0.2.0+) is Vault-native. In production, it reads credentials directly from the Vault secrets file and automatically handles credential rotation. In local development, it falls back to environment variables.

Install

npm install @insureco/storage

Default bucket

import { createStorage } from '@insureco/storage'

// Default bucket — reads from Vault file in production, env vars locally
const storage = createStorage()

// Upload a file
await storage.upload('photos/avatar.jpg', imageBuffer)

// Download a file
const data = await storage.download('photos/avatar.jpg')

// Presigned URL (share without exposing credentials)
const url = await storage.getSignedUrl('photos/avatar.jpg', { expiresIn: 3600 })

Named buckets

Pass the bucket name to createStorage to target a specific bucket:

const uploads = createStorage('uploads')        // reads S3_UPLOADS_BUCKET
const exportsBucket = createStorage('exports')  // reads S3_EXPORTS_BUCKET
// (avoid naming the variable `exports` — it collides with the
// module-scope `exports` binding when compiled to CommonJS)

await uploads.upload('invoices/inv-001.pdf', pdfBuffer)
const report = await exportsBucket.download('reports/monthly.csv')

Credential resolution order

The SDK resolves credentials in this order:

  1. Vault file (/vault/secrets/storage) — used in production pods, supports auto-rotation
  2. Environment variables (S3_HOST, S3_ACCESS_KEY_ID, etc.) — used for local development

Never set S3 credentials manually. Do not use tawa config set for S3 variables. The builder provisions them via Vault automatically. Setting them manually bypasses rotation and will cause access-denied errors when Vault rotates the credentials.
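The resolution order can be sketched as a single function: prefer the Vault file when it exists (production pods), otherwise fall back to S3_* env vars (local dev). The function and `S3Credentials` shape are illustrative, not the SDK's actual internals:

```typescript
interface S3Credentials {
  host: string
  port: string
  accessKeyId: string
  secretAccessKey: string
}

export function resolveCredentials(
  vaultFileText: string | null,            // contents of /vault/secrets/storage, or null if absent
  env: Record<string, string | undefined>, // process.env fallback for local dev
): S3Credentials {
  // Source 1: the Vault file (env-file format, KEY=VALUE per line).
  // Source 2: plain environment variables.
  const source: Record<string, string | undefined> = vaultFileText
    ? Object.fromEntries(
        vaultFileText
          .split('\n')
          .filter((line) => line.includes('='))
          .map((line) => [line.slice(0, line.indexOf('=')), line.slice(line.indexOf('=') + 1)]),
      )
    : env
  return {
    host: source.S3_HOST ?? '',
    port: source.S3_PORT ?? '9000',
    accessKeyId: source.S3_ACCESS_KEY_ID ?? '',
    secretAccessKey: source.S3_SECRET_ACCESS_KEY ?? '',
  }
}
```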

SDK API reference

| Method                   | Returns    | Description                                  |
|--------------------------|------------|----------------------------------------------|
| upload(key, data, opts?) | void       | Upload Buffer, string, or stream             |
| download(key)            | Buffer     | Download as Buffer                           |
| downloadStream(key)      | Readable   | Download as stream (large files)             |
| delete(keys)             | void       | Delete one key or an array of keys           |
| list(prefix?)            | ListItem[] | List objects                                 |
| exists(key)              | boolean    | Check if an object exists                    |
| stat(key)                | ObjectStat | Get size, etag, lastModified, contentType    |
| getSignedUrl(key, opts?) | string     | Presigned download URL                       |
| getUploadUrl(key, opts?) | string     | Presigned upload URL                         |
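The list/delete/exists methods compose naturally. Here's a sketch of a prefix-cleanup helper written against a minimal stand-in interface (the interface is assumed from the table above, not imported from the SDK, and `purgePrefix` is a hypothetical helper):

```typescript
// Minimal shape of the SDK surface used below (assumed from the API table).
interface ListItem { key: string }
interface StorageLike {
  list(prefix?: string): Promise<ListItem[]>
  delete(keys: string | string[]): Promise<void>
  exists(key: string): Promise<boolean>
}

// Delete every object under a prefix; returns how many were removed.
export async function purgePrefix(storage: StorageLike, prefix: string): Promise<number> {
  const items = await storage.list(prefix)
  if (items.length > 0) {
    await storage.delete(items.map((item) => item.key))
  }
  return items.length
}
```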

Local development

For local development, set the env vars in a .env file and run MinIO in Docker:

# .env (local development)
S3_HOST=127.0.0.1
S3_PORT=9000
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=my-app-dev

Running MinIO locally

docker run -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"

The MinIO console is available at http://localhost:9001, where you can browse buckets, upload files, and manage access policies. The default credentials are minioadmin / minioadmin.

SDK works locally too: When no Vault file exists (i.e., outside a Tawa pod), the SDK falls back to reading S3_* environment variables. Your code works identically in local dev and production — only the credential source changes.

Scaffold support

The tawa sample command supports a --with-storage flag that pre-configures storage for your new project:

tawa sample my-api --api --with-storage

This automatically:

  • Adds @insureco/storage as a dependency in package.json
  • Adds a spec.storage entry in catalog-info.yaml
  • Creates a src/storage.ts helper file with a pre-configured createStorage() instance

Related