# Databases
How database provisioning works on the Tawa platform — from YAML config to a running connection in your pod.
## How it works
When you declare databases in `catalog-info.yaml`, the builder automatically provisions the connection during each deploy. Here's what happens, step by step:
1. The builder reads `spec.databases` from your `catalog-info.yaml`.
2. For each database entry, it generates a connection string using the platform's database hosts.
3. It creates a Kubernetes Secret in your service's namespace (e.g., `my-api-db-mongodb`).
4. The secret is mounted as an environment variable in your pod (e.g., `MONGODB_URI`).
5. Your app reads the env var on startup and connects.
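The last step, reading the env var on startup, is worth doing with a fail-fast check so a missing database declaration surfaces immediately. A minimal sketch (`requireEnv` is an illustrative helper, not part of any platform SDK):

```typescript
// Fail fast on startup when a required database env var is missing.
// requireEnv is an illustrative helper, not part of any platform SDK.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`${name} not configured; is the database declared in catalog-info.yaml?`)
  }
  return value
}

// On startup:
// const mongoUri = requireEnv('MONGODB_URI')
```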
## Supported types
Three database types are supported:
| Type | Env var | Use case |
|---|---|---|
| `mongodb` | `MONGODB_URI` | Document storage, primary datastore for most services |
| `redis` | `REDIS_URL` | Caching, sessions, queues (BullMQ), pub/sub |
| `neo4j` | `NEO4J_URI` | Graph data, relationship queries, knowledge graphs |
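If your service wants to validate its own configuration at startup, the table above can be expressed as a lookup. This map is illustrative only, not a platform export:

```typescript
// Env var injected for each supported database type (mirrors the table above).
const DB_ENV_VARS: Record<'mongodb' | 'redis' | 'neo4j', string> = {
  mongodb: 'MONGODB_URI',
  redis: 'REDIS_URL',
  neo4j: 'NEO4J_URI',
}
```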
## YAML example
```yaml
spec:
  databases:
    - type: mongodb
    - type: redis
```

This gives your pod two environment variables: `MONGODB_URI` and `REDIS_URL`, both ready to use.

Only the three types above are supported: `type: postgres`, `type: mysql`, or any other value will fail during `tawa preflight`.

## Connection strings
The platform generates these connection string formats:
| Type | Format | Example (prod) |
|---|---|---|
| `mongodb` | `mongodb://<host>:27017/<service>-<env>` | `mongodb://db.tawa.insureco.io:27017/my-api-prod` |
| `redis` | `redis://<host>:6379/0` | `redis://db.tawa.insureco.io:6379/0` |
| `neo4j` | `bolt://<host>:7687` | `bolt://db.tawa.insureco.io:7687` |
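The formats above can be sketched as a small builder. `connectionString` is a hypothetical helper for illustration; the platform does not expose such a function:

```typescript
type DbType = 'mongodb' | 'redis' | 'neo4j'

// Build a connection string in the documented format.
// host, service, and env come from the deploy context.
function connectionString(type: DbType, host: string, service: string, env: string): string {
  switch (type) {
    case 'mongodb':
      return `mongodb://${host}:27017/${service}-${env}`
    case 'redis':
      return `redis://${host}:6379/0`
    case 'neo4j':
      return `bolt://${host}:7687`
  }
}
```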
## Database naming
MongoDB databases are named `<service>-<environment>` by default. This means sandbox and production get separate databases automatically:
| Service | Environment | MongoDB database name |
|---|---|---|
| my-api | sandbox | my-api-sandbox |
| my-api | prod | my-api-prod |
| my-api | uat | my-api-uat |
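This naming rule (including the `name` override described below) amounts to a one-liner; `mongoDbName` is a hypothetical helper for illustration:

```typescript
// MongoDB database name: an explicit `name` from catalog-info.yaml wins,
// otherwise the default <service>-<env> pattern applies.
function mongoDbName(service: string, env: string, name?: string): string {
  return name ?? `${service}-${env}`
}
```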
### Custom database name
You can override the default MongoDB database name with the name field:
```yaml
spec:
  databases:
    - type: mongodb
      name: shared-data # Uses "shared-data" instead of "<service>-<env>"
```

Redis has no equivalent `name` field; it is a shared instance, so prefix your keys (e.g., `myapp:sessions:*`) to avoid collisions.

## Connecting from your app
### MongoDB (Mongoose)
```typescript
import mongoose from 'mongoose'

const uri = process.env.MONGODB_URI
if (!uri) throw new Error('MONGODB_URI not configured')

await mongoose.connect(uri)
console.log('Connected to MongoDB:', mongoose.connection.db?.databaseName)
```

### Redis (ioredis)
```typescript
import Redis from 'ioredis'

const url = process.env.REDIS_URL
if (!url) throw new Error('REDIS_URL not configured')

const redis = new Redis(url)
await redis.set('myapp:health', 'ok')
```

### Redis (BullMQ)
```typescript
import { Queue, Worker } from 'bullmq'

const connection = { url: process.env.REDIS_URL }
const queue = new Queue('jobs', { connection })
const worker = new Worker('jobs', async (job) => {
  // process job
}, { connection })
```

### Neo4j
```typescript
import neo4j from 'neo4j-driver'

const uri = process.env.NEO4J_URI
if (!uri) throw new Error('NEO4J_URI not configured')

const driver = neo4j.driver(uri)
const session = driver.session()
const result = await session.run('MATCH (n) RETURN count(n) AS count')
```

## Environments & isolation
Each environment gets its own Kubernetes namespace and its own database secrets:
| Environment | K8s namespace | Secret name | MongoDB database |
|---|---|---|---|
| sandbox | my-api-sandbox | my-api-db-mongodb | my-api-sandbox |
| prod | my-api-prod | my-api-db-mongodb | my-api-prod |
| uat | my-api-uat | my-api-db-mongodb | my-api-uat |
Secrets have the same name in each namespace but contain different connection strings. This means your app code is identical across environments — only the injected env var changes.
## What the builder creates
For each database entry, the builder:
- Creates a K8s Secret named `<service>-db-<type>` in the target namespace
- Injects the env var into your pod via Helm values

The operation is idempotent: deploying again updates the secret without creating duplicates.
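As a rough sketch, the generated Secret might look like the following. Only the `<service>-db-<type>` naming is documented above; the key name and exact layout here are assumptions:

```yaml
# Illustrative shape only; key names and layout are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: my-api-db-mongodb   # <service>-db-<type>
  namespace: my-api-prod
type: Opaque
stringData:
  MONGODB_URI: mongodb://db.tawa.insureco.io:27017/my-api-prod
```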
## Local development
For local development, set the env vars in a .env file:
```ini
# .env (local development)
MONGODB_URI=mongodb://localhost:27017/my-api-dev
REDIS_URL=redis://localhost:6379/0
NEO4J_URI=bolt://localhost:7687
```

Your app reads `process.env.MONGODB_URI` whether it's running locally or in a pod; the value just comes from a different source.
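Alternatively, a local fallback in code works without a `.env` file. The localhost defaults below match the `.env` example above:

```typescript
// Use the injected env var when present (in a pod or via .env),
// otherwise fall back to a local development default.
const mongoUri = process.env.MONGODB_URI ?? 'mongodb://localhost:27017/my-api-dev'
const redisUrl = process.env.REDIS_URL ?? 'redis://localhost:6379/0'
```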
### Running databases locally
```bash
# Start MongoDB, Redis, and Neo4j with Docker
docker run -d --name mongo -p 27017:27017 mongo:7
docker run -d --name redis -p 6379:6379 redis:7
docker run -d --name neo4j -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=none neo4j:5
```

## Direct access
Need to inspect data in Studio 3T or mongosh? The `tawa db` commands give you temporary, firewall-whitelisted access to your MongoDB databases without SSH tunnels.
### Connect to a database
```bash
# Whitelist your IP for sandbox (default, 8 hours)
tawa db connect my-svc

# Production, custom TTL
tawa db connect my-svc --prod --ttl 4

# UAT environment
tawa db connect my-svc --uat
```

This returns a connection string you can paste directly into Studio 3T, Compass, or mongosh:

```
mongodb://svc_my-svc_prod_ro:[email protected]:27017/my-svc-prod?authSource=my-svc-prod
```

### Access levels
Your access level is determined by your org role:
| Org role | MongoDB access | Notes |
|---|---|---|
| viewer | — | Cannot request direct access |
| member | read-only | Can browse collections and run queries |
| admin | read-write | Full access, can also revoke others |
### How it works
1. Your public IP is detected automatically (via Cloudflare headers).
2. The builder adds your IP to a kernel-level firewall whitelist with the requested TTL.
3. A read-only or read-write MongoDB user is provisioned (based on your org role).
4. You receive a connection string with embedded credentials.
5. After the TTL expires, your IP is automatically removed from the whitelist and the connection drops.
### List active whitelist entries
```bash
# List all active entries for a service
tawa db whitelist my-svc

# Filter by environment
tawa db whitelist my-svc --prod
```

### Revoke access early
```bash
# Revoke a specific whitelist entry (admin only)
tawa db revoke my-svc <entry-id>
```

Revoking immediately removes the IP from the firewall. Any open connections will drop on the next query.
## FAQ
### Do I need to create the database manually?
No. MongoDB auto-creates databases on first write. Redis and Neo4j are shared instances that are always running. You just declare the type in your YAML and use the injected env var.
### Can I use the same database across services?
Yes. Use the `name` field to point multiple services at the same MongoDB database:
```yaml
# In service-a/catalog-info.yaml
spec:
  databases:
    - type: mongodb
      name: shared-db

# In service-b/catalog-info.yaml
spec:
  databases:
    - type: mongodb
      name: shared-db
```

### Where does `MONGODB_URI` come from at runtime?
During deploy, the builder creates a Kubernetes Secret (e.g., `my-api-db-mongodb`) in your namespace. The Helm chart mounts that secret as an environment variable in your pod. Your code reads it with `process.env.MONGODB_URI`.
### What if I deploy without databases in my YAML?

Nothing happens: no secrets are created, no env vars are injected. If your code tries to read `process.env.MONGODB_URI`, it will be `undefined`.
### Can I add a database after the first deploy?

Yes. Add the database entry to `catalog-info.yaml` and redeploy. The builder will create the new secret and inject the env var on the next deploy.
### Is my data persisted between deploys?

Yes. Databases run on persistent infrastructure outside of Kubernetes. Redeploying your service doesn't touch the data.
### Can I connect to my database from my laptop?

Yes: use `tawa db connect my-svc --prod` to whitelist your IP and get a connection string. See the Direct access section above.
## Related
- catalog-info.yaml Reference — full field reference including databases section
- Logs & Monitoring — debugging database connection issues
- Getting Started — deploy your first service