Platform Architecture
How the Tawa platform is built, what each service does, and why they are separate.
The Big Picture
Tawa is a deployment platform made up of six core services. Each one owns a single responsibility and communicates with the others over internal APIs. No service shares a database with another.
Developer
    |
    | tawa deploy
    v
Builder ──────> Koko (registry)
    |                |
    |                v
    |        Janus (gateway) ──> Your Service
    |                |
    |                v
    |        Bio-ID (identity)
    |
    v
Wallet (gas gate) ──> Cloudflare (DNS)
Core Services
Builder
The build pipeline. When you run tawa deploy, the builder clones your repo, generates a Dockerfile (if needed), builds the image, pushes it to the container registry, and orchestrates the full deploy sequence: database provisioning, OAuth setup, Helm deploy, Koko registration, and DNS configuration.
| URL | builder.tawa.insureco.io |
| Owns | Build records, deploy logs, managed secrets |
| Talks to | Koko, Wallet, Bio-ID, Cloudflare, K8s API |
Koko
The service registry, configuration store, and database proxy. Every deployed service is registered in Koko with its upstream URL, routes, health endpoint, and metadata. Janus reads from Koko to know how to route traffic. The builder writes to Koko after every successful deploy. Koko also exposes an authenticated database proxy API that lets developers inspect their provisioned MongoDB databases remotely — retrieving database stats, listing collections, sampling documents, and running read-only queries.
| URL | Internal only (cluster DNS) |
| Owns | Service registry, config store, feature flags, domain bindings, database proxy |
| Talks to | MongoDB, Redis (pub/sub notifications to Janus) |
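The database proxy can be sketched as a plain authenticated HTTP API. The endpoint paths, base URL, and payload shapes below are illustrative assumptions — only the capabilities (stats, collection listing, document sampling, read-only queries) come from this page.

```python
import urllib.request

# Assumed cluster DNS name for Koko — the real internal URL is not documented here.
KOKO_BASE = "http://koko.koko-prod.svc.cluster.local"

def koko_proxy_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request against Koko's (hypothetical) db-proxy path."""
    return urllib.request.Request(
        f"{KOKO_BASE}/api/db-proxy{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Example: list collections in a service's provisioned MongoDB database.
req = koko_proxy_request("/my-svc/collections", token="<oauth-token>")
print(req.full_url)
```

Because the proxy is read-only, a leaked token cannot mutate another service's data — it can only inspect it.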
Janus
The API gateway. All external API traffic enters through Janus, which handles authentication enforcement, request routing, and gas metering. Janus polls Koko for the current route table and proxies requests to the correct upstream service inside the cluster.
| URL | api.tawa.insureco.io |
| Owns | Gas metering events, request logs |
| Talks to | Koko (routes), Bio-ID (token verification), Wallet (gas charges) |
Bio-ID
The identity provider. Handles user registration, login, OAuth2 flows, MFA, and session management. Every service gets an OAuth client auto-provisioned on deploy — developers never create credentials manually. Bio-ID also exposes a token introspection endpoint at POST /api/auth/introspect that validates tokens and returns the user’s organization context. The CLI and platform services use this to verify authentication and resolve org membership.
| URL | bio.tawa.insureco.io |
| Owns | User identities, OAuth clients, sessions, MFA secrets |
| Talks to | MongoDB (user data), Redis (sessions) |
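The introspection endpoint mentioned above can be called like any OAuth2-style introspection API. The path is from this page; the request body field (`token`) and the shape of the response are assumptions for illustration.

```python
import json
import urllib.request

def introspect_request(token: str) -> urllib.request.Request:
    """Build a POST to Bio-ID's token introspection endpoint (body shape assumed)."""
    body = json.dumps({"token": token}).encode()
    return urllib.request.Request(
        "https://bio.tawa.insureco.io/api/auth/introspect",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = introspect_request("<access-token>")
# resp = urllib.request.urlopen(req)  # would return token validity + org context
```

Janus, the CLI, and other platform services all funnel token checks through this one endpoint, so org membership is resolved in exactly one place.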
Wallet
The token economy. Every org has a wallet with a gas balance. The builder checks the wallet before each deploy (the “deploy gate”) to ensure 3 months of hosting reserve. Janus charges transaction gas for API calls. See Gas & Wallets for token economics.
| URL | iec-wallet.tawa.insureco.io |
| Owns | Wallet balances, gas ledger, token purchases |
| Talks to | MongoDB (ledger) |
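The deploy gate reduces to simple arithmetic: the wallet balance must cover three months of hosting. The function name and units below are illustrative; the 3-month reserve rule is from this page.

```python
RESERVE_MONTHS = 3  # hosting reserve required by the deploy gate (per docs)

def passes_deploy_gate(balance_gas: float, monthly_hosting_gas: float) -> bool:
    """Would the builder allow a deploy, given current balance and monthly cost?"""
    return balance_gas >= RESERVE_MONTHS * monthly_hosting_gas

print(passes_deploy_gate(300, 90))  # 300 gas covers a 270-gas reserve
print(passes_deploy_gate(200, 90))  # 200 gas does not
```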
tawa-web
The platform console. This is the site you are reading right now. It provides the developer portal, documentation, PlugINS store, and service dashboard. It reads from other services but owns no platform state — if tawa-web goes down, deploys and routing still work.
| URL | tawa.insureco.io |
| Owns | UI only — no platform state |
| Talks to | Koko (service list), Wallet (balances), Bio-ID (auth) |
Why Separate Services?
Each service is independently deployable and has its own failure domain. This matters because:
- Builder depends on Koko at deploy time. If Koko were inside tawa-web and the frontend went down, all deploys across the platform would break.
- Janus depends on Koko for routing. Route discovery is a hot path — it must be fast and independent of any UI.
- Different scaling profiles. Koko handles high-frequency API calls from the builder and Janus. tawa-web handles browser sessions. Mixing them would let a traffic spike in one starve the other.
- Data ownership. Each service owns its database collections. No service reads another’s database directly — they use APIs. This keeps the data model encapsulated and avoids tight coupling.
Deploy Flow
Here is what happens step by step when you run tawa deploy --prod:
- CLI authenticates with Bio-ID and sends a build request to the Builder. Authentication supports two paths: browser-based OAuth (default for interactive use) and a stored credentials file at ~/.tawa/credentials for CI pipelines and LLM agents — the CLI auto-re-authenticates using stored credentials when a refresh token expires
- Builder checks the deploy gate — calls Wallet to verify your org has enough gas reserve (3 months of hosting cost)
- Builder clones your repo, reads catalog-info.yaml, and generates a Dockerfile if needed
- Docker image is built and pushed to the container registry
- Builder provisions database secrets (MONGODB_URI, REDIS_URL) as K8s Secrets
- Builder creates/updates an OAuth client in Bio-ID (BIO_CLIENT_ID, BIO_CLIENT_SECRET)
- Builder resolves internal dependencies to cluster DNS URLs via Koko
- Helm deploys the service to Kubernetes with all env vars injected
- Builder registers the service in Koko (upstream URL, routes, metadata)
- Builder creates a DNS record in Cloudflare pointing to the cluster ingress
- Janus picks up the new routes from Koko and begins proxying traffic
Every step is idempotent. Deploying again updates rather than creating duplicates.
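The sequence above can be condensed into an orchestration sketch. Every step name here is a hypothetical stand-in for a real builder call; only the ordering comes from this page. Each stub just records its step so the sequence is visible.

```python
steps: list[str] = []

def deploy() -> list[str]:
    """Record the deploy sequence in order (real builder calls out to each service)."""
    for step in (
        "authenticate (Bio-ID)",
        "deploy gate (Wallet)",
        "clone repo + generate Dockerfile",
        "build + push image",
        "provision DB secrets",
        "upsert OAuth client (Bio-ID)",
        "resolve dependencies (Koko)",
        "helm deploy",
        "register service (Koko)",
        "create DNS record (Cloudflare)",
    ):
        steps.append(step)
    return steps  # Janus then picks up the new routes from Koko on its own

print(" -> ".join(deploy()))
```

Note that the gate check runs before any expensive work (clone, build, push), so an underfunded org fails fast.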
Traffic Flow
Once deployed, traffic reaches your service through two paths:
Direct traffic (most services)
User -> Cloudflare -> NGINX Ingress -> Your Service Pod
         (DNS)      (TLS termination)   (K8s cluster)
Your service gets its own subdomain (e.g. my-svc.tawa.insureco.io) and Cloudflare routes directly to the cluster ingress. This is the default for all services.
API gateway traffic (opt-in via routes)
Client -> api.tawa.insureco.io -> Janus -> Your Service Pod
            (Cloudflare)       (auth, gas)   (K8s cluster)
If your catalog-info.yaml defines routes with auth: required, those routes are registered in Janus. Clients call the gateway URL, Janus verifies the token with Bio-ID, charges gas via Wallet, and proxies to your service.
Data Ownership
| Service | Database | What it stores |
|---|---|---|
| Koko | MongoDB + Redis | Service registry, configs, feature flags, domain bindings, deployments |
| Bio-ID | MongoDB + Redis | User identities, OAuth clients, sessions, MFA |
| Wallet | MongoDB | Org wallets, gas ledger, token purchases |
| Builder | MongoDB + Redis | Build records, logs, managed secrets (encrypted), build queue |
| Janus | MongoDB | Gas metering events, request audit logs |
| tawa-web | None | Stateless — reads from other services via API |
No service accesses another’s database. All cross-service communication uses HTTP APIs. This means any service can be replaced, scaled, or redeployed independently without affecting the others.
Environments
Each service is deployed into its own Kubernetes namespace per environment. Namespaces are named {service}-{environment}:
| Environment | Namespace | URL | Purpose |
|---|---|---|---|
| Sandbox | my-svc-sandbox | my-svc.sandbox.tawa.insureco.io | Development and testing |
| UAT | my-svc-uat | my-svc.uat.tawa.insureco.io | User acceptance testing |
| Production | my-svc-prod | my-svc.tawa.insureco.io | Live traffic |
Each environment gets its own database connection strings, OAuth credentials, and secrets. Data is fully isolated between environments.
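The naming conventions in the table above are mechanical enough to express as a helper. The {service}-{environment} namespace rule and the URL patterns are from this page; the function itself is illustrative.

```python
def env_coords(service: str, environment: str) -> tuple[str, str]:
    """Derive (namespace, hostname) for a service in a given environment."""
    namespace = f"{service}-{environment}"
    # Production drops the environment label from the hostname; others keep it.
    host = (f"{service}.tawa.insureco.io" if environment == "prod"
            else f"{service}.{environment}.tawa.insureco.io")
    return namespace, host

print(env_coords("my-svc", "sandbox"))
print(env_coords("my-svc", "prod"))
```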
Key Principles
- Convention over configuration. The builder infers almost everything from catalog-info.yaml and your framework. No Dockerfiles, Helm charts, or K8s manifests needed.
- Everything is idempotent. Deploying again updates. Database provisioning is additive. OAuth clients are upserted. DNS records are reconciled.
- Fail independently. If Janus goes down, direct service URLs still work. If tawa-web goes down, deploys still work. If Wallet is unreachable, the deploy gate is fail-open.
- No shared databases. Services communicate via APIs. This makes it possible to evolve, scale, or replace any service without a coordinated migration.
- Secrets are managed, not manual. Database URIs, OAuth credentials, and custom secrets are all injected by the builder. Developers never create K8s secrets or Helm values by hand.
Further Reading
- Getting Started — deploy your first service
- catalog-info.yaml Reference — all configuration options
- Databases — provisioning and connection details
- Gas & Wallets — token economics and the deploy gate
- OAuth Integration — Bio-ID auto-provisioning