Platform Architecture

How the Tawa platform is built, what each service does, and why they are separate.

The Big Picture

Tawa is a deployment platform made up of six core services. Each one owns a single responsibility and communicates with the others over internal APIs. No service shares a database with another.

Developer
  |
  |  tawa deploy
  v
Builder ──────> Koko (registry)
  |               |
  |               v
  |            Janus (gateway) ──> Your Service
  |               |
  |               v
  |            Bio-ID (identity)
  |
  v
Wallet (gas gate) ──> Cloudflare (DNS)

Core Services

Builder

The build pipeline. When you run tawa deploy, the builder clones your repo, generates a Dockerfile (if needed), builds the image, pushes it to the container registry, and orchestrates the full deploy sequence: database provisioning, OAuth setup, Helm deploy, Koko registration, and DNS configuration.

URL:       builder.tawa.insureco.io
Owns:      Build records, deploy logs, managed secrets
Talks to:  Koko, Wallet, Bio-ID, Cloudflare, K8s API

Koko

The service registry, configuration store, and database proxy. Every deployed service is registered in Koko with its upstream URL, routes, health endpoint, and metadata. Janus reads from Koko to know how to route traffic. The builder writes to Koko after every successful deploy. Koko also exposes an authenticated database proxy API that lets developers inspect their provisioned MongoDB databases remotely — retrieving database stats, listing collections, sampling documents, and running read-only queries.

URL:       Internal only (cluster DNS)
Owns:      Service registry, config store, feature flags, domain bindings, database proxy
Talks to:  MongoDB, Redis (pub/sub notifications to Janus)
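The database proxy can be exercised from any HTTP client. The sketch below shows what a read-only client might look like; the proxy itself is real, but the endpoint paths, bearer-token auth, and response shapes are illustrative assumptions, not a documented contract.

```python
"""Illustrative client for Koko's database proxy API (paths are assumptions)."""
import json
import urllib.request


class KokoDbProxy:
    """Minimal read-only client for a service's provisioned MongoDB."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _url(self, service: str, suffix: str) -> str:
        # Hypothetical path layout: /api/db/<service>/<operation>
        return f"{self.base_url}/api/db/{service}/{suffix}"

    def _get(self, url: str) -> dict:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {self.token}"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def stats(self, service: str) -> dict:
        """Database stats for a service's provisioned MongoDB."""
        return self._get(self._url(service, "stats"))

    def collections(self, service: str) -> dict:
        """List the service's collections."""
        return self._get(self._url(service, "collections"))

    def sample(self, service: str, collection: str, n: int = 5) -> dict:
        """Sample a few documents from one collection."""
        return self._get(
            self._url(service, f"collections/{collection}/sample?limit={n}")
        )
```

Because the proxy is read-only and authenticated, developers can inspect production data without holding raw MongoDB credentials.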

Janus

The API gateway. All external API traffic enters through Janus, which handles authentication enforcement, request routing, and gas metering. Janus polls Koko for the current route table and proxies requests to the correct upstream service inside the cluster.

URL:       api.tawa.insureco.io
Owns:      Gas metering events, request logs
Talks to:  Koko (routes), Bio-ID (token verification), Wallet (gas charges)

Bio-ID

The identity provider. Handles user registration, login, OAuth2 flows, MFA, and session management. Every service gets an OAuth client auto-provisioned on deploy — developers never create credentials manually. Bio-ID also exposes a token introspection endpoint at POST /api/auth/introspect that validates tokens and returns the user’s organization context. The CLI and platform services use this to verify authentication and resolve org membership.

URL:       bio.tawa.insureco.io
Owns:      User identities, OAuth clients, sessions, MFA secrets
Talks to:  MongoDB (user data), Redis (sessions)
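Calling the introspection endpoint might look like the sketch below. The path `POST /api/auth/introspect` is documented above; the JSON request body and the response fields are assumptions modeled on standard OAuth2 token introspection (RFC 7662).

```python
"""Illustrative call to Bio-ID token introspection (body shape is assumed)."""
import json
import urllib.request

BIO_ID_BASE = "https://bio.tawa.insureco.io"


def build_introspect_request(token: str) -> urllib.request.Request:
    """Build (but do not send) the introspection request."""
    body = json.dumps({"token": token}).encode()  # field name "token" is an assumption
    return urllib.request.Request(
        f"{BIO_ID_BASE}/api/auth/introspect",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def introspect(token: str) -> dict:
    """Validate a token and return the user's organization context.

    An active response might look like (illustrative only):
      {"active": true, "sub": "user-123", "org": "acme"}
    """
    with urllib.request.urlopen(build_introspect_request(token)) as resp:
        return json.load(resp)
```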

Wallet

The token economy. Every org has a wallet with a gas balance. The builder checks the wallet before each deploy (the “deploy gate”) to ensure 3 months of hosting reserve. Janus charges transaction gas for API calls. See Gas & Wallets for token economics.

URL:       iec-wallet.tawa.insureco.io
Owns:      Wallet balances, gas ledger, token purchases
Talks to:  MongoDB (ledger)

tawa-web

The platform console. This is the site you are reading right now. It provides the developer portal, documentation, PlugINS store, and service dashboard. It reads from other services but owns no platform state — if tawa-web goes down, deploys and routing still work.

URL:       tawa.insureco.io
Owns:      UI only — no platform state
Talks to:  Koko (service list), Wallet (balances), Bio-ID (auth)

Why Separate Services?

Each service is independently deployable and has its own failure domain. This matters because:

  • Builder depends on Koko at deploy time. If Koko were inside tawa-web and the frontend went down, all deploys across the platform would break.
  • Janus depends on Koko for routing. Route discovery is a hot path — it must be fast and independent of any UI.
  • Different scaling profiles. Koko handles high-frequency API calls from the builder and Janus. tawa-web handles browser sessions. Mixing them would let a traffic spike in one starve the other.
  • Data ownership. Each service owns its database collections. No service reads another’s database directly — they use APIs. This keeps the data model encapsulated and avoids tight coupling.

Deploy Flow

Here is what happens step by step when you run tawa deploy --prod:

  1. CLI authenticates with Bio-ID and sends a build request to the Builder. Authentication supports two paths: browser-based OAuth (the default for interactive use) and a stored credentials file at ~/.tawa/credentials for CI pipelines and LLM agents. With stored credentials, the CLI re-authenticates automatically when a refresh token expires.
  2. Builder checks the deploy gate — calls Wallet to verify your org has enough gas reserve (3 months of hosting cost)
  3. Builder clones your repo, reads catalog-info.yaml, generates a Dockerfile if needed
  4. Docker image is built and pushed to the container registry
  5. Builder provisions database secrets (MONGODB_URI, REDIS_URL) as K8s Secrets
  6. Builder creates/updates an OAuth client in Bio-ID (BIO_CLIENT_ID, BIO_CLIENT_SECRET)
  7. Builder resolves internal dependencies to cluster DNS URLs via Koko
  8. Helm deploys the service to Kubernetes with all env vars injected
  9. Builder registers the service in Koko (upstream URL, routes, metadata)
  10. Builder creates a DNS record in Cloudflare pointing to the cluster ingress
  11. Janus picks up the new routes from Koko and begins proxying traffic

Every step is idempotent. Deploying again updates rather than creating duplicates.
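The idempotency comes from keying every resource on a stable identity and upserting. The toy in-memory registry below illustrates the idea; Koko's real API is HTTP and its record shape is not shown here, but the keyed-upsert behavior is the same.

```python
"""Toy stand-in for Koko's registry, showing why re-deploys don't duplicate."""


class KokoRegistry:
    def __init__(self):
        self._services: dict[str, dict] = {}

    def register(self, name: str, upstream: str, routes: list[str]) -> None:
        # Keyed by service name, so a second deploy overwrites the
        # previous record instead of appending a duplicate entry.
        self._services[name] = {"upstream": upstream, "routes": routes}

    def lookup(self, name: str) -> dict:
        return self._services[name]


registry = KokoRegistry()
# First deploy registers the service (upstream URL is illustrative).
registry.register("my-svc", "http://my-svc.my-svc-prod.svc:8080", ["/api/items"])
# Second deploy with an added route updates the same record in place.
registry.register(
    "my-svc", "http://my-svc.my-svc-prod.svc:8080", ["/api/items", "/api/orders"]
)
```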

Traffic Flow

Once deployed, traffic reaches your service through two paths:

Direct traffic (most services)

User -> Cloudflare -> NGINX Ingress -> Your Service Pod
         (DNS)      (TLS termination)    (K8s cluster)

Your service gets its own subdomain (e.g. my-svc.tawa.insureco.io) and Cloudflare routes directly to the cluster ingress. This is the default for all services.

API gateway traffic (opt-in via routes)

Client -> api.tawa.insureco.io -> Janus -> Your Service Pod
            (Cloudflare)          (auth, gas)   (K8s cluster)

If your catalog-info.yaml defines routes with auth: required, those routes are registered in Janus. Clients call the gateway URL, Janus verifies the token with Bio-ID, charges gas via Wallet, and proxies to your service.
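A minimal sketch of such a route definition is below. Only `routes` and `auth: required` are described on this page; the exact field names, nesting, and the `auth: none` value are assumptions.

```yaml
# Illustrative catalog-info.yaml fragment (field layout is assumed).
routes:
  - path: /api/orders
    auth: required   # registered in Janus; token verified via Bio-ID, gas via Wallet
  - path: /healthz
    auth: none       # served via the direct subdomain, bypassing the gateway
```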

Data Ownership

Service    Database          What it stores
Koko       MongoDB + Redis   Service registry, configs, feature flags, domain bindings, deployments
Bio-ID     MongoDB + Redis   User identities, OAuth clients, sessions, MFA
Wallet     MongoDB           Org wallets, gas ledger, token purchases
Builder    MongoDB + Redis   Build records, logs, managed secrets (encrypted), build queue
Janus      MongoDB           Gas metering events, request audit logs
tawa-web   None              Stateless — reads from other services via API

No service accesses another’s database. All cross-service communication uses HTTP APIs. This means any service can be replaced, scaled, or redeployed independently without affecting the others.

Environments

Each service is deployed into its own Kubernetes namespace per environment. Namespaces are named {service}-{environment}:

Environment   Namespace        URL                               Purpose
Sandbox       my-svc-sandbox   my-svc.sandbox.tawa.insureco.io   Development and testing
UAT           my-svc-uat       my-svc.uat.tawa.insureco.io       User acceptance testing
Production    my-svc-prod      my-svc.tawa.insureco.io           Live traffic

Each environment gets its own database connection strings, OAuth credentials, and secrets. Data is fully isolated between environments.
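Both the namespace and the hostname follow directly from the table above: namespaces are {service}-{environment}, and production drops the environment segment from the URL. A small sketch (helper names are ours, not a platform API):

```python
"""Illustrative derivation of per-environment namespaces and hostnames."""

BASE_DOMAIN = "tawa.insureco.io"


def namespace(service: str, environment: str) -> str:
    """Kubernetes namespace: {service}-{environment}."""
    return f"{service}-{environment}"


def service_url(service: str, environment: str) -> str:
    """Public hostname; production omits the environment segment."""
    if environment == "prod":
        return f"{service}.{BASE_DOMAIN}"
    return f"{service}.{environment}.{BASE_DOMAIN}"
```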

Key Principles

  • Convention over configuration. The builder infers almost everything from catalog-info.yaml and your framework. No Dockerfiles, Helm charts, or K8s manifests needed.
  • Everything is idempotent. Deploying again updates. Database provisioning is additive. OAuth clients are upserted. DNS records are reconciled.
  • Fail independently. If Janus goes down, direct service URLs still work. If tawa-web goes down, deploys still work. If Wallet is unreachable, the deploy gate is fail-open.
  • No shared databases. Services communicate via APIs. This makes it possible to evolve, scale, or replace any service without a coordinated migration.
  • Secrets are managed, not manual. Database URIs, OAuth credentials, and custom secrets are all injected by the builder. Developers never create K8s secrets or Helm values by hand.

Further Reading