Isofold Deployment Topology

Isofold is designed to run efficiently in cloud, edge, and self-hosted environments. This page describes how traffic flows through the system and how components interact.


Request Flow

At a high level, a typical request path looks like this:

Client → Isofold Proxy → Rewrite Engine → Cost/Verify Layer → Data Warehouse → Response

Each layer can be tuned or observed independently depending on deployment mode.
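The flow above can be sketched as a chain of stages. This is a minimal illustration only; the stage names, signatures, and behavior are assumptions, not Isofold's actual API.

```python
# Hypothetical sketch of the Isofold request path:
# Client -> Proxy -> Rewrite Engine -> Cost/Verify Layer -> Warehouse.
# All names and signatures here are illustrative.

def rewrite(query: str) -> str:
    # The rewrite engine normalizes the incoming query before execution.
    return query.strip()

def cost_and_verify(query: str) -> str:
    # The cost/verify layer checks the rewritten query before it is run.
    assert query, "empty query rejected"
    return query

def execute(query: str) -> dict:
    # Stand-in for the warehouse call; returns a placeholder result.
    return {"query": query, "rows": []}

def handle_request(query: str) -> dict:
    # Each stage can be tuned or observed independently.
    return execute(cost_and_verify(rewrite(query)))

result = handle_request("  SELECT 1  ")
```

Because each stage is a separate step, metrics or hooks can be attached per layer without touching the others.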


Hosted Mode (Isofold Cloud)

In hosted mode, traffic flows through an edge-deployed proxy (e.g. Fly.io) that routes to the appropriate warehouse:

  • Minimal added latency (typically < 10 ms)
  • Global traffic routed to nearest edge
  • Internal routing and queueing within Isofold’s infrastructure

[ Client ]
     ↓
[ Fly Edge Proxy ]
     ↓
[ Rewrite Engine + Cost Model ]
     ↓
[ BigQuery / Snowflake / Aurora ]

Each tenant has an isolated proxy URL and scoped configuration.
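Tenant isolation can be pictured as each tenant addressing its own proxy endpoint. The URL scheme and hostname pattern below are purely illustrative assumptions, not Isofold's actual addressing format.

```python
# Hypothetical tenant-scoped proxy endpoint builder; the hostname
# pattern and region codes are assumptions for illustration only.

def tenant_proxy_url(tenant: str, region: str = "iad") -> str:
    # Each tenant gets an isolated proxy hostname; configuration is
    # scoped to that tenant behind it.
    return f"https://{tenant}.{region}.isofold-proxy.example.com"

url = tenant_proxy_url("acme")  # https://acme.iad.isofold-proxy.example.com
```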


Self-Hosted Mode (VPC / Local)

When running inside your own network:

  • The proxy is deployed in your VPC or on-prem
  • Rewrite + verification happen inside the container
  • You control outbound access, logging, and observability

[ Client ]
     ↓
[ Local Isofold Proxy ]
     ↓
[ Internal Warehouse (e.g. Aurora in VPC) ]

This model is ideal for:

  • Compliance-sensitive workloads
  • Low-latency, high-throughput environments
  • Custom warehouse integrations
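A self-hosted deployment is typically driven by environment configuration that keeps all traffic and logs inside your network. The variable names (`ISOFOLD_*`) and defaults below are illustrative assumptions, not documented settings.

```python
import os

# Hypothetical self-hosted proxy configuration; every variable name and
# default here is an assumption for illustration.

def load_config(env=os.environ) -> dict:
    return {
        # Warehouse endpoint inside your VPC; traffic never leaves the network.
        "warehouse_dsn": env.get(
            "ISOFOLD_WAREHOUSE_DSN", "postgres://aurora.internal:5432/analytics"
        ),
        # Outbound telemetry disabled by default for compliance-sensitive workloads.
        "telemetry": env.get("ISOFOLD_TELEMETRY", "off") == "on",
        # Local log destination that you control.
        "log_path": env.get("ISOFOLD_LOG_PATH", "/var/log/isofold/proxy.log"),
    }

cfg = load_config({})
```

Defaulting telemetry to off mirrors the point above: in self-hosted mode, outbound access is opt-in rather than opt-out.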

Multi-Region Support

When deployed with Fly.io or GCP global load balancers:

  • Requests are routed to the closest live instance
  • All components are stateless and horizontally scalable
  • Proxy-level metrics can be pushed to Prometheus, GCP, or Datadog
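Proxy-level metrics can be scraped in the standard Prometheus text exposition format. The sketch below uses only the standard library; the metric name and labels are illustrative assumptions, not Isofold's actual metric schema.

```python
# Sketch of per-region request metrics rendered in the Prometheus text
# exposition format. The metric name "isofold_proxy_requests_total" and
# its labels are assumptions for illustration.
from collections import Counter

request_counts = Counter()

def record_request(region: str, status: int) -> None:
    # Count requests per (region, status) pair.
    request_counts[(region, status)] += 1

def render_metrics() -> str:
    # One labeled sample per series, preceded by a TYPE hint,
    # matching the Prometheus text format.
    lines = ["# TYPE isofold_proxy_requests_total counter"]
    for (region, status), n in sorted(request_counts.items()):
        lines.append(
            f'isofold_proxy_requests_total{{region="{region}",status="{status}"}} {n}'
        )
    return "\n".join(lines)

record_request("iad", 200)
record_request("iad", 200)
record_request("fra", 500)
metrics_page = render_metrics()
```

Because every component is stateless, each instance can expose its own counters and the aggregation happens entirely on the Prometheus (or Datadog/GCP) side.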

Next: Learn more about Security & Compliance