Building an API with Cloudflare Workers and D1: From Zero to Production

This tutorial walks through building a production API on Cloudflare Workers with a D1 database, KV caching, rate limiting, full-text search, and request logging. The patterns come from a real production deployment – not a toy example.

By the end you will have: a TypeScript Worker handling multiple API routes, a D1 database with FTS5 full-text search, KV-based caching and rate limiting, CORS support, request logging with IP hashing for privacy, and a deployment to Cloudflare’s global network.
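To make that end state concrete before we start, here is a minimal sketch of the kind of handler the tutorial builds toward: one route backed by a D1 prepared statement, returning JSON with a permissive CORS header. The `DB` binding name, the `articles` table, and its columns are placeholders for illustration, and the types assume `@cloudflare/workers-types`.

```typescript
// Minimal sketch of the target shape. The binding name (DB) and the articles
// table/columns are illustrative placeholders, not the tutorial's final schema.
export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/api/articles" && request.method === "GET") {
      // D1 uses prepared statements; .all() resolves to { results, meta, ... }.
      const { results } = await env.DB.prepare(
        "SELECT id, title, published_at FROM articles ORDER BY published_at DESC LIMIT 20"
      ).all();

      return Response.json(results, {
        headers: { "Access-Control-Allow-Origin": "*" }, // CORS, covered in more depth later
      });
    }

    return new Response("Not found", { status: 404 });
  },
};
```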

CDN and Edge Computing Patterns

A CDN (Content Delivery Network) caches content at edge locations close to users, reducing latency and offloading traffic from origin servers. Edge computing extends this by running custom code at those edge locations, enabling request transformation, authentication, A/B testing, and dynamic content generation without round-tripping to an origin server.
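As a rough illustration of running custom code at the edge, the sketch below assigns an A/B test bucket and rewrites the request path before forwarding to the origin, so the origin never has to do that work itself. The cookie name, paths, and 50/50 split are made up for the example.

```typescript
// Hypothetical edge A/B test: bucket the visitor, rewrite the path for the
// variant, forward to the origin, and persist the bucket in a cookie.
export default {
  async fetch(request: Request): Promise<Response> {
    const cookies = request.headers.get("Cookie") ?? "";
    let bucket = /ab-bucket=(control|variant)/.exec(cookies)?.[1];
    if (!bucket) {
      bucket = Math.random() < 0.5 ? "control" : "variant";
    }

    const url = new URL(request.url);
    if (bucket === "variant" && url.pathname === "/") {
      url.pathname = "/landing-b"; // illustrative variant page
    }

    const originResponse = await fetch(new Request(url.toString(), request));

    // Copy the headers (responses from fetch are immutable), then set the cookie.
    const headers = new Headers(originResponse.headers);
    headers.append("Set-Cookie", `ab-bucket=${bucket}; Path=/; Max-Age=86400`);
    return new Response(originResponse.body, {
      status: originResponse.status,
      headers,
    });
  },
};
```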

CDN Cache Fundamentals

Cache-Control Headers

The origin server controls CDN caching behavior through HTTP headers. Getting these right is the single most impactful CDN optimization.
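As a quick illustration, the sketch below shows two response policies with split browser/CDN lifetimes; the specific values are examples rather than recommendations. `s-maxage` governs shared caches such as the CDN, `max-age` governs the browser, and `stale-while-revalidate` lets the cache serve a stale copy while it refetches in the background.

```typescript
// Example Cache-Control policies; tune the numbers for your own content.
function htmlResponse(body: string): Response {
  return new Response(body, {
    headers: {
      "Content-Type": "text/html; charset=utf-8",
      // Browser: 60s. CDN: 1h. Serve stale for up to a day while revalidating.
      "Cache-Control": "public, max-age=60, s-maxage=3600, stale-while-revalidate=86400",
    },
  });
}

function fingerprintedAssetResponse(body: ArrayBuffer, contentType: string): Response {
  return new Response(body, {
    headers: {
      "Content-Type": contentType,
      // Content-hashed filenames never change, so cache them essentially forever.
      "Cache-Control": "public, max-age=31536000, immutable",
    },
  });
}
```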

Choosing a Deployment Platform for APIs and MVPs: Cloudflare vs AWS vs Vercel vs Fly.io

Picking a deployment platform early in a project matters more than most teams realize. The platform determines your cost floor, your scaling ceiling, your deployment workflow, and how much operational overhead you carry. Switching later is possible but never free – you are always migrating data, rewriting config, and updating DNS.

This guide compares four platforms that cover the most common deployment scenarios: Cloudflare (Workers + D1 + Pages), AWS (Lambda + API Gateway + RDS + S3), Vercel (Pro + serverless functions), and Fly.io (Apps + Postgres). Each has a genuine sweet spot. None is best for everything.

Cloud Vendor Product Matrix: Comparing Cloudflare, AWS, Azure, and GCP

Choosing between cloud vendors requires mapping equivalent services across providers. AWS has 200+ services. Azure has 200+. GCP has 100+. Cloudflare has 20+, but they are tightly integrated and edge-native. This article maps the services that matter for most applications – compute, serverless, databases, storage, networking, and observability – across all four vendors, with pricing, availability, and portability notes for each.

How to Use This Matrix

Each section maps equivalent products across vendors, then provides:

Cloudflare Workers as a Full-Stack Platform: Workers, D1, KV, R2, and Pages

Cloudflare started as a CDN and DDoS protection service. It is now a complete development platform. Workers provide serverless compute at 330+ edge locations. D1 provides a serverless SQLite database. KV provides a globally distributed key-value store. R2 provides S3-compatible object storage with zero egress fees. Pages provides static site hosting with git-integrated deploys. Durable Objects provide stateful, single-threaded coordination primitives. Queues provide async message processing between Workers.
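To make that list concrete, here is a sketch of how the pieces look from inside a single Worker: each product surfaces as a binding on `env`. The binding names, keys, and query below are placeholders, and the types assume `@cloudflare/workers-types`.

```typescript
// Sketch: one Worker touching several platform products via bindings.
// Binding names (DB, CACHE, BUCKET, JOBS) must match your wrangler config.
export interface Env {
  DB: D1Database;      // D1: serverless SQLite
  CACHE: KVNamespace;  // KV: globally distributed key-value store
  BUCKET: R2Bucket;    // R2: S3-compatible object storage
  JOBS: Queue;         // Queues: async message processing
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // Try R2 first; on a hit, enqueue an async job and stream the object back.
    const object = await env.BUCKET.get(key);
    if (object) {
      await env.JOBS.send({ event: "download", key });
      return new Response(object.body);
    }

    // Otherwise fall back to D1 and note the miss in KV for five minutes.
    const row = await env.DB.prepare("SELECT 1 AS ok").first<{ ok: number }>();
    await env.CACHE.put("last-miss", key, { expirationTtl: 300 });
    return Response.json({ ok: row?.ok === 1, miss: key });
  },
};
```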

Zero-Egress Architecture with Cloudflare R2: Eliminating Data Transfer Costs

Every major cloud provider charges you to download your own data. AWS S3 charges $0.09/GB. Google Cloud Storage charges $0.12/GB. Azure Blob charges $0.087/GB. These egress fees are the most unpredictable line item on cloud bills – they scale with success. The more users download your data, the more you pay.

Cloudflare R2 charges $0 for egress. Zero. Unlimited. Every download is free, whether it is 1 GB or 100 TB. R2 exposes an S3-compatible API, so existing tools and SDKs work without changes. This single pricing difference changes how you architect storage, serving, and cross-cloud data flow.
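As a rough sketch of that compatibility, the stock AWS SDK can talk to R2 just by pointing it at an R2 endpoint. The endpoint format, bucket name, and environment variable names below are assumptions for illustration; check your own account for the exact values.

```typescript
// Hypothetical setup: the standard AWS S3 SDK aimed at R2. Only the endpoint
// and credentials differ from a normal S3 client; the calls are unchanged.
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Writes are billed per operation; reads carry no egress charge.
await r2.send(new PutObjectCommand({ Bucket: "assets", Key: "report.pdf", Body: "hello" }));
const obj = await r2.send(new GetObjectCommand({ Bucket: "assets", Key: "report.pdf" }));
```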