Edge Computing with Vercel: Real-World Performance Gains

Our API endpoint in us-east-1 served users in Singapore with 280ms latency. After moving to Vercel Edge Functions, the same endpoint responds in 12ms from Singapore. That's not optimization—it's physics. The code runs closer to the user. Here's how edge computing transformed our application's performance.

The Latency Problem

Traditional serverless functions run in specific regions. A function deployed to AWS us-east-1 serves everyone from that location:

User in New York → us-east-1 → 20ms
User in London → us-east-1 → 80ms
User in Singapore → us-east-1 → 280ms
User in Sydney → us-east-1 → 320ms

Physics limits how fast data travels. Light in fiber optic cable moves at roughly 200,000 km/s. New York to Singapore is about 15,000 km, which works out to 75ms each way, so a round trip costs at least 150ms on the network alone, before any computation.
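
That floor is simple arithmetic; a quick sketch (the distances and the fiber-speed figure are the same approximations used above):

// Theoretical minimum round trip over fiber, ignoring routing and processing.
const FIBER_SPEED_KM_PER_MS = 200; // ~200,000 km/s

function minRoundTripMs(distanceKm: number): number {
  return (distanceKm / FIBER_SPEED_KM_PER_MS) * 2;
}

console.log(minRoundTripMs(15_000)); // New York -> Singapore: 150ms
console.log(minRoundTripMs(500));    // user -> nearby edge location: 5ms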

Edge computing solves this by running code at data centers distributed globally. Vercel operates edge locations across the world, running your code at the nearest point to each user.

Edge Functions vs Serverless Functions

Vercel offers both. The key differences:

| Aspect | Edge Functions | Serverless Functions |
|--------|---------------|---------------------|
| Runtime | V8 Isolates (like Workers) | Node.js |
| Cold start | < 5ms | 100-250ms |
| Location | Nearest edge | Specific region |
| Max duration | 30 seconds | 60 seconds |
| Max memory | 128MB | 1024MB |
| Node APIs | Subset (Web APIs) | Full |

Edge Functions use V8 isolates—the same technology as Cloudflare Workers. They start almost instantly because there's no Node.js runtime to boot.
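
In practice that means edge code sticks to Web APIs. A small sketch of a route that hashes a value with crypto.subtle, which is available at the edge while Node's crypto module is not:

// app/api/hash/route.ts
export const runtime = 'edge';

export async function GET() {
  const data = new TextEncoder().encode('hello');
  // crypto.subtle is a Web API, so it works in the edge runtime.
  const digest = await crypto.subtle.digest('SHA-256', data);
  const hex = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
  return Response.json({ sha256: hex });
}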

Creating Edge Functions

In the Next.js App Router, opt into the edge runtime by exporting a runtime config:

// app/api/geo/route.ts
export const runtime = 'edge';
 
export async function GET(request: Request) {
  const country = request.headers.get('x-vercel-ip-country') || 'Unknown';
  const city = request.headers.get('x-vercel-ip-city') || 'Unknown';
 
  return Response.json({
    message: `Hello from ${city}, ${country}!`,
    timestamp: Date.now(),
  });
}

The runtime = 'edge' directive tells Vercel to deploy this function to all edge locations.
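
Once deployed, the route responds from the nearest edge location; a quick way to inspect the output from any client (the URL is a placeholder):

// Call the route and log the geolocated response.
const res = await fetch('https://your-app.vercel.app/api/geo');
console.log(await res.json());
// e.g. { message: 'Hello from Singapore, SG!', timestamp: 1700000000000 }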

Geolocation and Personalization

Edge Functions receive geolocation headers automatically:

// app/api/pricing/route.ts
export const runtime = 'edge';
 
const REGIONAL_PRICING: Record<string, { currency: string; multiplier: number }> = {
  US: { currency: 'USD', multiplier: 1.0 },
  GB: { currency: 'GBP', multiplier: 0.8 },
  EU: { currency: 'EUR', multiplier: 0.9 },
  JP: { currency: 'JPY', multiplier: 110 },
  DEFAULT: { currency: 'USD', multiplier: 1.0 },
};
 
export async function GET(request: Request) {
  const country = request.headers.get('x-vercel-ip-country') || 'DEFAULT';
  const pricing = REGIONAL_PRICING[country] || REGIONAL_PRICING.DEFAULT;
 
  const basePrice = 99;
  const localPrice = basePrice * pricing.multiplier;
 
  return Response.json({
    price: localPrice,
    currency: pricing.currency,
    country,
  });
}

This runs at the edge, so users see localized pricing with minimal latency.

Edge Middleware for Routing

Middleware runs before every request, making it ideal for:

  • Authentication checks
  • A/B testing
  • Geographic redirects
  • Feature flags

// middleware.ts (or proxy.ts in Next.js 16)
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';
 
export function middleware(request: NextRequest) {
  // Vercel sets this header on every request; request.geo was removed in newer Next.js versions
  const country = request.headers.get('x-vercel-ip-country') || 'US';
 
  // Redirect EU users to EU-specific pages
  if (['DE', 'FR', 'IT', 'ES', 'NL'].includes(country)) {
    if (!request.nextUrl.pathname.startsWith('/eu')) {
      return NextResponse.redirect(new URL(`/eu${request.nextUrl.pathname}`, request.url));
    }
  }
 
  // Add custom header for analytics
  const response = NextResponse.next();
  response.headers.set('x-user-country', country);
  return response;
}
 
export const config = {
  matcher: ['/((?!api|_next/static|_next/image|favicon.ico).*)'],
};

Middleware always runs at the edge, regardless of your function configuration.
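
As one example from the list above, A/B test assignment fits naturally in middleware as a sticky cookie; a minimal sketch (the cookie name and bucket split are arbitrary):

// middleware.ts — assign each visitor to a bucket once, then keep it sticky.
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const existing = request.cookies.get('ab-bucket')?.value;
  const bucket = existing ?? (Math.random() < 0.5 ? 'control' : 'variant');

  const response = NextResponse.next();
  if (!existing) {
    // Persist the assignment so the user sees a consistent experience.
    response.cookies.set('ab-bucket', bucket, { maxAge: 60 * 60 * 24 * 30 });
  }
  response.headers.set('x-ab-bucket', bucket);
  return response;
}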

Edge Config for Feature Flags

Vercel Edge Config provides ultra-low-latency key-value storage readable from the edge:

// app/api/features/route.ts
import { get } from '@vercel/edge-config';
 
export const runtime = 'edge';
 
export async function GET() {
  const features = await get('features');
 
  return Response.json({
    features,
    timestamp: Date.now(),
  });
}

Edge Config reads complete in under 1ms because data replicates to all edge locations.

Setting up Edge Config:

# Install the package
npm install @vercel/edge-config
 
# Link to your project
vercel link
vercel env pull

Configure feature flags in the Vercel dashboard, and they propagate globally within seconds.
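
A common pattern is gating a single code path on one flag with a safe default; a minimal sketch, assuming the Edge Config store holds a boolean under a hypothetical checkout-v2 key:

// app/api/checkout/route.ts — gate a code path on an Edge Config flag.
import { get } from '@vercel/edge-config';

export const runtime = 'edge';

export async function GET() {
  // Fall back to the old path if the flag is missing or unreadable.
  const checkoutV2 = (await get<boolean>('checkout-v2')) ?? false;

  return Response.json({
    flow: checkoutV2 ? 'v2' : 'legacy',
  });
}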

Performance Comparison

I benchmarked the same logic as both serverless and edge:

// The function: validate input, query a KV store, return JSON
// (kvStore stands in for any HTTP-based key-value client)
async function handler(request: Request) {
  const body = await request.json();
  const userId = body.userId;
  const userData = await kvStore.get(userId);
  return Response.json({ user: userData });
}

Results from various global locations:

| Location | Serverless (us-east-1) | Edge Function |
|----------|------------------------|---------------|
| New York | 45ms | 8ms |
| London | 120ms | 11ms |
| Singapore | 290ms | 14ms |
| Sydney | 340ms | 12ms |
| São Paulo | 180ms | 15ms |

The edge version shows consistent sub-20ms responses globally, while serverless varies dramatically based on distance from the origin.
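
Numbers like these come from repeatedly timing requests from each location; a rough sketch of such a measurement loop, with a placeholder URL and payload:

// Measure median latency for an endpoint from the current machine.
async function measure(url: string, runs = 20): Promise<number> {
  const times: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fetch(url, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ userId: 'user-123' }),
    });
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(times.length / 2)]; // median
}

console.log(await measure('https://your-app.vercel.app/api/user'));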

Cold Start Elimination

Traditional serverless cold starts:

First request after idle: 250ms (cold start)
Subsequent requests: 45ms (warm)
After 5 min idle: 250ms (cold start again)

Vercel's Fluid Compute optimizes this:

First request: 12ms (bytecode cached)
Subsequent requests: 8ms
After idle: 15ms (predictive warming)

Fluid Compute uses bytecode caching and predictive warming to virtually eliminate cold starts. The function stays ready even during low-traffic periods.

When to Use Edge vs Serverless

Use Edge Functions for:

  • API routes that need global low latency
  • Authentication and session validation
  • Personalization based on location
  • Feature flag evaluation
  • A/B test assignment
  • Simple data transformations

Use Serverless Functions for:

  • Heavy computation (image processing, ML inference)
  • Database connections requiring persistent sockets
  • Full Node.js API requirements
  • Long-running operations (> 30 seconds)
  • Large memory requirements (> 128MB)

Hybrid Approach

Many applications benefit from both. Use edge for the fast path and serverless for heavy lifting:

// app/api/product/[id]/route.ts
// (edgeCache is a placeholder for an edge-compatible KV client, e.g. an HTTP-based Redis)
export const runtime = 'edge';
 
export async function GET(
  request: Request,
  { params }: { params: { id: string } }
) {
  // Quick cache check at the edge
  const cached = await edgeCache.get(params.id);
  if (cached) {
    return Response.json(JSON.parse(cached));
  }
 
  // Fall back to serverless for database query
  const response = await fetch(
    `${process.env.SERVERLESS_API}/products/${params.id}`
  );
  const product = await response.json();
 
  // Cache for next time
  await edgeCache.set(params.id, JSON.stringify(product), { ttl: 3600 });
 
  return Response.json(product);
}
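
The serverless half of that fallback is an ordinary Node.js route; a minimal sketch, with db standing in for whatever database client you use:

// app/api/products/[id]/route.ts — deployed as a regular serverless function.
import { db } from '@/lib/db'; // hypothetical database client

export const runtime = 'nodejs';

export async function GET(
  request: Request,
  { params }: { params: { id: string } }
) {
  // Full Node.js runtime: connection pooling and heavy queries are fine here.
  const product = await db.products.findById(params.id);
  return Response.json(product);
}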

Real-World Impact

After migrating our e-commerce checkout flow to edge:

Before (Serverless in us-east-1):

  • Average latency: 180ms globally
  • P99 latency: 450ms
  • Conversion rate: 3.2%

After (Edge Functions):

  • Average latency: 18ms globally
  • P99 latency: 45ms
  • Conversion rate: 3.9%

The 22% improvement in conversion correlates with the latency reduction. Users don't wait, so they don't abandon.

Limitations to Consider

Edge Functions have constraints:

No persistent connections: You can't maintain database connection pools. Use HTTP-based database APIs (Vercel Postgres, PlanetScale, Supabase).
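
For instance, Vercel Postgres works from the edge because queries travel over HTTP instead of a pooled socket; a minimal sketch, assuming a users table:

// app/api/user/route.ts — query Postgres from the edge over HTTP.
import { sql } from '@vercel/postgres';

export const runtime = 'edge';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const id = searchParams.get('id');

  const { rows } = await sql`SELECT id, name FROM users WHERE id = ${id}`;
  return Response.json(rows[0] ?? null);
}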

Limited Node.js APIs: File system, child processes, and some crypto APIs aren't available. Stick to Web APIs.

Size limits: 1MB request/response body, 128MB memory. Not suitable for large file processing.

30-second timeout: Long operations need different architecture.

For applications that fit within these constraints, the performance benefits are substantial. The code runs where users are, physics works in your favor, and the infrastructure scales automatically. That's the edge computing value proposition.