vercel-reliability-patterns (Claude Skill)

Implement Vercel reliability patterns including circuit breakers, idempotency, and graceful degradation.

2025/10/10

Install & Download

Download and extract to ~/.claude/skills/

name: vercel-reliability-patterns
description: Implement reliability patterns for Vercel deployments including circuit breakers, retry logic, and graceful degradation. Use when building fault-tolerant serverless functions, implementing retry strategies, or adding resilience to production Vercel services. Trigger with phrases like "vercel reliability", "vercel circuit breaker", "vercel resilience", "vercel fallback", "vercel graceful degradation".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>
compatible-with: claude-code, codex, openclaw
tags: ["saas", "vercel", "reliability", "resilience", "patterns"]

Vercel Reliability Patterns

Overview

Build fault-tolerant Vercel deployments with circuit breakers, retry logic, graceful degradation, and instant rollback integration. Addresses reliability at two levels: function-level resilience (protecting against dependency failures) and deployment-level resilience (protecting against bad deploys).

Prerequisites

  • Vercel project deployed to production
  • Understanding of failure modes in serverless
  • External dependencies (databases, APIs) identified

Instructions

Step 1: Circuit Breaker for External Dependencies

// lib/circuit-breaker.ts
type CircuitState = 'CLOSED' | 'OPEN' | 'HALF_OPEN';

class CircuitBreaker {
  private state: CircuitState = 'CLOSED';
  private failures = 0;
  private lastFailure = 0;
  private readonly threshold: number;
  private readonly resetTimeMs: number;

  constructor(threshold = 5, resetTimeMs = 30000) {
    this.threshold = threshold;
    this.resetTimeMs = resetTimeMs;
  }

  async call<T>(fn: () => Promise<T>, fallback: () => T): Promise<T> {
    if (this.state === 'OPEN') {
      if (Date.now() - this.lastFailure > this.resetTimeMs) {
        this.state = 'HALF_OPEN';
      } else {
        console.warn('Circuit OPEN — returning fallback');
        return fallback();
      }
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      console.error('Circuit breaker caught error:', error);
      return fallback();
    }
  }

  private onSuccess(): void {
    this.failures = 0;
    this.state = 'CLOSED';
  }

  private onFailure(): void {
    this.failures++;
    this.lastFailure = Date.now();
    if (this.failures >= this.threshold) {
      this.state = 'OPEN';
      console.warn(`Circuit OPENED after ${this.failures} failures`);
    }
  }
}

// Usage in a serverless function:
const dbCircuit = new CircuitBreaker(3, 30000);

export default async function handler(req, res) {
  const users = await dbCircuit.call(
    () => db.user.findMany({ take: 10 }),
    () => [] // Fallback: empty array when DB is down
  );
  res.json({ users, degraded: users.length === 0 });
}

Important for serverless: circuit breaker state lives in memory inside a single function instance, so each instance (and each cold start) gets an independent circuit. For circuit state shared across all instances, persist it in Vercel KV or Edge Config.
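One way to share circuit state globally is to keep the failure count and last-failure timestamp in a shared store. The sketch below assumes only a generic get/set key-value interface; `InMemoryStore` is a stand-in used for illustration, where a production version would back `KVStore` with Vercel KV's `kv.get`/`kv.set`:

```typescript
// Sketch: circuit state shared across function instances via a KV store.
// KVStore is a stand-in interface; in production, back it with Vercel KV.
interface KVStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// In-memory store used here purely for illustration.
class InMemoryStore implements KVStore {
  private data = new Map<string, string>();
  async get(key: string) { return this.data.get(key) ?? null; }
  async set(key: string, value: string) { this.data.set(key, value); }
}

interface SharedCircuitState {
  failures: number;
  lastFailure: number; // epoch ms of the most recent failure
}

async function callWithSharedCircuit<T>(
  store: KVStore,
  key: string,
  fn: () => Promise<T>,
  fallback: () => T,
  threshold = 5,
  resetTimeMs = 30000
): Promise<T> {
  const raw = await store.get(key);
  const state: SharedCircuitState = raw
    ? JSON.parse(raw)
    : { failures: 0, lastFailure: 0 };

  // Open if the threshold was reached recently; every instance sees this.
  if (state.failures >= threshold && Date.now() - state.lastFailure < resetTimeMs) {
    return fallback();
  }

  try {
    const result = await fn();
    await store.set(key, JSON.stringify({ failures: 0, lastFailure: 0 }));
    return result;
  } catch {
    await store.set(
      key,
      JSON.stringify({ failures: state.failures + 1, lastFailure: Date.now() })
    );
    return fallback();
  }
}
```

Note that the read-modify-write on the counter is not atomic; for strict counting under concurrency you would use an atomic increment (e.g. Redis INCR), but approximate counts are usually good enough for circuit breaking.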

Step 2: Retry with Exponential Backoff

// lib/retry.ts
interface RetryOptions {
  maxRetries?: number;
  baseDelayMs?: number;
  maxDelayMs?: number;
  retryOn?: (error: unknown) => boolean;
}

async function withRetry<T>(
  fn: () => Promise<T>,
  options: RetryOptions = {}
): Promise<T> {
  const { maxRetries = 3, baseDelayMs = 200, maxDelayMs = 5000, retryOn } = options;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      if (retryOn && !retryOn(error)) throw error;

      const delay = Math.min(
        baseDelayMs * Math.pow(2, attempt) + Math.random() * 200,
        maxDelayMs
      );
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Unreachable');
}

// Usage:
const data = await withRetry(
  () => fetch('https://api.example.com/data').then(r => {
    if (!r.ok) throw new Error(`HTTP ${r.status}`);
    return r.json();
  }),
  {
    maxRetries: 3,
    retryOn: (err) => {
      // Only retry on network errors and 5xx, not 4xx
      if (err instanceof TypeError) return true; // fetch network error
      return err instanceof Error && /^HTTP 5\d\d$/.test(err.message);
    },
  }
);

Step 3: Graceful Degradation with Stale Cache

// api/products.ts — serve stale data when primary source is down
import { kv } from '@vercel/kv';

export default async function handler(req, res) {
  const cacheKey = 'products:latest';

  try {
    // Try primary data source
    const freshData = await fetchProductsFromDB();

    // Update cache with fresh data (@vercel/kv serializes values automatically)
    await kv.set(cacheKey, freshData, { ex: 3600 });

    res.setHeader('x-data-source', 'live');
    res.json(freshData);
  } catch (error) {
    // Primary failed — serve stale cache
    const cachedData = await kv.get(cacheKey);

    if (cachedData) {
      console.warn('Serving stale cache — primary source unavailable');
      res.setHeader('x-data-source', 'cache-stale');
      res.json(cachedData);
    } else {
      // No cache available — return degraded response
      res.setHeader('x-data-source', 'degraded');
      res.status(503).json({
        error: 'Service temporarily unavailable',
        degraded: true,
      });
    }
  }
}

Step 4: Idempotency Keys for Mutations

// api/orders/route.ts — idempotent order creation
import { NextRequest, NextResponse } from 'next/server';
import { db } from '@/lib/db';

export async function POST(request: NextRequest) {
  const idempotencyKey = request.headers.get('idempotency-key');
  if (!idempotencyKey) {
    return NextResponse.json(
      { error: 'idempotency-key header required' },
      { status: 400 }
    );
  }

  // Check if this request was already processed
  const existing = await db.idempotencyRecord.findUnique({
    where: { key: idempotencyKey },
  });

  if (existing) {
    // Return the cached response — same status and body
    return NextResponse.json(JSON.parse(existing.responseBody), {
      status: existing.responseStatus,
      headers: { 'x-idempotent-replay': 'true' },
    });
  }

  // Process the order
  const body = await request.json();
  const order = await db.order.create({ data: body });

  // Cache the response for idempotency
  const responseBody = JSON.stringify({ order });
  await db.idempotencyRecord.create({
    data: { key: idempotencyKey, responseStatus: 201, responseBody },
  });

  return NextResponse.json({ order }, { status: 201 });
}
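Note that the findUnique-then-create sequence above leaves a small race window: two concurrent requests with the same key can both pass the lookup and both create an order. The usual fix is to claim the key first behind a unique constraint and only then do the work. Here is a minimal sketch of that ordering, with an in-memory store whose `claim` method stands in for an INSERT that fails on a duplicate key (in Prisma, a `create` that throws error code P2002):

```typescript
// Sketch: claim the idempotency key BEFORE processing, so concurrent
// duplicates cannot both run the side effect. The Map stands in for a
// table with a UNIQUE constraint on `key`.
type IdempotencyRecord =
  | { status: 'pending' }
  | { status: 'done'; code: number; body: string };

class IdempotencyStore {
  private records = new Map<string, IdempotencyRecord>();

  // Claim the key: true if this request owns it, false if already taken.
  // In a real database this is an INSERT that fails on a duplicate key.
  claim(key: string): boolean {
    if (this.records.has(key)) return false;
    this.records.set(key, { status: 'pending' });
    return true;
  }

  complete(key: string, code: number, body: string): void {
    this.records.set(key, { status: 'done', code, body });
  }

  get(key: string): IdempotencyRecord | undefined {
    return this.records.get(key);
  }
}

function handleMutation(
  store: IdempotencyStore,
  key: string,
  process: () => { code: number; body: string }
): { code: number; body: string } {
  if (!store.claim(key)) {
    const prior = store.get(key)!;
    if (prior.status === 'pending') {
      // First request is still running; tell the caller to retry later.
      return { code: 409, body: 'request already in flight' };
    }
    return { code: prior.code, body: prior.body }; // replay stored response
  }
  const result = process();
  store.complete(key, result.code, result.body);
  return result;
}
```

The `handleMutation` helper and store are illustrative names, not part of any Vercel API; the point is the ordering: claim, process, record.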

Step 5: Health Check with Dependency Status

// api/health/route.ts
import { kv } from '@vercel/kv';
import { db } from '@/lib/db';

export const dynamic = 'force-dynamic';

interface HealthCheck {
  name: string;
  check: () => Promise<boolean>;
}

const checks: HealthCheck[] = [
  {
    name: 'database',
    check: async () => {
      await db.$queryRaw`SELECT 1`;
      return true;
    },
  },
  {
    name: 'cache',
    check: async () => {
      await kv.ping();
      return true;
    },
  },
  {
    name: 'external-api',
    check: async () => {
      const r = await fetch('https://api.example.com/health', { signal: AbortSignal.timeout(3000) });
      return r.ok;
    },
  },
];

export async function GET() {
  const results: Record<string, 'ok' | 'error'> = {};

  await Promise.all(
    checks.map(async ({ name, check }) => {
      try {
        await check();
        results[name] = 'ok';
      } catch {
        results[name] = 'error';
      }
    })
  );

  const healthy = Object.values(results).every(v => v === 'ok');
  return Response.json(
    { status: healthy ? 'healthy' : 'degraded', checks: results },
    { status: healthy ? 200 : 503 }
  );
}

Step 6: Deployment-Level Resilience

# Instant rollback on health check failure (CI integration)
DEPLOY_URL=$(vercel --prod)
HEALTH=$(curl -s -o /dev/null -w "%{http_code}" "$DEPLOY_URL/api/health")

if [ "$HEALTH" != "200" ]; then
  echo "Health check failed ($HEALTH) — rolling back"
  vercel rollback
  exit 1
fi
echo "Deployment healthy"

Reliability Patterns Summary

| Pattern | Protects Against | Vercel Implementation |
| --- | --- | --- |
| Circuit breaker | Dependency degradation | In-function state or Edge Config |
| Retry + backoff | Transient failures | withRetry wrapper |
| Stale cache | Primary source outage | Vercel KV with TTL |
| Idempotency | Duplicate mutations | Database record per request |
| Health checks | Bad deployments | /api/health + rollback automation |
| Instant rollback | Deployment regression | vercel rollback in CI |

Output

  • Circuit breaker protecting all external dependency calls
  • Retry logic with exponential backoff for transient failures
  • Graceful degradation serving stale data when primary fails
  • Idempotency preventing duplicate mutations
  • Automated health check + rollback pipeline

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| Circuit opens too aggressively | Threshold too low | Increase failure threshold (e.g., 5 → 10) |
| Retry causes duplicate side effects | No idempotency | Add idempotency-key to mutation endpoints |
| Stale cache expired | TTL too short or never populated | Increase TTL, seed cache on deploy |
| Health check false positive | Timeout too short | Increase AbortSignal timeout to 5s |
| Rollback reverts good deployment | Flaky health check | Add retry to health check before rollback |

Next Steps

For policy guardrails, see vercel-policy-guardrails.
