# retellai-reliability-patterns — Claude Skill

Implement Retell AI reliability patterns including circuit breakers, idempotency, and graceful degradation.

1.9k Stars · 259 Forks · Updated 2025/10/10
| Field | Value |
|---|---|
| name | retellai-reliability-patterns |
| description | Retell AI reliability patterns — AI voice agent and phone call automation. Use when working with Retell AI for voice agents, phone calls, or telephony. Trigger with phrases like "retell reliability patterns", "retellai-reliability-patterns", "voice agent". |
| allowed-tools | Read, Write, Edit, Bash(npm:*), Bash(curl:*), Grep |
| version | 2.0.0 |
| license | MIT |
| author | Jeremy Longshore <jeremy@intentsolutions.io> |
| tags | ["saas","retellai","voice","telephony","ai-agents"] |
| compatible-with | claude-code, codex, openclaw |
# Retell AI Reliability Patterns

## Overview

Reliability patterns for Retell AI, a voice agent and telephony platform: circuit breakers, idempotency, and graceful degradation for production call workflows.
## Prerequisites

- Completed `retellai-install-authsetup`
## Instructions

### Step 1: SDK Pattern

```typescript
import Retell from 'retell-sdk';

const retell = new Retell({ apiKey: process.env.RETELL_API_KEY! });

const agents = await retell.agent.list();
console.log(`Agents: ${agents.length}`);
```
### Output

- The number of configured agents, confirming the API key is valid and the Retell API is reachable
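With the SDK connected, the circuit-breaker pattern named in this skill's description can be sketched as a small wrapper that opens after repeated failures and rejects calls during a cooldown window. This is a minimal illustration, not part of the Retell SDK; the threshold and cooldown values are assumptions to tune for your workload.

```typescript
// Minimal circuit breaker: after `threshold` consecutive failures the
// breaker opens and rejects calls immediately until `cooldownMs` elapses,
// then allows one probe call (half-open) to test recovery.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 3, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error('circuit open');
      }
      // Cooldown elapsed: half-open, let one probe call through.
    }
    try {
      const result = await fn();
      this.failures = 0; // any success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

You would wrap each outbound SDK call, e.g. `breaker.call(() => retell.agent.list())`, so that when Retell is unavailable your service fails fast instead of piling up hung requests.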
## Error Handling
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid API key | Check RETELL_API_KEY |
| 429 Rate Limited | Too many requests | Implement backoff |
| 400 Bad Request | Invalid parameters | Check API documentation |
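The "implement backoff" remedy for 429 responses can be sketched as a generic retry wrapper with exponential backoff and full jitter. The `err.status` shape is an assumption; adapt the retryable check to the error type the SDK actually throws.

```typescript
// Retry transient errors (429 and 5xx) with exponential backoff and jitter.
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseMs = 250,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      lastErr = err;
      const status = err?.status ?? err?.response?.status; // assumed shape
      const retryable = status === 429 || (status >= 500 && status < 600);
      if (!retryable || attempt === maxAttempts - 1) throw err;
      // Full jitter: random delay in [0, baseMs * 2^attempt)
      await sleep(Math.random() * baseMs * 2 ** attempt);
    }
  }
  throw lastErr;
}
```

Usage: `await withBackoff(() => retell.agent.list())`. Non-retryable errors such as 400 and 401 are rethrown immediately, since retrying them only burns quota.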
## Next Steps

See related Retell AI skills for more workflows.
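The description also names idempotency: retried requests should not place duplicate phone calls. A minimal sketch caches in-flight results by a caller-supplied key; the in-process `Map` is a stand-in for production storage such as Redis, and the key scheme is an assumption.

```typescript
// Idempotent execution: the first call for a key runs `fn`; concurrent or
// repeated calls with the same key reuse the same result. A failure evicts
// the key so the operation can be retried.
const idempotencyCache = new Map<string, Promise<unknown>>();

function idempotent<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const existing = idempotencyCache.get(key);
  if (existing) return existing as Promise<T>;
  const p = fn().catch((err) => {
    idempotencyCache.delete(key); // allow a retry after failure
    throw err;
  });
  idempotencyCache.set(key, p);
  return p;
}
```

For example, keying an outbound call by a stable request ID (`idempotent(`call:${orderId}`, () => ...)`) means a webhook retry or client retransmit resolves to the already-placed call instead of dialing the customer twice.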