# openai-whisper-api Claude Skill
Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
2025/11/24
| field | value |
| --- | --- |
| name | openai-whisper-api |
| description | Transcribe audio via OpenAI Audio Transcriptions API (Whisper). |
| homepage | https://platform.openai.com/docs/guides/speech-to-text |
| metadata | {"openclaw":{"emoji":"☁️","requires":{"bins":["curl"],"env":["OPENAI_API_KEY"]},"primaryEnv":"OPENAI_API_KEY"}} |
## OpenAI Whisper API (curl)
Transcribe an audio file via OpenAI’s /v1/audio/transcriptions endpoint.
### Quick start

```shell
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```

Defaults:

- Model: `whisper-1`
- Output: `<input>.txt`
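Under the hood, the script presumably wraps a multipart upload to the public Audio Transcriptions endpoint. A minimal sketch of that request, assuming the defaults above (the exact flags `transcribe.sh` passes are not shown here, so `build_transcribe_cmd` is a hypothetical helper that only assembles the command for inspection):

```shell
# Sketch of the curl call transcribe.sh likely issues. Assumptions: the
# endpoint URL and form-field names follow the public OpenAI Audio
# Transcriptions API, and the script forwards its defaults unchanged.
build_transcribe_cmd() {
  audio="$1"
  model="${2:-whisper-1}"
  # Assemble (but do not run) the request, so it can be inspected first.
  echo "curl -sS https://api.openai.com/v1/audio/transcriptions" \
       "-H \"Authorization: Bearer \$OPENAI_API_KEY\"" \
       "-F file=@$audio -F model=$model -F response_format=text"
}
build_transcribe_cmd /path/to/audio.m4a
```

With `response_format=text` the response body is the plain transcript, which maps to the default `<input>.txt` output.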
### Useful flags

```shell
{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
```
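For reference, the API's default JSON response has this shape, so a `--json` transcript file would presumably look like the following (assumption: the script writes the response body through unchanged):

```json
{
  "text": "Transcribed speech goes here."
}
```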
### API key

Set `OPENAI_API_KEY`, or configure it in `~/.openclaw/openclaw.json`:

```json
{
  "skills": {
    "openai-whisper-api": {
      "apiKey": "OPENAI_KEY_HERE"
    }
  }
}
```
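If you go the environment-variable route, a quick pre-flight check avoids an opaque 401 from the API mid-run (a sketch; `check_key` is a hypothetical helper, not part of the skill):

```shell
# Pre-flight check (sketch): fail early with a clear message when the key
# is missing, rather than letting curl hit a 401 partway through.
check_key() {
  if [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "OPENAI_API_KEY is not set" >&2
    return 1
  fi
  echo "OPENAI_API_KEY is set"
}
check_key || true
```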