sherpa-onnx-tts Claude Skill
Local text-to-speech via sherpa-onnx (offline, no cloud)
2025/11/24
| name | sherpa-onnx-tts |
| description | Local text-to-speech via sherpa-onnx (offline, no cloud) |
| metadata | {"openclaw":{"emoji":"🗣️","os":["darwin","linux","win32"],"requires":{"env":["SHERPA_ONNX_RUNTIME_DIR","SHERPA_ONNX_MODEL_DIR"]},"install":[{"id":"download-runtime-macos","kind":"download","os":["darwin"],"url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-osx-universal2-shared.tar.bz2","archive":"tar.bz2","extract":true,"stripComponents":1,"targetDir":"runtime","label":"Download sherpa-onnx runtime (macOS)"},{"id":"download-runtime-linux-x64","kind":"download","os":["linux"],"url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-linux-x64-shared.tar.bz2","archive":"tar.bz2","extract":true,"stripComponents":1,"targetDir":"runtime","label":"Download sherpa-onnx runtime (Linux x64)"},{"id":"download-runtime-win-x64","kind":"download","os":["win32"],"url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-win-x64-shared.tar.bz2","archive":"tar.bz2","extract":true,"stripComponents":1,"targetDir":"runtime","label":"Download sherpa-onnx runtime (Windows x64)"},{"id":"download-model-lessac","kind":"download","url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-lessac-high.tar.bz2","archive":"tar.bz2","extract":true,"targetDir":"models","label":"Download Piper en_US lessac (high)"}]}} |
sherpa-onnx-tts
Local TTS using the sherpa-onnx offline CLI.
Install
- Download the runtime for your OS (extracts into ~/.openclaw/tools/sherpa-onnx-tts/runtime)
- Download a voice model (extracts into ~/.openclaw/tools/sherpa-onnx-tts/models)
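The install metadata above pins one runtime archive per OS (all version v1.12.23). A minimal sketch of selecting the right download URL by platform — the URLs are copied from the metadata table; the `runtime_url` helper itself is hypothetical, not part of the skill:

```python
import sys

# Runtime archive URLs per platform, copied from the skill's install metadata.
RUNTIME_URLS = {
    "darwin": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-osx-universal2-shared.tar.bz2",
    "linux": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-linux-x64-shared.tar.bz2",
    "win32": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-win-x64-shared.tar.bz2",
}

def runtime_url(platform: str = sys.platform) -> str:
    """Return the prebuilt runtime URL for the given platform key."""
    try:
        return RUNTIME_URLS[platform]
    except KeyError:
        raise RuntimeError(f"no prebuilt sherpa-onnx runtime listed for {platform!r}")
```

Note that the Linux and Windows archives are x64-only; other architectures would need a different release asset.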
Update ~/.openclaw/openclaw.json:
{
  skills: {
    entries: {
      "sherpa-onnx-tts": {
        env: {
          SHERPA_ONNX_RUNTIME_DIR: "~/.openclaw/tools/sherpa-onnx-tts/runtime",
          SHERPA_ONNX_MODEL_DIR: "~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high",
        },
      },
    },
  },
}
The wrapper lives in this skill folder. Run it directly, or add the wrapper to PATH:
export PATH="{baseDir}/bin:$PATH"
Usage
{baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Hello from local TTS."
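To call the wrapper from a script rather than a shell, the command line above can be assembled programmatically. A sketch, assuming the same flags as the usage example; `build_tts_command` is a hypothetical helper and `base_dir` stands in for the skill's `{baseDir}`:

```python
import os

def build_tts_command(text: str, out_path: str = "./tts.wav",
                      base_dir: str = ".") -> list[str]:
    """Build the argv for the wrapper, mirroring:
    {baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Hello from local TTS."
    Passing argv as a list (e.g. to subprocess.run) avoids shell quoting issues.
    """
    return [os.path.join(base_dir, "bin", "sherpa-onnx-tts"), "-o", out_path, text]
```

From there, `subprocess.run(build_tts_command("Hello from local TTS."), check=True)` would synthesize the WAV file, provided the runtime and model env vars are configured as above.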
Notes:
- Pick a different model from the sherpa-onnx tts-models release if you want another voice.
- If the model dir has multiple .onnx files, set SHERPA_ONNX_MODEL_FILE or pass --model-file.
- You can also pass --tokens-file or --data-dir to override the defaults.
- Windows: run node {baseDir}\bin\sherpa-onnx-tts -o tts.wav "Hello from local TTS."
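The multiple-.onnx note above boils down to a simple rule: with exactly one model file the choice is unambiguous, otherwise an explicit override is needed. A sketch of that logic (the `pick_model_file` helper is illustrative, not the wrapper's actual code):

```python
from pathlib import Path

def pick_model_file(model_dir: str) -> Path:
    """Auto-select the model when the directory holds a single .onnx file;
    otherwise require SHERPA_ONNX_MODEL_FILE / --model-file, as the note says."""
    candidates = sorted(Path(model_dir).glob("*.onnx"))
    if len(candidates) == 1:
        return candidates[0]
    raise RuntimeError(
        "ambiguous or empty model dir; set SHERPA_ONNX_MODEL_FILE or pass "
        "--model-file (found: " + ", ".join(c.name for c in candidates) + ")"
    )
```

The same pattern applies to the tokens file and espeak-ng data dir, which is why --tokens-file and --data-dir exist as overrides.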