Claude Skills Library
Discover and install awesome Claude skills and agent workflows to supercharge your AI coding experience.
test-coverage-improver
Improve test coverage in the OpenAI Agents Python repository: run `make coverage`, inspect coverage artifacts, identify low-coverage files, propose high-impact tests, and confirm with the user before writing tests.
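The "inspect coverage artifacts, identify low-coverage files" step can be sketched as a small parser over a Cobertura-style `coverage.xml` (the format pytest-cov's XML report emits). The file names, sample rates, and the 50% threshold below are illustrative, not taken from the repository:

```python
import xml.etree.ElementTree as ET

def low_coverage_files(coverage_xml: str, threshold: float = 0.5):
    """Return (filename, line_rate) pairs below threshold, lowest first."""
    root = ET.fromstring(coverage_xml)
    results = []
    for cls in root.iter("class"):  # each <class> is one source file
        rate = float(cls.get("line-rate", "0"))
        if rate < threshold:
            results.append((cls.get("filename"), rate))
    return sorted(results, key=lambda item: item[1])

# Hypothetical excerpt of a coverage.xml produced by `make coverage`
sample = """<coverage>
  <packages><package><classes>
    <class filename="src/agents/run.py" line-rate="0.91"/>
    <class filename="src/agents/tracing.py" line-rate="0.34"/>
    <class filename="src/agents/voice.py" line-rate="0.12"/>
  </classes></package></packages>
</coverage>"""

print(low_coverage_files(sample))
# → [('src/agents/voice.py', 0.12), ('src/agents/tracing.py', 0.34)]
```

Sorting ascending surfaces the highest-impact targets first, which is where the skill proposes tests before confirming with the user.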
pr-draft-summary
Create a PR title and draft description after substantive code changes are finished.
openai-knowledge
Use when working with the OpenAI API (Responses API) or OpenAI platform features (tools, streaming, Realtime API, auth, models, rate limits, MCP) and you need authoritative, up-to-date documentation (schemas, examples, limits, edge cases).
final-release-review
Perform a release-readiness review by locating the previous release tag from remote tags and auditing the diff (e.g., v1.2.3...<commit>) for breaking changes, regressions, improvement opportunities, and risks before releasing openai-agents-python.
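The "locate the previous release tag" step needs a numeric version sort, since a plain string sort mis-orders tags like v1.10.0 and v1.9.2. A minimal sketch, assuming tags follow the vX.Y.Z convention (the tag values here are made up):

```python
def latest_release_tag(tags):
    """Pick the highest vX.Y.Z tag from a list of remote tag names."""
    def parse(tag):
        if not tag.startswith("v"):
            return None
        parts = tag[1:].split(".")
        if len(parts) != 3 or not all(p.isdigit() for p in parts):
            return None
        return tuple(int(p) for p in parts)  # numeric tuple compares correctly

    versions = [(parse(t), t) for t in tags if parse(t) is not None]
    return max(versions)[1] if versions else None

# Tag names as listed by, e.g., `git ls-remote --tags origin`
print(latest_release_tag(["v1.9.2", "v1.10.0", "v1.2.3", "not-a-tag"]))
# → v1.10.0
```

The winning tag then anchors the audit range, e.g. `git diff v1.10.0...<commit>`.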
examples-auto-run
Run Python examples in auto mode with logging, rerun helpers, and background control.

docs-sync
Analyze the main branch implementation and configuration to find missing, incorrect, or outdated documentation in docs/.
code-change-verification
Run the mandatory verification stack when changes affect runtime code, tests, or build/test behavior in the OpenAI Agents Python repository.
testing-python
Write and evaluate effective Python tests using pytest.
reviewing-code
Review code for quality, maintainability, and correctness.
foo-skill
A dummy skill that returns a fixed response.
fetch-unresolved-comments
Fetch unresolved PR review comments using GitHub GraphQL API, filtering out resolved and outdated feedback.
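The filtering step relies on the `isResolved` and `isOutdated` fields that GitHub's GraphQL API exposes on each `PullRequestReviewThread`. A minimal sketch of the post-fetch filter, using a hand-built response shaped like a `reviewThreads` query result (the comment bodies are invented):

```python
# Hypothetical decoded response for a query like:
#   pullRequest(number: N) { reviewThreads(first: 100) {
#     nodes { isResolved isOutdated comments(first: 1) { nodes { body } } } } }
response = {
    "data": {"repository": {"pullRequest": {"reviewThreads": {"nodes": [
        {"isResolved": True, "isOutdated": False,
         "comments": {"nodes": [{"body": "nit: rename this"}]}},
        {"isResolved": False, "isOutdated": True,
         "comments": {"nodes": [{"body": "this line no longer exists"}]}},
        {"isResolved": False, "isOutdated": False,
         "comments": {"nodes": [{"body": "please add a test"}]}},
    ]}}}}
}

def unresolved_comments(resp):
    """Keep only threads that are neither resolved nor outdated."""
    threads = resp["data"]["repository"]["pullRequest"]["reviewThreads"]["nodes"]
    return [
        t["comments"]["nodes"][0]["body"]
        for t in threads
        if not t["isResolved"] and not t["isOutdated"]
    ]

print(unresolved_comments(response))
# → ['please add a test']
```

Filtering on both flags is the point: resolved threads are done, and outdated ones refer to code that has since changed.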
copilot
Hand off a task to GitHub Copilot.