Documentation
Welcome to Unified AI. This guide walks you through creating an API key, making your first request, and understanding rate limits.
Quickstart
Three steps to your first response:
- Create a free account.
- Generate an API key from your dashboard.
- Send a request to https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1/chat/completions.
curl https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_KADEGATE_KEY" \
-d '{
"model": "deepseek-v4-flash",
"messages": [{"role":"user","content":"Hello!"}]
}'
Authentication
All requests must include an Authorization header with your secret API key:
Authorization: Bearer uai_xxxxxxxxxxxxxxxxxxxxxxxx
API keys are tied to your account and inherit your plan limits. Treat them like passwords — never commit them to source control.
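The header above can be set once in a small helper. A minimal sketch using only Python's standard library; the `auth_headers` and `chat` helpers and their names are illustrative, not part of an official SDK:

```python
import json
import urllib.request

API_BASE = "https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1"

def auth_headers(api_key: str) -> dict:
    # Every request needs the bearer token plus a JSON content type.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def chat(api_key: str, model: str, content: str) -> dict:
    # POST a single-turn chat completion and return the parsed JSON reply.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode()
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers=auth_headers(api_key),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Load the key from an environment variable or a secrets manager rather than hard-coding it, for the same reason you keep it out of source control.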
API Keys
Manage keys from the Keys page in your dashboard. You can:
- Create new keys per project
- Revoke keys instantly
- View per-key usage history
Models
Pass the model ID exactly as shown below; there is no aliasing or remapping.
| Model ID | Context |
|---|---|
| deepseek-v4-flash | 1M |
| google/gemma-4-26b-a4b-it | 262K |
| meta-llama/llama-4-maverick | 1M |
| minimax/minimax-m2.7 | 196K |
| mistralai/mistral-small-2603 | 262K |
| qwen/qwen3.5-35b-a3b | 262K |
| x-ai/grok-4.1-fast | 2M |
| xiaomi/mimo-v2-flash | 262K |
| z-ai/glm-4.7-flash | 262K |
| Kimi-K2.6 | 262K |
| FW-MiniMax-M2.5 | 196K |
Rate limits
Limits are enforced per user, per 5-hour window:
| Plan | Requests / 5-hour window | Token cap |
|---|---|---|
| Free | 50 | Unlimited |
| Pro | 1,000 | Unlimited |
| Max | 2,000 | Unlimited |
When you exceed your limit, the API responds with 429 Too Many Requests. Back off and retry after the number of seconds given in the Retry-After response header.
Errors
Errors follow OpenAI's response shape:
{
"error": {
"message": "Invalid API key provided.",
"type": "invalid_request_error",
"code": "invalid_api_key"
}
}
Common status codes:
- 401 — missing or invalid key
- 403 — model not allowed on your plan
- 429 — rate limit exceeded
- 500/502 — upstream provider error
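A small helper can separate retryable failures from hard errors. The mapping below just restates the status codes above; the function itself is illustrative:

```python
import json

# Of the codes documented above, only these are worth retrying;
# 401 and 403 will fail the same way every time.
RETRYABLE = {429, 500, 502}

def classify_error(status: int, body: str) -> tuple[str, bool]:
    """Return (error code from the response body, whether to retry)."""
    try:
        code = json.loads(body)["error"]["code"]
    except (json.JSONDecodeError, KeyError, TypeError):
        code = "unknown"
    return code, status in RETRYABLE
```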
OpenAI-Compatible integrations
Because Unified AI speaks the OpenAI API format, any tool that supports a custom base URL works out of the box. Choose your tool below.
Claude Code
Claude Code supports custom OpenAI-compatible providers via environment variables or the --model flag.
Option A — environment variables
export ANTHROPIC_BASE_URL=https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1
export ANTHROPIC_API_KEY=YOUR_KADEGATE_KEY
claude --model deepseek-v4-flash
Option B — claude config
claude config set apiBaseUrl https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1
claude config set apiKey YOUR_KADEGATE_KEY
Then run normally: claude. Pass --model deepseek-v4-flash to select any model from the list above.
Cline (VS Code extension)
Cline has first-class support for OpenAI-compatible APIs.
- Open VS Code → Cline sidebar → Settings (gear icon).
- Set API Provider to OpenAI Compatible.
- Set Base URL to https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1.
- Set API Key to your Unified AI key.
- Set Model to any ID from the models table, e.g. deepseek-v4-flash.
- Click Save.
# Equivalent cline settings.json fragment
{
"cline.apiProvider": "openai",
"cline.openAiBaseUrl": "https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1",
"cline.openAiApiKey": "YOUR_KADEGATE_KEY",
"cline.openAiModelId": "deepseek-v4-flash"
}
OpenCode
OpenCode reads provider config from ~/.config/opencode/config.json.
{
"providers": {
"Unified AI": {
"type": "openai",
"baseUrl": "https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1",
"apiKey": "YOUR_KADEGATE_KEY"
}
},
"model": "Unified AI/deepseek-v4-flash"
}
Then launch OpenCode — it will pick up the provider automatically. Use opencode --model "Unified AI/deepseek-v4-flash" to override at runtime.
Continue (VS Code / JetBrains)
Add a custom model block to ~/.continue/config.json:
{
"models": [
{
"title": "Unified AI",
"provider": "openai",
"model": "deepseek-v4-flash",
"apiBase": "https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1",
"apiKey": "YOUR_KADEGATE_KEY"
}
]
}
Reload VS Code / the IDE. The model will appear in the Continue model picker. You can add multiple entries — one per model you want to switch between.
OpenAI CLI / shell
Point the official openai Python CLI at Unified AI with two environment variables:
export OPENAI_API_BASE=https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1
export OPENAI_API_KEY=YOUR_KADEGATE_KEY

# Chat
openai api chat_completions.create \
  -m deepseek-v4-flash \
  -g user "Hello, world"
Or with the newer openai v1 SDK:
from openai import OpenAI
client = OpenAI(
base_url="https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1",
api_key="YOUR_KADEGATE_KEY",
)
r = client.chat.completions.create(
model="deepseek-v4-flash",
messages=[{"role": "user", "content": "Hello"}],
)
print(r.choices[0].message.content)
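Streaming works through the same endpoint by passing stream=True. Assuming the standard OpenAI server-sent-events format, the content deltas can be parsed with the standard library alone; this parser is a sketch, not an official client:

```python
import json

def parse_sse_chunks(lines):
    """Yield content deltas from an OpenAI-style SSE stream.

    Each event line looks like:
        data: {"choices":[{"delta":{"content":"Hi"}}]}
    and the stream ends with:
        data: [DONE]
    """
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```

With the openai SDK you get the same deltas from the iterator returned by `client.chat.completions.create(..., stream=True)`, so you rarely need to parse the wire format yourself.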
Aider
Aider supports OpenAI-compatible endpoints via --openai-api-base:
aider \
  --openai-api-base https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1 \
  --openai-api-key YOUR_KADEGATE_KEY \
  --model deepseek-v4-flash
Or set it permanently in ~/.aider.conf.yml:
openai-api-base: https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1
openai-api-key: YOUR_KADEGATE_KEY
model: deepseek-v4-flash
Aider will use streaming automatically. All models in the list above are compatible.
OpenClaw
OpenClaw uses a YAML config file. Add Unified AI as a backend:
# ~/.openclaw/config.yaml
backends:
- name: Unified AI
type: openai_compatible
base_url: https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1
api_key: YOUR_KADEGATE_KEY
default_model: deepseek-v4-flash
active_backend: Unified AI
Start OpenClaw — it will route all requests through Unified AI. Switch models with /model <id> inside a session.
Hermes Agent
Hermes Agent reads provider settings from its config or environment:
export HERMES_OPENAI_BASE_URL=https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1
export HERMES_OPENAI_API_KEY=YOUR_KADEGATE_KEY
export HERMES_MODEL=deepseek-v4-flash
hermes run
Or in hermes.config.json:
{
"provider": "openai",
"openai": {
"baseUrl": "https://transcribe.h0yx2rtus9gfkf.flashpanel.link/api/v1",
"apiKey": "YOUR_KADEGATE_KEY",
"model": "deepseek-v4-flash"
}
}
Hermes supports tool calling and streaming — both work transparently through Unified AI.
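A tool-calling request follows the standard OpenAI function-calling schema. A sketch of the payload an agent would send, where get_weather is a made-up example tool:

```python
def tool_call_request(model: str, prompt: str) -> dict:
    # OpenAI-style function-calling payload; "get_weather" is a
    # hypothetical tool used only to show the schema.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }
```

When the model decides to call the tool, the response carries a tool_calls entry instead of plain content; the agent executes the tool and sends the result back as a "tool" role message.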