Admin API Reference
The Admin API provides complete system configuration capabilities for the owner account, including Provider key pool management, model mapping, Level routing, pricing override, and broadcast notifications. All endpoints require owner privileges.
Base Information
Base URL: https://api.xaixapi.com
Authentication: All requests require owner account API Key
Authorization: Bearer sk-Xvs...
Permission: Owner account only (isOwner=true)
Frontend Console: Admin Console / Production
Endpoint Overview
| Module | Endpoint | Description |
|---|---|---|
| Provider Key Management | GET/POST/PUT/DELETE /x-keys | Manage upstream AI Provider key pool |
| Provider Key Management | GET /x-conf | View keys, mappings, and sleeping status |
| Provider Key Management | POST /x-conf | Batch configuration management (cascades to all descendants) |
| System Configuration | GET/PUT/DELETE /x-config | Manage model mapping, Level routing, rate limits etc. |
| Broadcast Notifications | POST/DELETE /x-news | Publish system or targeted notifications |
Core Concepts
Understanding these core concepts is critical to configuring the system properly before using the Admin API.
ModelMapper (Model Mapping)
Purpose: Redirect user-requested model names to another model for model replacement and cost optimization.
How it Works:
- User requests gpt-3.5-turbo → System actually calls gpt-4o-mini
- Completely transparent to users; they don't notice the model swap
- Supports wildcard matching (e.g., gpt-3.5* matches all gpt-3.5 series)
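For example, once a mapping such as gpt-3.5*=gpt-4o-mini is in place (see Configuration below), a client that requests gpt-3.5-turbo needs no changes; the gateway applies the mapping server-side. A minimal sketch (the user API key and prompt are placeholders):
# Client request is unchanged; the model swap happens inside the gateway
curl https://api.xaixapi.com/v1/chat/completions \
  -H "Authorization: Bearer $USER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'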
Typical Use Cases:
Cost Optimization - Map expensive models to cheaper ones
o1-preview=gpt-4o        # Downgrade o1 requests to gpt-4o
gpt-4-turbo=gpt-4o-mini  # Save 90% cost
Smooth Migration - Gradually switch to new models
gpt-4=gpt-4o             # Switch all to the new version
Model Unification - Standardize on specific models
gpt-3.5*=gpt-4o-mini     # Unify all 3.5 requests to mini
claude-3-haiku=gpt-4o-mini
Configuration:
# Via POST /x-conf (cascades to all descendants)
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-d '{"ModelMapper": "gpt-3.5*=gpt-4o-mini, o*=gpt-4o"}'
# Or via PUT /x-config (Owner only)
curl -X PUT https://api.xaixapi.com/x-config \
-H "Authorization: Bearer $API_KEY" \
-d '{"MODEL_MAPPER": "gpt-3.5*=gpt-4o-mini"}'
Notes:
- ✅ Cascading: Config via /x-conf auto-syncs to all descendants
- ✅ Smart Merge: Won't overwrite user-defined mappings
- ⚠️ Billing Impact: Charges based on target model, watch costs
- ⚠️ Capability Differences: Ensure target model meets business needs
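After applying or cascading a mapping, the effective ModelMapper for the owner account can be read back from GET /x-conf (the field appears in the response example in section 1.5):
# Inspect the effective model mapping
curl -H "Authorization: Bearer $API_KEY" \
  https://api.xaixapi.com/x-conf | jq '.ModelMapper'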
LevelMapper (Level Mapping)
Purpose: Define model-to-Level routing rules, controlling which Provider key pool requests flow to.
How it Works:
- Level is a key grouping concept, each key belongs to a Level
- LevelMapper defines "which models should use which Level's keys"
- System selects key pool based on model name matching rules
Level Numbering Convention:
Level 1: Primary Provider (e.g., Official OpenAI)
Level 2: Backup Provider (e.g., Proxy service)
Level 3: Cold Backup Provider (e.g., Azure OpenAI)
Level 4+: Special Provider (e.g., Vertex AI)
Typical Use Cases:
Route by Service Provider
gpt-4*=1   # OpenAI models use Level 1 (official keys)
claude*=2  # Claude models use Level 2 (proxy keys)
gemini*=3  # Gemini uses Level 3 (Vertex AI)
Route by Cost
gpt-4o=1       # High-cost models use official
gpt-4o-mini=2  # Low-cost models use proxy
Route by Reliability
o1*=1  # Critical models use most stable Provider
*=2    # Other models use standard Provider
Configuration:
# Via POST /x-conf (system-level, shared by all descendants)
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-d '{"LevelMapper": "gpt-4*=1, claude*=2, gemini*=3"}'
Key Features:
- 🔒 System-Level: All descendants share Owner's LevelMapper, cannot override individually
- 🔄 Auto-Complete: When adding keys, system auto-completes LevelMapper based on Provider
- 🎯 Wildcard Match: Supports patterns like gpt-4*, claude-3-*
- 🔀 Works with SwitchOver: Automatically switches to a backup Level on failure
Difference from ModelMapper:
| Config | Purpose | Scope | Override |
|---|---|---|---|
| ModelMapper | Model name replacement | Cascading | Users can customize |
| LevelMapper | Route to Level | System-level shared | Users cannot override |
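A concrete trace of how the two interact, following the request routing flow summarized later in this reference: with ModelMapper gpt-3.5*=gpt-4o-mini and LevelMapper gpt-4o-mini=2 (illustrative rules), a request for gpt-3.5-turbo is first rewritten to gpt-4o-mini by ModelMapper, and only then does LevelMapper route the resulting model name to the Level 2 key pool.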
ModelFailover (Model Failover)
Purpose: Define model-level failover strategy, automatically switching to backup models when primary model is unavailable.
How it Works:
- Primary model request fails (key invalid, rate limited, service down)
- System automatically retries backup model list
- Tries in order until success or all backups fail
Config Format:
primary_model=backup1|backup2|backup3
Typical Use Cases:
High Availability
gpt-4o=gpt-4o-mini|gpt-4-turbo
claude-3-opus=claude-3-sonnet|gpt-4o
Cost Degradation
o1-preview=gpt-4o|gpt-4o-mini  # Downgrade expensive model on failure
Cross-Provider Tolerance
gpt-4o=claude-3-opus  # Switch to Anthropic on OpenAI failure
Configuration:
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-d '{"ModelFailover": "gpt-4o=gpt-4o-mini|gpt-4-turbo"}'
Difference from SwitchOver:
| Config | Level | Trigger Condition | Switches To |
|---|---|---|---|
| ModelFailover | Model-level | Model request fails | Backup model |
| SwitchOver | Level-level | All Level keys invalid | Backup Level |
SwitchOver (Level Switching)
Purpose: Define Level-level failover, automatically switching to backup Level when all keys in a Level are unavailable.
How it Works:
- All keys in Level 1 become invalid/sleeping
- System automatically forwards requests to Level 2
- Level 2 keys handle requests
Config Format:
primary_level=backup_level
Typical Use Cases:
Multi-Layer Tolerance
1=2  # Switch to Level 2 on Level 1 failure
2=3  # Switch to Level 3 on Level 2 failure
Official to Proxy
1=2  # Switch to proxy service on official API failure
Primary-Backup Architecture
1=2, 2=1  # Level 1 and 2 are mutual backups
Configuration:
curl -X PUT https://api.xaixapi.com/x-config \
-H "Authorization: Bearer $API_KEY" \
-d '{"SWITCH_OVER": "1=2, 2=3"}'
Multi-Layer Tolerance Example:
Config: 1=2, 2=3
Flow: Request → Level 1 (fail) → Level 2 (fail) → Level 3 (success)
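The configured switching rules can be read back from GET /x-conf, which returns SwitchOver as a primary-to-backup Level map (see the response example in section 1.5):
# Inspect the current Level switching rules
curl -H "Authorization: Bearer $API_KEY" \
  https://api.xaixapi.com/x-conf | jq '.SwitchOver'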
ModelLimits (Model Rate Limits)
Purpose: Set rate limits for specific models to prevent exceeding Provider limits or control costs.
Rate Limit Dimensions:
- RPM - Requests Per Minute
- RPH - Requests Per Hour
- RPD - Requests Per Day
- TPM - Tokens Per Minute
- TPH - Tokens Per Hour
- TPD - Tokens Per Day
Typical Use Cases:
Match Provider Limits
{ "gpt-4o": {"rpm": 500, "tpm": 150000}, // Match OpenAI Tier 2 "claude-3-opus": {"rpm": 50, "tpm": 40000} // Match Anthropic limits }Cost Control
{ "o1-preview": {"rpm": 10, "rpd": 100} // Limit expensive model usage }Fair Scheduling
{ "gpt-4o": {"rpm": 100}, // Prevent single user monopoly "gpt-4o-mini": {"rpm": 500} }
Configuration:
# Method 1: Via POST /x-conf (cascades to all descendants)
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-d '{
"ModelLimits": {
"gpt-4o": {"rpm": 30, "tpm": 90000},
"claude-3-opus": {"rpm": 20, "tpm": 60000}
}
}'
# Method 2: Via PUT /x-config (Owner only)
curl -X PUT https://api.xaixapi.com/x-config \
-H "Authorization: Bearer $API_KEY" \
-d '{"MODEL_LIMITS": "{\"gpt-4o\": {\"rpm\": 30}}"}'
Rate Limit Priority:
User-level ModelLimits > Owner ModelLimits > System defaults
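User-level limits are set on the user record rather than through /x-conf. A hypothetical sketch, assuming PUT /x-users/{id} accepts a ModelLimits field in the same way it accepts Resources (see the Manage API Reference for the authoritative field list):
# Hypothetical: per-user rate limit via the user-management endpoint (field name assumed)
curl -X PUT https://api.xaixapi.com/x-users/42 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"ModelLimits": {"gpt-4o": {"rpm": 10}}}'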
Clear Limits:
# Clear all limits
curl -X POST https://api.xaixapi.com/x-conf \
  -H "Authorization: Bearer $API_KEY" \
  -d '{"ModelLimits": "*"}'
# Remove specific model limits
curl -X POST https://api.xaixapi.com/x-conf \
  -H "Authorization: Bearer $API_KEY" \
  -d '{"ModelLimits": "-gpt-4o, -claude-3-opus"}'
Resources (Resource Allowlist)
Purpose: Restrict API endpoints users can access for fine-grained permission control.
Typical Use Cases:
Chat Only
/v1/chat/completions
No Image Generation
/v1/chat/completions, /v1/embeddings, /v1/audio/speech
Full Access
/v1/chat/completions, /v1/embeddings, /v1/images/generations, /v1/audio/*
Configuration:
# Owner-level config
curl -X PUT https://api.xaixapi.com/x-config \
-H "Authorization: Bearer $API_KEY" \
-d '{"RESOURCES": "/v1/chat/completions, /v1/embeddings"}'
# User-level config (via PUT /x-users/{id})
curl -X PUT https://api.xaixapi.com/x-users/42 \
-H "Authorization: Bearer $API_KEY" \
-d '{"Resources": "/v1/chat/completions"}'
Permission Inheritance:
- Subaccount Resources must be a subset of parent's Resources
- Subaccounts cannot access endpoints not opened by parent
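A practical way to respect the inheritance rule: read the parent's current allowlist first, then grant the subaccount only a subset of it (a sketch reusing the calls shown above):
# 1. Check the parent's allowlist
curl -H "Authorization: Bearer $API_KEY" \
  https://api.xaixapi.com/x-conf | jq '.Resources'
# 2. Grant the subaccount a subset only (paths outside the parent's allowlist are not permitted)
curl -X PUT https://api.xaixapi.com/x-users/42 \
  -H "Authorization: Bearer $API_KEY" \
  -d '{"Resources": "/v1/chat/completions"}'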
Provider Key Management
Key States:
- Active (Status: true) - Key available, participating in load balancing
- Disabled (Status: false) - Manually disabled, not participating in requests
- Sleeping - Automatically put to sleep after repeated failures; recovers automatically after a cooldown period
Sleep Mechanism:
- Key fails N consecutive times (e.g., 3 times) triggers sleep
- Sleep time increases: 5min → 15min → 30min → ...
- Key not used during sleep
- Auto-recovers after sleep, rejoins load pool
View Sleeping Keys:
curl -H "Authorization: Bearer $API_KEY" \
https://api.xaixapi.com/x-conf | jq '.SleepingKeys'
Delete All Sleeping Keys (Immediate Recovery):
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-d '{"DelKeys": true}'
Reload Keys:
# Async reload all keys (refresh status, config, etc.)
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-d '{"LoadKeys": true}'
Configuration Priority Summary
Global Config (High to Low):
- User-level config (User ModelMapper, ModelLimits, Resources)
- Owner-level config (Owner ModelMapper, ModelLimits)
- System-level config (LevelMapper, system defaults)
Request Routing Flow:
User requests Model A
↓
Apply User ModelMapper → Model B
↓
Apply Owner ModelMapper → Model C
↓
Query LevelMapper → Level 2
↓
Check ModelLimits → Pass
↓
Select available key from Level 2 pool
↓
Send request to Provider
↓
Failed? Apply ModelFailover → Model D
↓
All Level 2 keys invalid? Apply SwitchOver → Level 3
1. Provider Key Management API
Manage upstream AI Provider key pool, supporting standard Providers (OpenAI, Anthropic, etc.), Azure OpenAI, and Google Vertex AI.
1.1 Query Keys
Endpoint: GET /x-keys
Query Parameters:
- id - Filter by key ID
- level - Filter by Level (load pool level)
- provider - Filter by Provider URL
- page / size - Pagination (optional)
Path Filters:
- GET /x-keys/{id} - Get by ID
- GET /x-keys/L{n} - Get by Level (e.g. L2)
- GET /x-keys/{provider} - Get by Provider (e.g. api.anthropic.com)
Examples:
# Get all keys
curl -H "Authorization: Bearer $API_KEY" \
https://api.xaixapi.com/x-keys
# Get Level 2 keys
curl -H "Authorization: Bearer $API_KEY" \
"https://api.xaixapi.com/x-keys?level=2"
# Get keys of a provider
curl -H "Authorization: Bearer $API_KEY" \
"https://api.xaixapi.com/x-keys?provider=https://api.openai.com"
Response Example:
{
"success": true,
"keys": [
{
"ID": 123,
"Name": "OpenAI Production",
"Level": 1,
"Provider": "https://api.openai.com",
"Status": true,
"CreatedAt": "2025-01-01T00:00:00Z",
"UpdatedAt": "2025-01-15T10:00:00Z"
}
]
}
1.2 Create Key
Endpoint: POST /x-keys
Request Body Fields:
{
"SecretKey": "sk-...", // Upstream API key (required, ≥20 chars)
"Name": "Production - OpenAI", // Display name (optional)
"Level": 1, // Load pool level (required, for routing)
"Provider": "https://api.openai.com", // Upstream API URL (required)
"Status": true, // Enable status (default true)
"Config": { // Special config (optional)
"provider_type": "standard" // standard/azure/vertex
}
}
Configuration Types
1) Standard Configuration
For OpenAI, Anthropic, Mistral, DeepSeek and other standard compatible Providers.
{
"SecretKey": "sk-...",
"Name": "OpenAI Production",
"Level": 1,
"Provider": "https://api.openai.com",
"Status": true,
"Config": {
"provider_type": "standard" // Can be omitted
}
}
2) Azure OpenAI Configuration
{
"SecretKey": "your-azure-api-key",
"Name": "Azure OpenAI",
"Level": 2,
"Provider": "https://your-resource.openai.azure.com",
"Status": true,
"Config": {
"provider_type": "azure",
"model_mapping": {
"gpt-4o": "gpt-4-deployment", // Model name → Deployment name
"gpt-3.5*": "gpt35-turbo", // Supports wildcards
"*": "default-deployment" // Default deployment
},
"api_versions": { // Optional, API version for each endpoint
"chat": "2025-01-01-preview",
"embeddings": "2024-02-01",
"responses": "2025-04-01-preview",
"audio": "2025-03-01-preview",
"images": "2025-04-01-preview"
}
}
}
3) Google Vertex AI Configuration
{
"SecretKey": "sk-placeholder-51chars-xxxxxxxxxxxxxxxxxxxxxxxx",
"Name": "Vertex AI",
"Level": 3,
"Provider": "https://us-central1-aiplatform.googleapis.com",
"Status": true,
"Config": {
"provider_type": "vertex",
"base_url": "https://us-central1-aiplatform.googleapis.com/v1/projects/{project}/locations/{location}",
"project_id": "my-gcp-project",
"client_email": "[email protected]",
"private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----\n"
}
}
Examples:
# Add standard Provider
curl -X POST https://api.xaixapi.com/x-keys \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"SecretKey": "sk-live-...",
"Name": "OpenAI Main Pool",
"Level": 1,
"Provider": "https://api.openai.com",
"Status": true
}'
# Add Azure OpenAI
curl -X POST https://api.xaixapi.com/x-keys \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"SecretKey": "your-azure-key",
"Name": "Azure GPT-4",
"Level": 2,
"Provider": "https://your-resource.openai.azure.com",
"Status": true,
"Config": {
"provider_type": "azure",
"model_mapping": {"gpt-4o": "gpt-4-deployment"}
}
}'
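# Add Google Vertex AI (a sketch mirroring the Vertex configuration structure above;
# the project, service-account, and key values are placeholders)
curl -X POST https://api.xaixapi.com/x-keys \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "SecretKey": "sk-placeholder-51chars-xxxxxxxxxxxxxxxxxxxxxxxx",
    "Name": "Vertex AI",
    "Level": 3,
    "Provider": "https://us-central1-aiplatform.googleapis.com",
    "Status": true,
    "Config": {
      "provider_type": "vertex",
      "base_url": "https://us-central1-aiplatform.googleapis.com/v1/projects/{project}/locations/{location}",
      "project_id": "my-gcp-project",
      "client_email": "my-service-account@my-gcp-project.iam.gserviceaccount.com",
      "private_key": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n"
    }
  }'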
Behavior Notes:
- After successful creation, the system auto-completes LEVEL_MAPPER based on the Provider
- Sensitive fields in Config (e.g. Vertex private_key) are auto-encrypted
- The Provider URL cannot self-reference (cannot point to the current system)
- Auto-refreshes Redis cache and in-memory structures
1.3 Update Key
Endpoint: PUT /x-keys/{id} or POST /x-keys/{id}
Updatable Fields: Name, Level, Status, Provider, Config, etc.
Example:
curl -X PUT https://api.xaixapi.com/x-keys/123 \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{"Level": 2, "Status": true}'
1.4 Delete Key
Endpoint: DELETE /x-keys/{id}
curl -X DELETE https://api.xaixapi.com/x-keys/123 \
-H "Authorization: Bearer $API_KEY"
1.5 Configuration & Observation
Endpoint: GET /x-conf
Returns current owner account's:
- Resources - Allowed API paths
- LevelMapper / ModelMapper / ModelLimits / SwitchOver
- Sleeping keys list (with remaining sleep time)
- Disabled keys list
- UserMinBalance / UserApiBalance - Key thresholds
curl -H "Authorization: Bearer $API_KEY" \
https://api.xaixapi.com/x-conf | jq .
Response Example:
{
"Resources": {
"/v1/chat/completions": {},
"/v1/embeddings": {}
},
"LevelMapper": {
"gpt-4*": 2,
"claude*": 3
},
"ModelMapper": {
"gpt-3.5*": "gpt-4o-mini"
},
"ModelFailover": {
"gpt-4o": ["gpt-4o-mini"]
},
"ModelLimits": {
"gpt-4o": {
"rpm": 30,
"tpm": 90000
}
},
"SwitchOver": {
"1": 2
},
"SleepingKeys": [
{
"ID": 123,
"Name": "OpenAI Prod",
"Level": 1,
"Provider": "https://api.openai.com",
"Status": false,
"SleepTimes": 3
}
],
"DisabledKeys": [
{
"ID": 456,
"Name": "OpenAI Test",
"Level": 2,
"Provider": "https://api.openai.com",
"Status": false
}
],
"UserMinBalance": 1.0,
"UserApiBalance": 0.5
}
1.6 Batch Configuration Management
Endpoint: POST /x-conf
Overview: Owner-level batch configuration management interface for updating keys and mappings, with changes automatically synchronized to all descendant users.
Key Features:
- Batch Sync - Changes to ModelMapper and ModelLimits are automatically applied to all descendant users
- System-Level Config - LevelMapper is a system-level configuration shared by all descendant users
- Smart Merge - Cascading updates preserve user-defined configurations, only updating the delta
Request Body Fields:
{
"LoadKeys": true, // Reload keys (async)
"LogEnable": true, // System log toggle
"SaveCache": true, // Save cache immediately
"Resources": "/v1/chat/completions, /v1/embeddings", // Resource allowlist
"LevelMapper": "gpt-4*=2, claude*=3", // Level routing mapping
"ModelMapper": "gpt-3.5*=gpt-4o-mini, o*=gpt-4o", // Model mapping (cascades)
"ModelFailover": "gpt-4o=gpt-4o-mini|gpt-4-turbo", // Model failover
"ModelLimits": { // Model rate limits (cascades)
"gpt-4o": {"rpm": 30, "tpm": 90000}
},
"DelKeys": true // Delete all sleeping keys
}
Field Description:
| Field | Type | Scope | Description |
|---|---|---|---|
| LoadKeys | boolean | Owner | Reload all keys for this owner account (async background) |
| LogEnable | boolean | System | System-level log toggle (affects all Owners) |
| SaveCache | boolean | Owner | Persist user cache to Redis immediately |
| Resources | string | Owner | Update Owner's resource allowlist |
| ModelMapper | string | Cascading | Model mapping rules, auto-syncs to all descendants |
| LevelMapper | string | System | Level routing rules (shared by all descendants) |
| ModelFailover | string | Owner | Model failover strategy |
| ModelLimits | object/string | Cascading | Model-level rate limits, auto-syncs to all descendants |
| DelKeys | boolean | Owner | Delete all sleeping keys and restore availability |
ModelMapper Cascading Update Rules:
Supports the following operation syntax:
- model1=target - Add/update mapping
- -model1 - Delete mapping
- -model1, model2=target - Combined operations
Examples:
# 1. Basic configuration update
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"ModelMapper": "gpt-3.5*=gpt-4o-mini, o*=gpt-4o",
"LevelMapper": "gpt-4*=2, claude*=3"
}'
# 2. Batch update model rate limits (affects all descendants)
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"ModelLimits": {
"gpt-4o": {"rpm": 30, "tpm": 90000},
"claude-3-opus": {"rpm": 20, "tpm": 60000}
}
}'
# 3. Delete model mapping (cascades to all descendants)
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"ModelMapper": "-gpt-3.5*"
}'
# 4. Clear all model limits
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"ModelLimits": "*"
}'
# 5. Reload keys and save cache
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"LoadKeys": true,
"SaveCache": true
}'
# 6. Delete all sleeping keys
curl -X POST https://api.xaixapi.com/x-conf \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{"DelKeys": true}'
Response Example:
{
"LoadKeys": "true",
"ModelMapper": {
"gpt-3.5*": "gpt-4o-mini",
"o*": "gpt-4o"
},
"LevelMapper": {
"gpt-4*": 2,
"claude*": 3
},
"ModelLimits": {
"gpt-4o": {
"rpm": 30,
"tpm": 90000
}
}
}
Configuration Scope:
System-Level Configuration (Shared by all descendants):
- LevelMapper - Level routing rules, all descendant users directly use Owner's configuration without syncing
Cascading Configuration (Auto-propagates to all descendants):
- ModelMapper - Descendant users' ModelMapper inherits new mapping rules
- ModelLimits - Descendant users' ModelLimits inherits new rate limit configurations
Important Notes:
- Cascading updates are additive and won't delete other user-defined mappings
- Delete operations (like -model) also cascade to remove corresponding mappings from all descendants
- Root user updates affect all non-Root users (Gear > 0)
- LevelMapper is a system-level shared configuration that descendant users cannot override
- Configuration updates automatically refresh related failover strategies
2. System Configuration API
Manage owner-level system configuration including model mapping, Level routing, resource allowlist, model rate limits, pricing override, etc.
2.1 Get Configuration
Endpoint: GET /x-config
curl -H "Authorization: Bearer $API_KEY" \
https://api.xaixapi.com/x-config | jq .
Response Example:
{
"success": true,
"oid": 1,
"configs": {
"MODEL_MAPPER": {"gpt-3.5*": "gpt-4o-mini"},
"LEVEL_MAPPER": {"gpt-4*": 2, "claude*": 3},
"SWITCH_OVER": {"1": 2},
"RESOURCES": {"/v1/chat/completions": {}, "/v1/embeddings": {}},
"MODEL_LIMITS": {"gpt-4o": {"rpm": 30, "tpm": 90000}},
"EMAIL_SMTP": "smtp.gmail.com",
"EMAIL_TLS": true,
"PRICING": "{\"ChatPricing\":{\"gpt-4o\":{\"InputText\":3.5}}}"
}
}
2.2 Update Configuration
Endpoint: PUT /x-config or POST /x-config
Content-Type: application/json
Configuration Keys:
| Key | Description | Format | Example |
|---|---|---|---|
| MODEL_MAPPER | Model Mapper | source=target | gpt-3.5*=gpt-4o-mini, o*=gpt-4o |
| LEVEL_MAPPER | Level Mapper | model=level | gpt-4*=2, claude*=3 |
| SWITCH_OVER | Level failover | primary=backup | 1=2, 2=3 |
| RESOURCES | Resource Allowlist | comma/space separated | /v1/chat/completions, /v1/embeddings |
| MODEL_LIMITS | Model Limits | JSON string/object | See below |
| PRICING | Pricing override | JSON string | See 2.3 |
| XAI_MAIL | System notification email | string | [email protected] |
| EMAIL_SMTP | SMTP server | string | smtp.gmail.com |
| EMAIL_PORT | SMTP port | string | 587 |
| EMAIL_AUTH | Auth email | string | [email protected] |
| EMAIL_PASS | Email password | string | password |
| EMAIL_TLS | Enable TLS | string | true |
Example:
curl -X PUT https://api.xaixapi.com/x-config \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"MODEL_MAPPER": "gpt-3.5*=gpt-4o-mini, o*=gpt-4o",
"LEVEL_MAPPER": "gpt-4*=2, claude*=3",
"SWITCH_OVER": "1=2, 2=3",
"RESOURCES": "/v1/chat/completions, /v1/embeddings",
"MODEL_LIMITS": "{\"gpt-4o\": {\"rpm\": 30, \"tpm\": 90000}}",
"EMAIL_SMTP": "smtp.gmail.com",
"EMAIL_TLS": "true"
}'
MODEL_LIMITS Details:
{
"MODEL_LIMITS": {
"gpt-4o": {"rpm": 30, "tpm": 90000},
"claude-3-opus": {"rpm": 20, "tpm": 60000}
}
}
Or as JSON string:
{
"MODEL_LIMITS": "{\"gpt-4o\": {\"rpm\": 30, \"tpm\": 90000}}"
}
Format Notes:
- MODEL_MAPPER / LEVEL_MAPPER / SWITCH_OVER use comma-separated k=v format, auto-trimmed
- RESOURCES supports comma/space separation; each item is validated as a valid path
- MODEL_LIMITS can be a JSON string or object; object mode supports incremental override
2.3 Pricing Override (PRICING)
Owner accounts can override the system default pricing via the PRICING config key; only the "delta" that differs from the defaults needs to be specified.
Data Structure:
{
"ChatPricing": {
"<model>": {
"InputText": 0, // USD/1M tokens
"OutputText": 0,
"CachedText": 0,
"CacheWrite": 0,
"ReasonText": 0,
"InputAudio": 0,
"OutputAudio": 0,
"InputImage": 0,
"OutputImage": 0,
"Rates": 1
}
},
"ImgPricing": {
"<model>": {
"Call": 0,
"Rates": 1,
"Sizes": {"1024x1024": 0}
}
},
"AudioPricing": {
"<model>": {
"Input": 0,
"Output": 0,
"Call": 0,
"Rates": 1
}
},
"RerankPricing": {
"<model>": {"Input": 0, "Call": 0, "Rates": 1}
},
"CallPricing": {
"<model>": {"Call": 0, "Rates": 1}
},
"FineTuningPricing": {
"<base-model>": {"InputText": 0, "OutputText": 0, "Rates": 1}
}
}
Field Descriptions:
- InputText / OutputText - USD/million tokens (input/output text)
- CachedText / CacheWrite - USD/million tokens (cache read/write)
- ReasonText - USD/million tokens (reasoning process, e.g. o1 series)
- InputAudio / OutputAudio - USD/million equivalent tokens (audio)
- InputImage / OutputImage - USD/million equivalent tokens (image)
- Rates - Model-level multiplier (default 1)
- Call - USD/call (per-call billing)
- Sizes - Image size pricing (e.g. "1024x1024": 0.05)
Example: Minimal Delta Override
my_pricing.json:
{
"ChatPricing": {
"gpt-4o": {
"InputText": 3.5,
"OutputText": 12,
"Rates": 1
}
}
}
Write command:
curl -H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-X PUT https://api.xaixapi.com/x-config \
-d "{\"PRICING\": $(jq -Rs . < my_pricing.json)}"
Read current pricing override:
curl -H "Authorization: Bearer $API_KEY" \
https://api.xaixapi.com/x-config | jq -r '.configs.PRICING'
Validation & Limits:
- JSON size ≤ 128 KB
- Total entries ≤ 1024
- Unknown fields rejected
- All numeric values must be finite and non-negative
Effective Logic:
- Owner override takes precedence
- Uncovered parts fall back to the system default (pricing.json)
- Stacks with user-level Rates / Factor multipliers
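As a rough worked example of how these pieces combine (assuming Rates and user-level multipliers scale the base price linearly): with the gpt-4o override above setting InputText to 3.5 USD per 1M tokens, a 100,000-token input costs 3.5 × 0.1 = 0.35 USD at Rates = 1, and 0.35 × 1.2 = 0.42 USD if a user-level multiplier of 1.2 applies.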
2.4 Delete Configuration Items
Endpoint: DELETE /x-config
Deleted configuration keys revert to the system defaults.
Request Body:
{
"keys": ["MODEL_MAPPER", "PRICING"]
}
Example:
# Clear pricing override, revert to system default
curl -H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-X DELETE https://api.xaixapi.com/x-config \
-d '{"keys": ["PRICING"]}'
3. Broadcast Notifications API
Publish system-wide or targeted user notifications, displayed in the Manage console banner.
3.1 Create System News
Endpoint: POST /x-news
Request Body:
{
"title": "System Maintenance Notice",
"content": "System maintenance scheduled for tomorrow 2:00-4:00 AM, service may be interrupted."
}
Field Descriptions:
- title - Notification title (required, max 100 chars)
- content - Notification content (required, max 1000 chars)
Example:
curl -X POST https://api.xaixapi.com/x-news \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"title": "System Maintenance Notice",
"content": "System maintenance scheduled for tomorrow 2:00-4:00 AM."
}'
3.2 Create Targeted News
Endpoint: POST /x-news/{target}
Publish a notification to a specific user or to all users under a DNA path.
Path Parameter:
- {target} - User ID, username, email, or DNA path
Examples:
# Send to specified user
curl -X POST https://api.xaixapi.com/x-news/42 \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"title": "Balance Alert",
"content": "Your account balance is below $10, please recharge."
}'
# Send to all users under DNA path
curl -X POST https://api.xaixapi.com/x-news/.1.42. \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"title": "Team Notification",
"content": "Important notification for your team..."
}'
3.3 Delete News
Endpoint: DELETE /x-news
Request Body:
{
"id": 123
}
Example:
curl -X DELETE https://api.xaixapi.com/x-news \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{"id": 123}'
3.4 Get News List
Endpoint: GET /dashboard/news
See Manage API Reference for details.
Use Cases
Case 1: Add Standard Provider
# 1. Add OpenAI Provider
curl -X POST https://api.xaixapi.com/x-keys \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"SecretKey": "sk-live-...",
"Name": "OpenAI Main Pool",
"Level": 1,
"Provider": "https://api.openai.com",
"Status": true
}'
# 2. System auto-completes LEVEL_MAPPER (gpt* → Level 1)
# 3. Verify configuration
curl -H "Authorization: Bearer $API_KEY" \
https://api.xaixapi.com/x-conf | jq '.LevelMapper'
Case 2: Configure Model Mapping and Rate Limits
# 1. Set model mapping (redirect gpt-3.5 to gpt-4o-mini)
# 2. Set Level mapping (gpt-4* uses Level 2)
# 3. Set model rate limits (gpt-4o limited to 30 RPM)
curl -X PUT https://api.xaixapi.com/x-config \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"MODEL_MAPPER": "gpt-3.5*=gpt-4o-mini, o*=gpt-4o",
"LEVEL_MAPPER": "gpt-4*=2, claude*=3",
"MODEL_LIMITS": "{\"gpt-4o\": {\"rpm\": 30, \"tpm\": 90000}}"
}'
Case 3: Configure Azure OpenAI
curl -X POST https://api.xaixapi.com/x-keys \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"SecretKey": "your-azure-key",
"Name": "Azure GPT-4",
"Level": 2,
"Provider": "https://your-resource.openai.azure.com",
"Status": true,
"Config": {
"provider_type": "azure",
"model_mapping": {
"gpt-4o": "gpt-4-deployment",
"gpt-3.5*": "gpt35-turbo"
},
"api_versions": {
"chat": "2025-01-01-preview"
}
}
}'
Case 4: Custom Pricing and Publish Notification
# 1. Create pricing delta file
cat > my_pricing.json <<EOF
{
"ChatPricing": {
"gpt-4o": {"InputText": 3.5, "OutputText": 12}
}
}
EOF
# 2. Upload pricing override
curl -H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-X PUT https://api.xaixapi.com/x-config \
-d "{\"PRICING\": $(jq -Rs . < my_pricing.json)}"
# 3. Publish system notification
curl -X POST https://api.xaixapi.com/x-news \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"title": "Pricing Update",
"content": "GPT-4o model pricing has been updated, check billing page for details."
}'
Best Practices
Provider Management
- Group by Service - Put keys from same service in same Level
- Set Failover - Use SWITCH_OVER to configure backup Levels
- Check Sleeping Keys - Use GET /x-conf to view sleeping status
- Reasonable Levels - Divide Levels by cost, speed, and reliability
Configuration Management
- Model Mapping Purpose - Use for smooth migration or cost optimization (e.g. map o1 to gpt-4o)
- Careful with RESOURCES - Wrong config may block API access
- MODEL_LIMITS Control - Set strict rate limits for expensive models
- Regular Backups - Use GET /x-config to export configuration (see the sketch below)
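A minimal backup sketch using the documented read endpoint (the output filename is arbitrary):
# Export the current owner configuration for backup
curl -H "Authorization: Bearer $API_KEY" \
  https://api.xaixapi.com/x-config | jq '.configs' > xaixapi-config-backup.json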
Pricing Override
- Delta Principle - Only override parts different from system default
- Version Control - Save pricing JSON to file and track in version control
- Test & Verify - After update, verify billing via
/dashboard/bill - Document Changes - Record override reason and time for auditing
Notification Management
- Priority First - System-wide news for critical information
- Concise Content - Avoid overly long notifications
- Clean Expired - Delete expired or invalid notifications
Related Documentation
- Manage API Reference - User management and dashboard API
- Admin Console Guide - Admin UI instructions
- Authentication - API authentication methods
- Glossary - Level, Model Mapper and other concepts