Access multiple LLM models with one API key

Unified key management, quota control, and multi-model routing, so you can focus on development.

API Usage Example
OpenAI-compatible format
// Call any supported model with your API key
const response = await fetch('https://api.ithu.tw/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-oss-120b',
    messages: [
      { role: 'user', content: 'Hello, world!' }
    ]
  })
});

// The response follows the OpenAI chat-completions format
const data = await response.json();
console.log(data.choices[0].message.content);
6+ Available Models
500+ Registered Users
99.9% Uptime
<200ms Avg. Latency

Key management designed for developers

Simplify API access so you can focus on product development.

Unified key management

Access all models with a single key—simplify deployment.

Secure key control

Quota management, usage tracking, and real-time monitoring.

Multi-model routing

Choose models based on needs—balance speed and cost.
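
Routing can be as simple as switching the `model` field per request. A minimal sketch of that idea (only `gpt-oss-120b` appears in this page's example; `fast-small-model` is a placeholder, not a published model name):

```javascript
// Pick a model per request: a larger model for hard tasks,
// a smaller one when speed or cost matters more.
// 'fast-small-model' is an illustrative placeholder.
function pickModel({ complexTask = false } = {}) {
  return complexTask ? 'gpt-oss-120b' : 'fast-small-model';
}

// The chosen name goes straight into the request body:
const body = JSON.stringify({
  model: pickModel({ complexTask: true }),
  messages: [{ role: 'user', content: 'Summarize this document.' }]
});
```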

OpenAI-compatible

Standard API format—easily integrate with existing tools.
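
Compatibility means any tool that speaks the OpenAI chat-completions format can target the endpoint just by changing the base URL. A sketch of a thin helper around `fetch` (the `fetchImpl` parameter is injected for testability and the error handling is an assumption, not documented behavior):

```javascript
// Minimal OpenAI-format chat helper. Any client that emits this
// request shape (model + messages) can point at the same base URL.
const BASE_URL = 'https://api.ithu.tw/v1';

async function chat(apiKey, model, messages, fetchImpl = fetch) {
  const res = await fetchImpl(`${BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ model, messages })
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`); // assumed error handling
  return res.json();
}
```

The same request shape is what the official OpenAI SDKs emit, which is why pointing their base URL at the gateway works without code changes.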

Fast deployment

Get your key in 60 seconds and start developing immediately.

Usage analytics

Track API calls, token usage, and costs.
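
OpenAI-format responses include a `usage` object with token counts, so per-request cost can also be estimated client-side. A sketch (the per-token prices below are illustrative placeholders, not actual rates):

```javascript
// Estimate cost from the `usage` object returned with each
// chat completion. Prices here are placeholders.
const PRICES = {
  'gpt-oss-120b': { input: 0.10 / 1e6, output: 0.40 / 1e6 } // USD per token (placeholder)
};

function estimateCost(model, usage) {
  const p = PRICES[model];
  if (!p) return null; // unknown model: no estimate
  return usage.prompt_tokens * p.input + usage.completion_tokens * p.output;
}
```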

Supported integrations
Python SDK, Node.js SDK, REST API, OpenAI SDK, GitHub Actions, Cloud Functions

Available models

Choose the model that fits your needs and start building.


Start in 60 seconds: get your API key and begin building

Simplify the process so you can focus on creating.