
List Models

List all available AI models together with their pricing information. No payment is required for this endpoint.

Endpoint

GET /v1/models

Request

No parameters required.

Response

Success (200 OK)

{
  total: number;
  models: Array<{
    id: string;              // Model identifier
    name: string;            // Human-readable model name
    description: string;     // Model description
    context_length: number;  // Maximum context window in tokens
    pricing: {
      input_per_1k_tokens: number;    // Cost per 1K input tokens (USD)
      output_per_1k_tokens: number;   // Cost per 1K output tokens (USD)
      blended_per_1k_tokens: number;  // Average cost per 1K tokens (USD)
    }
  }>
}

Examples

Basic Request

curl https://api.x-router.ai/v1/models

With JavaScript

const response = await fetch('https://api.x-router.ai/v1/models');
const data = await response.json();

console.log(`Total models: ${data.total}`);
console.log('Available models:');
data.models.forEach(model => {
  console.log(`- ${model.id}: ${model.name}`);
});

Find Cheapest Model

const response = await fetch('https://api.x-router.ai/v1/models');
const data = await response.json();

const cheapest = data.models.reduce((min, model) =>
  model.pricing.blended_per_1k_tokens < min.pricing.blended_per_1k_tokens ? model : min
);

console.log(`Cheapest model: ${cheapest.id}`);
console.log(`Cost: $${cheapest.pricing.blended_per_1k_tokens} per 1K tokens`);

Filter by Provider

const response = await fetch('https://api.x-router.ai/v1/models');
const data = await response.json();

// Get all Anthropic models
const anthropicModels = data.models.filter(m => m.id.startsWith('anthropic/'));

console.log('Anthropic models:');
anthropicModels.forEach(model => {
  console.log(`- ${model.name}: $${model.pricing.blended_per_1k_tokens}/1K tokens`);
});

Compare Pricing

const response = await fetch('https://api.x-router.ai/v1/models');
const data = await response.json();

const models = ['anthropic/claude-3.5-sonnet', 'openai/gpt-4o', 'google/gemini-pro'];

console.log('Pricing comparison:');
models.forEach(modelId => {
  const model = data.models.find(m => m.id === modelId);
  if (model) {
    console.log(`${model.name}:`);
    console.log(`  Input: $${model.pricing.input_per_1k_tokens}/1K`);
    console.log(`  Output: $${model.pricing.output_per_1k_tokens}/1K`);
    console.log(`  Blended: $${model.pricing.blended_per_1k_tokens}/1K`);
  }
});

Example Response

{
  "total": 295,
  "models": [
    {
      "id": "anthropic/claude-3.5-sonnet",
      "name": "Claude 3.5 Sonnet",
      "description": "Anthropic's most intelligent model",
      "context_length": 200000,
      "pricing": {
        "input_per_1k_tokens": 0.003,
        "output_per_1k_tokens": 0.015,
        "blended_per_1k_tokens": 0.009
      }
    },
    {
      "id": "openai/gpt-4o",
      "name": "GPT-4o",
      "description": "OpenAI's high-intelligence flagship model",
      "context_length": 128000,
      "pricing": {
        "input_per_1k_tokens": 0.0025,
        "output_per_1k_tokens": 0.01,
        "blended_per_1k_tokens": 0.00625
      }
    },
    {
      "id": "google/gemini-pro",
      "name": "Gemini Pro",
      "description": "Google's advanced AI model",
      "context_length": 32000,
      "pricing": {
        "input_per_1k_tokens": 0.000125,
        "output_per_1k_tokens": 0.000375,
        "blended_per_1k_tokens": 0.00025
      }
    }
  ]
}

Model Providers

X-Router provides access to models from many providers including:

  • Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • OpenAI: GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo
  • Google: Gemini Pro, Gemini Ultra
  • Meta: Llama 3, Llama 2
  • Mistral: Mistral Large, Mistral Medium
  • Cohere: Command R+, Command
  • And many more…

The model list is updated regularly. Use this endpoint to get the current list.
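Because model ids follow a provider/model-name pattern (as in the examples above), the list can be grouped by provider locally. A minimal sketch; the groupByProvider helper name is our own:

```javascript
// Group models by the provider prefix of their id ("provider/model-name").
function groupByProvider(models) {
  const groups = {};
  for (const model of models) {
    const provider = model.id.split('/')[0];
    (groups[provider] ??= []).push(model);
  }
  return groups;
}

// Example using ids from the sample response above:
const groups = groupByProvider([
  { id: 'anthropic/claude-3.5-sonnet' },
  { id: 'openai/gpt-4o' },
  { id: 'google/gemini-pro' },
]);
console.log(Object.keys(groups)); // [ 'anthropic', 'openai', 'google' ]
```

In practice you would pass data.models from a live response instead of the hard-coded array.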

Using Models

To use a model, pass its id in the model parameter of the Chat Completions endpoint:

const response = await fetchWithPayment('https://api.x-router.ai/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello!' }],
    model: 'anthropic/claude-3.5-sonnet', // Use model ID from /v1/models
    max_tokens: 100
  })
});
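A typo in the model id will only fail once the request reaches the API. One way to fail fast is to check the id against the fetched list first. A sketch under that assumption; the requireModel helper name is our own:

```javascript
// Look up a model by id before sending a request, so an unknown id
// fails locally instead of at the API.
function requireModel(models, id) {
  const model = models.find(m => m.id === id);
  if (!model) {
    throw new Error(`Unknown model "${id}". Call GET /v1/models for the current list.`);
  }
  return model;
}

// Example with entries from the sample response:
const models = [
  { id: 'anthropic/claude-3.5-sonnet' },
  { id: 'openai/gpt-4o' },
];
const model = requireModel(models, 'anthropic/claude-3.5-sonnet');
console.log(model.id); // anthropic/claude-3.5-sonnet
```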

Pricing Information

The pricing object contains three fields:

  • input_per_1k_tokens: Cost per 1,000 input tokens (your messages)
  • output_per_1k_tokens: Cost per 1,000 output tokens (the AI's response)
  • blended_per_1k_tokens: Average cost per 1,000 tokens, assuming a 50/50 input/output split

Use these values to estimate costs for your use case. For exact cost estimates, use the Estimate endpoint.
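The per-1K rates above translate directly into a rough cost formula: divide each token count by 1,000 and multiply by the matching rate. A minimal sketch (the estimateCost helper name is our own; for exact figures use the Estimate endpoint):

```javascript
// Rough cost estimate from a model's pricing object (rates are per 1,000 tokens).
function estimateCost(pricing, inputTokens, outputTokens) {
  return (inputTokens / 1000) * pricing.input_per_1k_tokens +
         (outputTokens / 1000) * pricing.output_per_1k_tokens;
}

// Claude 3.5 Sonnet rates from the example response:
const pricing = { input_per_1k_tokens: 0.003, output_per_1k_tokens: 0.015 };

// 2,000 input tokens + 500 output tokens:
console.log(`$${estimateCost(pricing, 2000, 500).toFixed(4)}`); // $0.0135
```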
