OpenRouter is a unified API gateway that provides access to multiple AI models from different providers like OpenAI, Anthropic, Google, Meta, and many others. Instead of managing multiple API keys and different API formats, you can use OpenRouter to access all these models through a single, standardized interface.
Why Use OpenRouter?
- Single API for Multiple Models: Access GPT-4, Claude, Gemini, Llama, and more through one OpenAI-compatible API
- Pooled Capacity: OpenRouter routes requests across providers, which can soften the impact of any single provider's rate limits for high-volume applications (it does not eliminate rate limits entirely)
- Transparent Pricing: Compare real-time costs across models and choose the best option for your use case
- Fallback Support: Automatically switch to alternative models if your primary choice is unavailable
Getting Started
- Sign up at openrouter.ai
- Create an API key at openrouter.ai/keys
- Add credits at openrouter.ai/credits (minimum $5)
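Once you have a key, store it in an environment variable rather than in source code. A minimal sketch of building the request headers OpenRouter expects, failing fast if the key is missing (the variable name `OPENROUTER_API_KEY` matches the examples below; use whatever your deployment convention is):

```python
import os

def auth_headers():
    """Build the headers OpenRouter expects, failing fast if the key is missing."""
    key = os.environ.get("OPENROUTER_API_KEY")
    if not key:
        raise RuntimeError("Set the OPENROUTER_API_KEY environment variable first")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```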
Official SDKs
OpenRouter provides official SDKs for easy integration:
- TypeScript SDK: `npm install @openrouter/sdk` (type-safe access to 300+ models)
- Python SDK: full documentation at openrouter.ai/docs/sdks/python
- Vercel AI SDK: `npm install @openrouter/ai-sdk-provider` (for Vercel AI SDK users)
- OpenAI SDK Compatible: works with existing OpenAI SDKs by changing the base URL
Using OpenRouter with JavaScript/TypeScript
Using the Official SDK:
```typescript
import OpenRouter from "@openrouter/sdk";

const openrouter = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const completion = await openrouter.chat.completions.create({
  model: "anthropic/claude-3.5-sonnet",
  messages: [
    { role: "user", content: "Explain quantum computing in simple terms" }
  ],
});

console.log(completion.choices[0].message.content);
```

Using fetch (alternative method):
```javascript
async function chat(message) {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      "model": "anthropic/claude-3.5-sonnet",
      "messages": [{ "role": "user", "content": message }]
    })
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```

Using OpenRouter with Python
Using the Official SDK:
```shell
pip install openrouter
```

```python
from openrouter import OpenRouter
import os

with OpenRouter(api_key=os.getenv("OPENROUTER_API_KEY")) as client:
    response = client.chat.send(
        model="anthropic/claude-3.5-sonnet",
        messages=[
            {"role": "user", "content": "Write a haiku about programming"}
        ]
    )
    print(response.choices[0].message.content)
```

The official SDK is auto-generated from OpenRouter's OpenAPI specs, provides full type safety with Pydantic validation, and supports streaming and async operations.
Using OpenAI SDK (alternative):
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-api-key"
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Write a haiku about programming"}]
)

print(response.choices[0].message.content)
```

Available Models
OpenRouter provides access to many models. Here are some popular ones:
- OpenAI: `openai/gpt-4`, `openai/gpt-3.5-turbo`
- Anthropic: `anthropic/claude-opus-4`, `anthropic/claude-sonnet-4.5`
- Google: `google/gemini-3-pro-preview`, `google/gemini-3-flash-preview`
- Meta: `meta-llama/llama-4-scout`, `meta-llama/llama-4-maverick`
- Mistral: `mistralai/mistral-large`, `mistralai/mistral-medium`
You can find the complete list of available models at https://openrouter.ai/models.
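Model IDs follow a `provider/model-name` pattern, which makes it straightforward to filter a catalog of IDs by provider. A small illustration (the IDs come from the list above; the helper function is not part of any SDK):

```python
def models_by_provider(model_ids, provider):
    """Return the model IDs whose prefix (the part before the slash) matches `provider`."""
    return [m for m in model_ids if m.split("/", 1)[0] == provider]

catalog = [
    "openai/gpt-4",
    "anthropic/claude-opus-4",
    "anthropic/claude-sonnet-4.5",
    "meta-llama/llama-4-scout",
]

print(models_by_provider(catalog, "anthropic"))
# → ['anthropic/claude-opus-4', 'anthropic/claude-sonnet-4.5']
```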
Model Fallbacks
OpenRouter automatically falls back to alternative models if your primary choice is unavailable. This works with both the SDK and raw API:
Using the SDK:
```typescript
import OpenRouter from "@openrouter/sdk";

const openrouter = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const completion = await openrouter.chat.completions.create({
  models: [
    "anthropic/claude-opus-4",
    "openai/gpt-4",
    "google/gemini-3-pro-preview"
  ],
  route: "fallback",
  messages: [
    { role: "user", content: "Hello!" }
  ],
});
```

Using fetch:
```javascript
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    "models": [
      "anthropic/claude-opus-4",
      "openai/gpt-4",
      "google/gemini-3-pro-preview"
    ],
    "route": "fallback",
    "messages": [{ "role": "user", "content": "Hello!" }]
  })
});
```

If the first model returns an error (rate limits, downtime, etc.), OpenRouter automatically tries the next one.
Best Practices
- Store API Keys Securely: Always use environment variables, never hardcode API keys
- Monitor Usage: Check your OpenRouter dashboard regularly to track spending
- Handle Errors: Implement proper error handling for API failures
- Set Timeouts: Add reasonable timeouts to prevent hanging requests
- Use Appropriate Models: Choose models based on your use case (speed vs. quality vs. cost)
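The error-handling and timeout advice above can be sketched as a small retry wrapper with exponential backoff. This is a generic pattern, not part of any OpenRouter SDK; in real code you would catch the specific HTTP or timeout exceptions your client raises rather than the blanket `Exception` used here:

```python
import time

def with_retries(call, attempts=3, base_delay=1.0):
    """Invoke `call()` up to `attempts` times, backing off exponentially between failures."""
    last_error = None
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:  # illustrative only: narrow this to real error types
            last_error = exc
            if attempt < attempts - 1:
                time.sleep(base_delay * (2 ** attempt))
    raise last_error
```

You would wrap your API call in a closure, e.g. `with_retries(lambda: chat("Hello"))`, so transient failures are retried before surfacing to the caller.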
Pricing
OpenRouter uses a pay-as-you-go model with transparent pricing. Each model has different costs per token. Generally:
- Smaller models (like Llama 3 8B) are very cheap
- Medium models (like GPT-3.5 Turbo) are moderate
- Large models (like GPT-4 or Claude Opus) are more expensive
You can see real-time pricing for each model at https://openrouter.ai/models.
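Because pricing is per token, estimating a request's cost is a simple multiplication. A sketch with made-up prices (the $3/$15 per million tokens below are hypothetical; check openrouter.ai/models for real, current numbers):

```python
def estimate_cost(prompt_tokens, completion_tokens, input_price_per_m, output_price_per_m):
    """Estimate request cost in dollars from token counts and per-million-token prices."""
    return (prompt_tokens * input_price_per_m
            + completion_tokens * output_price_per_m) / 1_000_000

# Hypothetical prices: $3 per million input tokens, $15 per million output tokens.
print(round(estimate_cost(1_000, 500, 3.0, 15.0), 4))  # → 0.0105
```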
Conclusion
OpenRouter is a practical solution for working with multiple AI models without the overhead of managing separate integrations. The examples above should get you started, but there's much more to explore including custom parameters, fine-tuned models, and advanced routing strategies.
Check out the OpenRouter documentation for more details.