Models

Understanding AI models available through EUrouter.


What are models?

Models are large language models (LLMs) — AI systems trained on vast amounts of text data to understand and generate human-like responses. When you send a prompt to EUrouter, a model processes your input and generates a completion.

Different models excel at different tasks. Some are optimized for speed, others for complex reasoning. Some can process images, others specialize in code generation. EUrouter gives you access to the best available models — GPT-5, Claude, Mistral, Llama, and more — all hosted on EU infrastructure.

Browse all available models on the Models page.


Model identifiers

Each model has a unique identifier that you use in API requests:

curl https://api.eurouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $EUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

EUrouter handles routing to the appropriate EU-based provider automatically.


Capabilities

Models support different capabilities depending on their architecture and training.

Vision

Vision-enabled models can process images alongside text. Send images in your messages and the model can describe, analyze, or answer questions about them.
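As a minimal sketch, an image can be sent as a base64 data URL inside a message's content parts. This assumes the common OpenAI-compatible format with `"text"` and `"image_url"` part types; check the EUrouter API reference for the exact shape.

```python
import base64

def build_vision_message(question: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build a user message combining text and an inline image."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{encoded}"}},
        ],
    }

# Usage: pass the result as one entry in the "messages" array of a request.
msg = build_vision_message("What is in this image?", b"\x89PNG...")
```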

Tool calling

Models with tool calling (also known as function calling) can invoke external functions you define. This lets models interact with APIs, databases, or other systems.
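A tool is typically declared as a JSON Schema function definition; when the model responds with a tool call, you execute the function and return the result in a follow-up message. The sketch below uses the widely adopted `"function"` schema, and `get_weather` is a hypothetical function for illustration.

```python
import json

# Tool definitions sent alongside the request.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def get_weather(city: str) -> str:
    """Stand-in for a real weather API lookup."""
    return f"Sunny in {city}"

# The model returns tool-call arguments as a JSON string; parse and dispatch.
call_args = json.loads('{"city": "Paris"}')  # simulated model output
result = get_weather(**call_args)  # → "Sunny in Paris"
```

The function result is then sent back to the model in a new message so it can compose its final answer.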

Streaming

Streaming returns tokens as they're generated instead of waiting for the full response. This creates a more responsive experience for chat applications.
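Streamed responses usually arrive as server-sent events: one JSON chunk per `data:` line, terminated by `data: [DONE]`. This sketch parses simulated chunks in the common `chat.completion.chunk` shape, which may differ slightly on EUrouter.

```python
import json

# Simulated SSE lines as they would arrive over the wire.
raw_events = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]

text = ""
for line in raw_events:
    payload = line.removeprefix("data: ")
    if payload == "[DONE]":  # end-of-stream sentinel
        break
    chunk = json.loads(payload)
    # Each chunk carries an incremental "delta"; concatenate to rebuild the reply.
    text += chunk["choices"][0]["delta"].get("content", "")

print(text)  # → Hello!
```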

JSON output

Some models can reliably output structured JSON, making it easier to parse responses programmatically.
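As a sketch, JSON mode is commonly requested via a `response_format` field (an assumption borrowed from the OpenAI-style API; verify the exact field name in the EUrouter API reference). The constrained reply then parses directly.

```python
import json

# Request body asking the model for a JSON object response.
request = {
    "model": "gpt-5.1",
    "messages": [{"role": "user", "content": "List two EU capitals as JSON."}],
    "response_format": {"type": "json_object"},
}

reply = '{"capitals": ["Paris", "Berlin"]}'  # simulated model output
data = json.loads(reply)  # no regex scraping needed
print(data["capitals"])  # → ['Paris', 'Berlin']
```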

Reasoning

Reasoning models (like o1 or Claude with extended thinking) spend more time "thinking" before responding. They excel at complex problems that require step-by-step logic.


Pricing

Model pricing is based on token usage:

  • Input tokens — The tokens in your prompt
  • Output tokens — The tokens in the model's response (typically more expensive)

Prices are in EUR credits. More capable models generally cost more per token. Check the Models page for current pricing.
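A worked example of the arithmetic, using hypothetical per-million-token prices (real prices are on the Models page):

```python
# Hypothetical prices in EUR per 1M tokens; output tokens cost more.
PRICE_INPUT_PER_M = 2.00
PRICE_OUTPUT_PER_M = 8.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in EUR credits for a single request."""
    return (input_tokens * PRICE_INPUT_PER_M
            + output_tokens * PRICE_OUTPUT_PER_M) / 1_000_000

# 1,500 prompt tokens and 500 response tokens:
cost = request_cost(input_tokens=1_500, output_tokens=500)
print(f"€{cost:.4f}")  # → €0.0070
```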


Context length

Each model has a maximum context length — the total number of tokens it can process in a single request (input + output combined).

Larger context windows let you include more information in your prompts, but processing more tokens also means a higher cost per request.
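Since input and output share the same window, a request only fits if their combined token count stays under the model's limit. A minimal sketch, with an illustrative limit (in practice, use the provider's tokenizer or the usage data the API returns):

```python
CONTEXT_LIMIT = 128_000  # hypothetical context length for some model

def fits_context(prompt_tokens: int, max_output_tokens: int,
                 limit: int = CONTEXT_LIMIT) -> bool:
    """Check that prompt plus reserved output fits the shared window."""
    return prompt_tokens + max_output_tokens <= limit

print(fits_context(120_000, 4_000))   # → True
print(fits_context(120_000, 16_000))  # → False
```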


Choosing a model

Consider these factors:

  • Task complexity — More capable models for complex reasoning
  • Speed — Smaller models respond faster
  • Cost — Balance capability with your budget
  • Features — Check required capabilities (vision, tools, etc.)

Browse models

See all available models, capabilities, and pricing on the Models page.
