EUrouter

Completions

Text completion endpoints

Create text completion

POST
/api/v1/completions

Authorization

bearerAuth
Authorization: Bearer <token>

API key in format: eur_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

In: header
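
The header above can be assembled in any HTTP client. A minimal Python sketch (the key value here is a placeholder, not a real credential):

```python
# Build the Authorization header expected by EUrouter.
# Keys follow the documented format: "eur_" plus 36 characters.
API_KEY = "eur_" + "x" * 36  # placeholder only; substitute your real key

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```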

Request Body

application/json

model*string

Model identifier (e.g., "openai/gpt-3.5-turbo-instruct")

prompt*string

The prompt to generate completions for

models?array<string>

Fallback model list (not yet implemented)

provider?

Provider routing preferences

reasoning?

Reasoning parameters (for reasoning models)

transforms?array<string>

Prompt transforms to apply

stream?boolean

Enable streaming responses

max_tokens?integer

Maximum tokens to generate

temperature?number

Sampling temperature (0-2)

seed?integer

Random seed for deterministic generation

top_p?number

Nucleus sampling probability (0-1)

top_k?integer

Top-k sampling (0 = disabled)

frequency_penalty?number

Frequency penalty (-2 to 2)

presence_penalty?number

Presence penalty (-2 to 2)

repetition_penalty?number

Repetition penalty multiplier

stop?

Stop sequences

logit_bias?

Token logit biases

logprobs?integer

Number of logprobs to return (0-5)

top_logprobs?integer

Number of top logprobs to return (0-20)

min_p?number

Minimum probability threshold (0-1)

top_a?number

Top-a sampling parameter (0-1)

user?string

End-user identifier for abuse detection
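
The documented numeric ranges above can be checked client-side before a request is sent. A small sketch (the helper name `build_completion_payload` is illustrative, not part of the API):

```python
def build_completion_payload(model: str, prompt: str, **options) -> dict:
    """Assemble a /api/v1/completions request body, validating the
    documented numeric ranges for the sampling parameters."""
    ranges = {
        "temperature": (0, 2),
        "top_p": (0, 1),
        "min_p": (0, 1),
        "top_a": (0, 1),
        "frequency_penalty": (-2, 2),
        "presence_penalty": (-2, 2),
        "logprobs": (0, 5),
        "top_logprobs": (0, 20),
    }
    payload = {"model": model, "prompt": prompt}
    for key, value in options.items():
        lo, hi = ranges.get(key, (None, None))
        if lo is not None and not (lo <= value <= hi):
            raise ValueError(f"{key}={value} outside [{lo}, {hi}]")
        payload[key] = value
    return payload
```

Parameters without a documented range (e.g. `max_tokens`, `seed`) pass through unchecked.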

Response Body

application/json

curl -X POST "https://api.eurouter.ai/api/v1/completions" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "string",
    "prompt": "string"
  }'
{
  "id": "string",
  "object": "text_completion",
  "created": 0,
  "model": "string",
  "system_fingerprint": "string",
  "choices": [
    {
      "text": "string",
      "index": 0,
      "finish_reason": "stop",
      "logprobs": {
        "tokens": [
          "string"
        ],
        "token_logprobs": [
          0
        ],
        "top_logprobs": [
          {
            "property1": 0,
            "property2": 0
          }
        ],
        "text_offset": [
          0
        ]
      }
    }
  ],
  "usage": {
    "prompt_tokens": -9007199254740991,
    "completion_tokens": -9007199254740991,
    "total_tokens": -9007199254740991
  }
}
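
Given the response shape above, extracting the generated text and token accounting is straightforward. A minimal sketch (the helper name `extract_completion` is illustrative):

```python
def extract_completion(response: dict) -> dict:
    """Pull the generated text, finish reason, and total token count
    out of a /api/v1/completions response body."""
    choice = response["choices"][0]
    return {
        "text": choice["text"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": response["usage"]["total_tokens"],
    }
```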