What is LangChain?
LangChain is a framework for building applications powered by language models. It gives you building blocks for chains (sequences of LLM calls), agents (LLMs that decide which tools to use), retrieval-augmented generation, and memory. It's one of the most widely used AI frameworks, with both Python and JavaScript versions.
Under the hood, LangChain's OpenAI integration uses the standard OpenAI client. That means you can point it at EUrouter by setting the base_url parameter on ChatOpenAI. Your chains, agents, retrievers, and tools all keep working. The only difference is that every LLM call now routes through European servers.
Quick to integrate
A few lines of code is all it takes. Swap your base URL and you are routed through EUrouter.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="eur-...",
    base_url="https://api.eurouter.ai/v1",
)

response = llm.invoke("Explain GDPR Article 44")
print(response.content)
```
Get started in minutes
Follow these steps to connect your application to EUrouter.
Install LangChain
Install the LangChain OpenAI integration package.
```shell
pip install langchain-openai
```
Set the base URL
Point the ChatOpenAI client to EUrouter by setting base_url.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="eur-...",
    base_url="https://api.eurouter.ai/v1",
)
```
Run your chain
Use LangChain as usual. Requests automatically route through EU infrastructure.
```python
response = llm.invoke("Explain GDPR Article 44")
print(response.content)
```
Why use LangChain with EUrouter
Drop-in replacement
LangChain's ChatOpenAI class wraps the standard OpenAI client. Change base_url and api_key, and everything else stays the same: your chain definitions, agent configs, tool integrations, and output parsers need no changes.
- Chains, agents, retrievers, and output parsers all work without modification
- Streaming, function calling, and tool use are fully supported
- Works with ChatOpenAI and any OpenAI-compatible LLM class
- LangGraph workflows route through EUrouter the same way
Multi-model agents
LangChain makes it easy to assign different models to different parts of your pipeline. With EUrouter, you can use GPT-4o for reasoning, Claude for writing, and Mistral for quick classification, all through a single API key and all routed through Europe.
- Assign different models per chain step or agent role
- No separate provider configuration or API key management
- EUrouter's smart routing can also pick the optimal model automatically
- Compare model performance across your pipeline from one dashboard
EU-compliant RAG pipelines
If you're building RAG with LangChain, your LLM calls are often the compliance gap. Your documents and vector store might be hosted in Europe, but the inference call goes to the US. EUrouter closes that gap by keeping LLM and embedding requests within EU infrastructure.
- LLM calls for generation and summarization route through EU servers
- Embeddings API is also available through EUrouter for document indexing
- Works with any LangChain retriever (Chroma, Pinecone, pgvector, etc.)
- Keeps your full RAG pipeline within European jurisdiction