Invoke Providers and Proxies via SDKs¶
Once you have deployed an LLM Provider or LLM Proxy in the AI Workspace, you can invoke it using any supported AI SDK by pointing it at the gateway's Invoke URL and authenticating with your generated API key.
The examples below apply to both providers and proxies — the only difference between the two is the Invoke URL you supply.
Prerequisites¶
- An LLM Provider or Proxy deployed to a gateway
- The Invoke URL for the deployed endpoint
- A generated API key
Authentication¶
All requests to the gateway must include your API key in the location configured in the Security tab of your provider or proxy. By default this is the X-API-Key request header, and the code examples below use that default.
Note
Depending on the SDK or provider you use, you can choose the key name and location that work best and configure them in the Security tab. See Configure Inbound Authentication.
OpenAI¶
Invoke URL format
Append /v1 to the Invoke URL shown in the console.
OpenAI SDK
Install: pip install openai
Basic chat completion:
from openai import OpenAI

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

client = OpenAI(
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is WSO2?"}],
)
print(response.choices[0].message.content)
Streaming:
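A minimal streaming sketch, assuming the same gateway setup as the basic example above; with stream=True the OpenAI SDK yields incremental delta chunks instead of one response:

```python
from openai import OpenAI

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

client = OpenAI(
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

# stream=True returns an iterator of chunks instead of a single response
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is WSO2?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```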
LangChain
Install: pip install langchain-openai
Basic invoke:
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

response = llm.invoke([HumanMessage(content="What is WSO2?")])
print(response.content)
Streaming:
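A streaming sketch using LangChain's .stream() iterator, assuming the same gateway configuration as the basic example:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

# .stream() yields message chunks as tokens arrive
for chunk in llm.stream([HumanMessage(content="What is WSO2?")]):
    print(chunk.content, end="", flush=True)
```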
Anthropic¶
Anthropic SDK
Install: pip install anthropic
Note
The Anthropic SDK sends the api_key parameter as the x-api-key header automatically. No additional header configuration is needed.
Basic message:
import anthropic

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

client = anthropic.Anthropic(
    api_key=API_KEY,
    base_url=INVOKE_URL,
)

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What is WSO2?"}],
)
print(response.content[0].text)
Streaming:
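A streaming sketch with the Anthropic SDK, assuming the same setup as the basic example; messages.stream() manages the connection as a context manager:

```python
import anthropic

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

client = anthropic.Anthropic(
    api_key=API_KEY,
    base_url=INVOKE_URL,
)

# text_stream yields plain text deltas as they arrive
with client.messages.stream(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What is WSO2?"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```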
LangChain
Install: pip install langchain-anthropic
Basic invoke:
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = ChatAnthropic(
    model="claude-sonnet-4-5",
    api_key=API_KEY,
    anthropic_api_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
    max_tokens=1024,
)

response = llm.invoke([HumanMessage(content="What is WSO2?")])
print(response.content)
Streaming:
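A streaming sketch via LangChain's .stream() iterator, assuming the same gateway configuration:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = ChatAnthropic(
    model="claude-sonnet-4-5",
    api_key=API_KEY,
    anthropic_api_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
    max_tokens=1024,
)

# .stream() yields message chunks as tokens arrive
for chunk in llm.stream([HumanMessage(content="What is WSO2?")]):
    print(chunk.content, end="", flush=True)
```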
Gemini¶
Google Gen AI SDK
Install: pip install google-genai
Note
The Gemini SDK normally sends its key as x-goog-api-key, which the gateway does not use for authentication. Pass api_key="placeholder" to satisfy the SDK and supply the real gateway key via X-API-Key in HttpOptions.
Basic content generation:
from google import genai
from google.genai import types as genai_types

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

http_options = genai_types.HttpOptions(
    base_url=INVOKE_URL,
    headers={"X-API-Key": API_KEY},
)
client = genai.Client(api_key="placeholder", http_options=http_options)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is WSO2?",
)
print(response.text)
Streaming:
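A streaming sketch with the Gemini SDK, assuming the same placeholder-key setup as the basic example; generate_content_stream yields partial responses as they arrive:

```python
from google import genai
from google.genai import types as genai_types

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

http_options = genai_types.HttpOptions(
    base_url=INVOKE_URL,
    headers={"X-API-Key": API_KEY},
)
client = genai.Client(api_key="placeholder", http_options=http_options)

# generate_content_stream yields partial responses as they arrive
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents="What is WSO2?",
):
    print(chunk.text, end="", flush=True)
```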
LangChain
Install: pip install langchain-google-genai
Basic invoke:
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    google_api_key=API_KEY,
    client_options={"api_endpoint": INVOKE_URL},
    additional_headers={"X-API-Key": API_KEY},
)

response = llm.invoke([HumanMessage(content="What is WSO2?")])
print(response.content)
Streaming:
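A streaming sketch via LangChain's .stream() iterator, assuming the same client options as the basic example:

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    google_api_key=API_KEY,
    client_options={"api_endpoint": INVOKE_URL},
    additional_headers={"X-API-Key": API_KEY},
)

# .stream() yields message chunks as tokens arrive
for chunk in llm.stream([HumanMessage(content="What is WSO2?")]):
    print(chunk.content, end="", flush=True)
```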
Mistral AI¶
Mistral exposes both a native SDK and an OpenAI-compatible API at /v1.
Mistral SDK
Install: pip install mistralai httpx
Note
The Mistral SDK sends its API key as a Bearer token. Since the gateway requires X-API-Key, the example below registers an httpx event hook that injects this header on every outgoing request.
Basic chat completion:
import httpx
from mistralai import Mistral

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

def _inject_api_key(request):
    request.headers["X-API-Key"] = API_KEY

http_client = httpx.Client(
    event_hooks={"request": [_inject_api_key]},
)

client = Mistral(
    api_key=API_KEY,
    server_url=INVOKE_URL,
    client=http_client,
)

response = client.chat.complete(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "What is WSO2?"}],
)
print(response.choices[0].message.content)
Streaming:
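A streaming sketch with the Mistral SDK, assuming the same httpx hook as the basic example; chat.stream() returns a stream of server-sent events that wrap delta chunks:

```python
import httpx
from mistralai import Mistral

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

def _inject_api_key(request):
    request.headers["X-API-Key"] = API_KEY

http_client = httpx.Client(
    event_hooks={"request": [_inject_api_key]},
)

client = Mistral(
    api_key=API_KEY,
    server_url=INVOKE_URL,
    client=http_client,
)

# chat.stream() returns server-sent events wrapping delta chunks
with client.chat.stream(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "What is WSO2?"}],
) as stream:
    for event in stream:
        content = event.data.choices[0].delta.content
        if content:
            print(content, end="", flush=True)
```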
OpenAI SDK
Install: pip install openai
Mistral's API is OpenAI-compatible. Append /v1 to the Invoke URL.
Basic chat completion:
from openai import OpenAI

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

client = OpenAI(
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

response = client.chat.completions.create(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "What is WSO2?"}],
)
print(response.choices[0].message.content)
Streaming:
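A streaming sketch over the OpenAI-compatible endpoint, assuming the same gateway setup; stream=True makes the SDK yield incremental delta chunks:

```python
from openai import OpenAI

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

client = OpenAI(
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

# stream=True returns an iterator of chunks instead of a single response
stream = client.chat.completions.create(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "What is WSO2?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```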
LangChain
Install: pip install langchain-openai
LangChain's ChatOpenAI works with Mistral's OpenAI-compatible endpoint. Append /v1 to the Invoke URL.
Basic invoke:
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

llm = ChatOpenAI(
    model="mistral-small-latest",
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

response = llm.invoke([HumanMessage(content="What is WSO2?")])
print(response.content)
Streaming:
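A streaming sketch via LangChain's .stream() iterator against the OpenAI-compatible endpoint, assuming the same configuration:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>/v1"
API_KEY = "<your-gateway-api-key>"

llm = ChatOpenAI(
    model="mistral-small-latest",
    api_key=API_KEY,
    base_url=INVOKE_URL,
    default_headers={"X-API-Key": API_KEY},
)

# .stream() yields message chunks as tokens arrive
for chunk in llm.stream([HumanMessage(content="What is WSO2?")]):
    print(chunk.content, end="", flush=True)
```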
Azure OpenAI¶
Note
The model / azure_deployment parameter must be your Azure deployment name, not the underlying model name.
OpenAI SDK
Install: pip install openai
Basic chat completion:
from openai import AzureOpenAI

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

client = AzureOpenAI(
    api_key=API_KEY,
    azure_endpoint=INVOKE_URL,
    api_version="2024-10-21",
    default_headers={"X-API-Key": API_KEY},
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "What is WSO2?"}],
)
print(response.choices[0].message.content)
Streaming:
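A streaming sketch with the AzureOpenAI client, assuming the same deployment and API version as the basic example; stream=True yields incremental delta chunks:

```python
from openai import AzureOpenAI

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

client = AzureOpenAI(
    api_key=API_KEY,
    azure_endpoint=INVOKE_URL,
    api_version="2024-10-21",
    default_headers={"X-API-Key": API_KEY},
)

# stream=True returns an iterator of chunks instead of a single response
stream = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "What is WSO2?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```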
LangChain
Install: pip install langchain-openai
Basic invoke:
from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = AzureChatOpenAI(
    azure_deployment="<your-deployment-name>",
    api_version="2024-10-21",
    azure_endpoint=INVOKE_URL,
    api_key=API_KEY,
    default_headers={"X-API-Key": API_KEY},
)

response = llm.invoke([HumanMessage(content="What is WSO2?")])
print(response.content)
Streaming:
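A streaming sketch via LangChain's .stream() iterator, assuming the same deployment configuration:

```python
from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = AzureChatOpenAI(
    azure_deployment="<your-deployment-name>",
    api_version="2024-10-21",
    azure_endpoint=INVOKE_URL,
    api_key=API_KEY,
    default_headers={"X-API-Key": API_KEY},
)

# .stream() yields message chunks as tokens arrive
for chunk in llm.stream([HumanMessage(content="What is WSO2?")]):
    print(chunk.content, end="", flush=True)
```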
Azure AI Foundry¶
Note
The model / azure_deployment parameter must be your Azure deployment name.
OpenAI SDK
Install: pip install openai
Basic chat completion:
from openai import AzureOpenAI

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

client = AzureOpenAI(
    api_key=API_KEY,
    azure_endpoint=INVOKE_URL,
    api_version="2024-05-01-preview",
    default_headers={"X-API-Key": API_KEY},
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "What is WSO2?"}],
)
print(response.choices[0].message.content)
Streaming:
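A streaming sketch with the AzureOpenAI client, assuming the same deployment and API version as the basic example; stream=True yields incremental delta chunks:

```python
from openai import AzureOpenAI

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

client = AzureOpenAI(
    api_key=API_KEY,
    azure_endpoint=INVOKE_URL,
    api_version="2024-05-01-preview",
    default_headers={"X-API-Key": API_KEY},
)

# stream=True returns an iterator of chunks instead of a single response
stream = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "What is WSO2?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```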
LangChain
Install: pip install langchain-openai
Basic invoke:
from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = AzureChatOpenAI(
    azure_deployment="<your-deployment-name>",
    api_version="2024-05-01-preview",
    azure_endpoint=INVOKE_URL,
    api_key=API_KEY,
    default_headers={"X-API-Key": API_KEY},
)

response = llm.invoke([HumanMessage(content="What is WSO2?")])
print(response.content)
Streaming:
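A streaming sketch via LangChain's .stream() iterator, assuming the same deployment configuration:

```python
from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage

INVOKE_URL = "https://<gateway-host>/<context>"
API_KEY = "<your-gateway-api-key>"

llm = AzureChatOpenAI(
    azure_deployment="<your-deployment-name>",
    api_version="2024-05-01-preview",
    azure_endpoint=INVOKE_URL,
    api_key=API_KEY,
    default_headers={"X-API-Key": API_KEY},
)

# .stream() yields message chunks as tokens arrive
for chunk in llm.stream([HumanMessage(content="What is WSO2?")]):
    print(chunk.content, end="", flush=True)
```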