# Opencode

Configure Opencode to route LLM requests through naxxen.
Opencode is a multi-provider coding agent with configurable provider endpoints. You can route requests through naxxen by setting custom base URLs in Opencode's config.
## Configuration
Edit your Opencode config file (usually `~/.config/opencode/config.json`, or `opencode.json` in your project root). Set the provider base URL to your naxxen endpoint, with your naxxen key embedded in the URL path:
### OpenAI provider
```json
{
  "providers": {
    "openai": {
      "apiKey": "sk-proj-YOUR_OPENAI_KEY",
      "baseUrl": "https://api.naxxen.ai/nxn-sk-YOUR_NAXXEN_KEY"
    }
  }
}
```

### Anthropic provider
```json
{
  "providers": {
    "anthropic": {
      "apiKey": "sk-ant-YOUR_ANTHROPIC_KEY",
      "baseUrl": "https://api.naxxen.ai/nxn-sk-YOUR_NAXXEN_KEY"
    }
  }
}
```

### Via environment variables
If Opencode reads base URLs from environment variables:

```bash
export OPENAI_BASE_URL="https://api.naxxen.ai/nxn-sk-YOUR_NAXXEN_KEY"
export ANTHROPIC_BASE_URL="https://api.naxxen.ai/nxn-sk-YOUR_NAXXEN_KEY"
```

Check Opencode's documentation for the exact environment variable names it supports.
## How it works
The setup is identical to other integrations:

- Opencode sends requests to `https://api.naxxen.ai/nxn-sk-YOUR_KEY/...`
- naxxen strips the key, compresses the prompt, and forwards the request to the real provider
- The response streams back through naxxen to Opencode

Your provider API key is sent in standard headers and forwarded unchanged.
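The path-embedded key scheme above can be sketched in a few lines. This is an illustrative reconstruction of the routing idea, not naxxen's actual implementation; the `nxn-sk-` prefix and path layout are taken from the examples on this page.

```python
from urllib.parse import urlsplit


def split_naxxen_path(url: str) -> tuple[str, str]:
    """Split a naxxen-style URL into the embedded key and the
    remaining provider path (illustrative sketch only)."""
    parts = urlsplit(url)
    # Path looks like: /nxn-sk-YOUR_KEY/v1/chat/completions
    segments = parts.path.lstrip("/").split("/", 1)
    key = segments[0]
    rest = "/" + (segments[1] if len(segments) > 1 else "")
    if not key.startswith("nxn-sk-"):
        raise ValueError("no naxxen key found in the URL path")
    return key, rest


key, rest = split_naxxen_path(
    "https://api.naxxen.ai/nxn-sk-abc123/v1/chat/completions"
)
```

After the split, a proxy like this would authenticate the naxxen key, then forward `rest` (and your original `Authorization` header) to the real provider.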
## Verify
Make a request through Opencode and check your dashboard for compression stats.
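If you want to sanity-check the endpoint before touching Opencode, you can hit it directly. This sketch assumes the OpenAI-compatible chat completions path (`/v1/chat/completions`) and uses placeholder keys and an example model name; substitute your own values.

```shell
curl "https://api.naxxen.ai/nxn-sk-YOUR_NAXXEN_KEY/v1/chat/completions" \
  -H "Authorization: Bearer sk-proj-YOUR_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```

A normal JSON completion response indicates requests are flowing through naxxen; the request should then appear in your dashboard.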
## Set up with an AI agent
Copy this prompt and give it to your AI coding agent:

```
Read https://docs.naxxen.ai/llms.txt and configure Opencode to use
naxxen for prompt compression. My naxxen key is nxn-sk-YOUR_KEY.
Set the provider base URLs in the Opencode config.
```