naxxen

Models

Tested models and provider compatibility.

naxxen works at the API level — it compresses the text content of your request regardless of which model processes it. The models below are the ones we actively test against in our CI pipeline.
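Because naxxen operates on the request itself, sending a request through it looks the same as calling the provider directly. A minimal stdlib-only sketch; the base URL below is a placeholder assumption, not a real naxxen endpoint, and the request body is simply the provider's standard Chat Completions format:

```python
import json
import urllib.request

# Placeholder base URL -- substitute your own naxxen deployment.
NAXXEN_BASE = "https://naxxen.example.com/openai/v1"

# Standard OpenAI-style request body; naxxen compresses the text
# content in transit, so nothing model-specific changes here.
body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this document."}],
}).encode()

req = urllib.request.Request(
    f"{NAXXEN_BASE}/chat/completions",
    data=body,
    headers={
        "Authorization": "Bearer $OPENAI_API_KEY",  # your real key here
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted here.
```

Swapping the `model` field (or pointing `NAXXEN_BASE` at an Anthropic- or Google-shaped path) is all that changes per provider.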

Tested models

Provider    Model                   Status
OpenAI      gpt-4o-mini             Tested
OpenAI      gpt-4o                  Tested
OpenAI      gpt-5.4-mini            Tested
OpenAI      gpt-5.4                 Tested
Anthropic   claude-haiku-4-5        Tested
Anthropic   claude-sonnet-4-6       Tested
Anthropic   claude-opus-4-6         Tested
Google      gemini-2.5-flash        Tested
Google      gemini-2.5-flash-lite   Tested
Google      gemini-2.5-pro          Tested

Other models

Any model from these three providers that uses the same API endpoints should work. The models above are just the ones we run automated smoke tests against after every deploy.

If you use a model not on this list (e.g., o3-mini, claude-3.5-sonnet, gemini-2.0-flash), it will still be routed correctly as long as the request format matches the provider's standard API.

Model detection

naxxen detects the provider primarily from the request path and headers, not the model name. Model-name-based detection is a fallback:

  • gpt-*, o3*, o4*, chatgpt-* → OpenAI
  • claude-* → Anthropic
  • Model in URL path (e.g., /models/gemini-2.5-flash:generateContent) → Google

This means custom/fine-tuned model names work as long as the request path or headers identify the provider.