LLM Configuration

AaaS supports multiple LLM providers. You can switch providers or models at any time without changing your skills, data, or any other files.

Supported providers

  • Anthropic: Claude Opus, Sonnet, Haiku. Strong reasoning, reliable tool use.
  • OpenAI: GPT-4o, GPT-4 Turbo, o1, o3. Wide compatibility.
  • Google: Gemini 2.5 Pro, Flash. Fast responses, good multilingual support.
  • Ollama: Llama, Mistral, Phi, Qwen. Free, runs entirely on your hardware.
  • OpenRouter: all of the above and more. Single API key, access to many models.
  • Azure: GPT-4o, GPT-4 Turbo. Enterprise compliance, data residency.

Configuration via CLI

# Set provider and key in one command
aaas config --provider anthropic --key sk-ant-api03-...

# Set model separately
aaas config --model claude-sonnet-4-20250514

# View current settings
aaas config --show

# Remove a provider's credentials
aaas config --remove anthropic

You can also configure everything visually on the dashboard's Settings page.
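For example, moving a running service from Anthropic to OpenAI uses the same commands shown above; the model identifier is taken from the providers table, though the exact identifiers available to your account may differ:

```shell
# Switch provider and model in place; skills and data are untouched.
aaas config --provider openai --key sk-...
aaas config --model gpt-4o
```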

Environment variables

Environment variables take priority over stored keys:

export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GOOGLE_API_KEY=AIza...
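The precedence rule can be sketched in plain shell. This is an illustration of the behavior, not AaaS internals: prefer the environment variable when it is set, otherwise fall back to the stored value.

```shell
# Minimal sketch of key precedence: env var wins, stored key is the fallback.
effective_key() {
  # $1 = value from the environment, $2 = value stored on disk
  echo "${1:-$2}"
}

effective_key "" "sk-stored"          # env unset: prints sk-stored
effective_key "sk-from-env" "sk-stored"  # env set: prints sk-from-env
```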

API key storage

Keys configured via the CLI or dashboard are stored in ~/.aaas/credentials.json (in your home directory, shared across all workspaces). Environment variables always take priority over these stored keys.

Choosing a model

  • Production services: Claude Sonnet 4 or GPT-4o. Good balance of quality and speed.
  • Complex reasoning: Claude Opus 4 or o1. Better at multi-step planning and nuanced decisions.
  • High-volume / low-cost: Claude Haiku 4.5 or GPT-4o Mini. Fast and affordable for high-traffic agents.
  • Privacy: Ollama with a local model. Nothing leaves your machine.
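As an example, pointing AaaS at a local Ollama model uses the same commands as any hosted provider. This assumes the CLI accepts a provider without --key for local backends, and the model name llama3 is illustrative; use whichever model you have pulled locally:

```shell
# Ollama needs no API key; inference runs entirely on your machine.
aaas config --provider ollama
aaas config --model llama3
```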