
Configure LLM Providers

Available on the Pro+Agent tier.

The Inklings agent is model-agnostic. You can bring your own key (BYOK) for a cloud provider, connect through OpenRouter, or run a local Ollama instance. This guide walks through each path.

Go to Settings → Agent to open the provider configuration. If no provider has been configured yet, a guided setup flow presents two starting paths: Local Model (Free) and a cloud provider. Choose the path that fits your setup.

If you’ve already configured a provider, the settings form shows your current provider, model, and API key status.

Configure a cloud provider (Anthropic, OpenAI, xAI)

The steps below use Anthropic as the example; OpenAI and xAI follow the same flow with their own keys.

  1. Select Anthropic from the provider dropdown.
  2. Click Configure next to the API Key field.
  3. Enter your Anthropic API key.
  4. Click Save. The key is validated against Anthropic’s API before being stored.
  5. Select a model from the model dropdown (default: claude-sonnet-4-6).

Your key is stored in the OS keychain — never in the settings file. The settings file records only a boolean flag indicating that a key is configured.
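Under this scheme, the settings file might look like the following excerpt (the field names besides `api_key_configured` and `agent.model`, which the docs mention elsewhere, are illustrative rather than the app's confirmed schema):

```json
{
  "agent": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-6",
    "api_key_configured": true
  }
}
```

Note that no key material appears anywhere in the file — only the boolean flag.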

Connect through OpenRouter

OpenRouter provides access to 100+ models through a single OAuth connection — no manual key entry required.

  1. Select OpenRouter from the provider dropdown.
  2. Click Connect via OpenRouter.
  3. Your system browser opens to an OpenRouter authorization page. Sign in and approve access.
  4. The browser redirects back to Inklings automatically. The settings panel shows your connection as active.

To cancel a pending connection, click Cancel before completing authorization in the browser. This clears the pending state without storing any credentials.

Run a local model with Ollama

Ollama runs models entirely on your machine — no cloud, no API key, no usage fees.

  1. Install Ollama from ollama.com if you haven’t already.
  2. Start Ollama and confirm it is running.
  3. In Inklings, select Ollama (Local) from the provider dropdown.
  4. The status section shows a green dot and “Running (vX.Y.Z)” when Ollama is detected on the default endpoint (http://localhost:11434).
  5. The model picker shows recommended models filtered by your hardware tier. Models with an “Installed” badge are ready to use. Models without one show a Download button.
  6. Click Download on any model to start a streaming download. A progress bar shows download percentage. When complete, the model automatically becomes the active selection.

If Ollama is running on a different machine or port:

  1. Click Advanced in the Ollama settings section.
  2. Update the Endpoint field with your custom URL (e.g., http://192.168.1.100:11434).
  3. Tab out of the field — settings save automatically on blur.
  4. Click Test Connection to verify the custom endpoint is reachable.
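A connection test like the one behind the Test Connection button can be sketched as a plain HTTP probe of Ollama's `/api/version` route, which is part of its public API (how the app itself implements the check is an assumption):

```python
import json
import urllib.error
import urllib.request

def ollama_status(endpoint: str, timeout: float = 2.0) -> dict:
    """Probe an Ollama endpoint via its version API and report reachability."""
    try:
        with urllib.request.urlopen(f"{endpoint}/api/version", timeout=timeout) as resp:
            version = json.load(resp).get("version", "unknown")
        return {"running": True, "version": version}
    except (urllib.error.URLError, OSError, ValueError):
        return {"running": False, "version": None}

# An unreachable endpoint simply reports not running:
print(ollama_status("http://127.0.0.1:1"))  # {'running': False, 'version': None}
```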
To manage an existing key:

  - View key status: Settings → Agent — the key row shows “Configured” or “Not configured”.
  - Replace a key: click Configure, enter the new key, click Save.
  - Remove a key: click Remove (trash icon) on the key row and confirm if prompted.

After removing a key, api_key_configured returns to false in settings and the agent falls back to the stub provider. Sending a message shows “Error: LLM provider not configured.”

If key validation fails, the key is not stored and an error message explains why:

  - “Invalid API key”: the key was rejected by the provider. Check for typos.
  - “Rate limited by provider”: the provider is rate-limiting validation attempts. Wait and retry.
  - “Network error: connection refused”: the device is offline or the provider endpoint is unreachable.

Changing the provider dropdown immediately updates the model selector to show that provider’s models. Changing the model updates only agent.model — the provider and API key are unaffected.

Settings persist across app restarts. Provider, model, and scheduled activity interval are all written to disk atomically when saved.
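The atomic-write pattern the docs describe is commonly implemented as write-to-temp-then-rename, so readers never observe a half-written file. A sketch of that pattern (not the app's actual implementation):

```python
import json
import os
import tempfile

def save_settings_atomically(path: str, settings: dict) -> None:
    """Write settings to a temp file in the same directory, flush to disk,
    then rename over the target. os.replace is atomic on POSIX and Windows."""
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f, indent=2)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise

# Example: persist a (hypothetical) settings payload.
save_settings_atomically("settings.json", {"agent": {"provider": "ollama", "model": "llama3.2"}})
```

Writing the temp file in the same directory as the target matters: `os.replace` is only atomic when both paths are on the same filesystem.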
