Local-first doesn't mean local-only

Daneel is built around a simple principle: your data should stay on your machine. WebGPU, Ollama, and Gemini Nano all honor that promise — inference runs locally, nothing leaves your device.

But local models have limits. A 3B model running on your GPU can summarize a page and answer straightforward questions. It can't reliably reason through a complex legal document, orchestrate a five-step tool chain across Stripe and Supabase, or analyze a 50-page PDF with nuance.

Sometimes you need a frontier model. That's what Claude is for.

Bring your own API key

Open Settings, go to the Claude tab, paste your Anthropic API key, and click Save & Test. Daneel encrypts the key with AES-256-GCM and stores it in Chrome's local storage. The only time it leaves your device is as the x-api-key header on requests to Anthropic's API.
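The storage scheme can be sketched roughly like this. In the browser Daneel would presumably use the Web Crypto API; this sketch uses Node's crypto module so it runs standalone, and the function names are illustrative, not Daneel's actual code:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hypothetical sketch of AES-256-GCM key storage. The IV and auth tag
// are stored alongside the ciphertext so the blob is self-describing.
function encryptApiKey(apiKey: string, secret: Buffer): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce, fresh per encryption
  const cipher = createCipheriv("aes-256-gcm", secret, iv);
  const ciphertext = Buffer.concat([cipher.update(apiKey, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // 128-bit integrity tag
  // This base64 blob is what would land in Chrome's local storage.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decryptApiKey(blob: string, secret: Buffer): string {
  const raw = Buffer.from(blob, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const ciphertext = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", secret, iv);
  decipher.setAuthTag(tag); // decryption fails loudly if the blob was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

Because GCM is authenticated, a corrupted or tampered blob throws on decryption rather than yielding a garbage key.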

Three models are available:

  • Haiku 4.5 — fast and affordable, great for quick tasks
  • Sonnet 4.6 — the balanced choice for most workloads
  • Opus 4.6 — maximum capability for the hardest problems

Each model shows its per-token pricing directly in the settings panel, so you know exactly what you're paying before you start.

What Claude unlocks

With Claude as your active provider, every Daneel feature gets an upgrade:

  • Page Q&A handles long, complex pages with deep reasoning
  • Site RAG produces more nuanced answers from search results
  • Vault conversations can work through dense documents with multi-turn follow-ups
  • MCP tool calling uses Claude's native tool_use format — the most reliable tool calling in Daneel, capable of multi-step chains across multiple servers
  • Agents become dramatically more capable with a frontier model behind them
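The tool-calling flow above leans on Anthropic's Messages API content blocks: the model emits a tool_use block, the client runs the named tool, and replies with a tool_result block. A minimal dispatch sketch, with a hypothetical registry standing in for Daneel's MCP servers:

```typescript
// Content-block shapes follow Anthropic's Messages API tool-use format.
type ToolUseBlock = {
  type: "tool_use";
  id: string;                       // e.g. "toolu_01..."
  name: string;
  input: Record<string, unknown>;
};

type ToolResultBlock = {
  type: "tool_result";
  tool_use_id: string;              // must echo the tool_use block's id
  content: string;
};

// Hypothetical local registry; in Daneel these handlers would proxy
// to MCP servers (Stripe, Supabase, etc.).
const registry: Record<string, (input: Record<string, unknown>) => string> = {
  get_time: () => new Date().toISOString(), // stand-in for a real MCP tool
};

function runTool(block: ToolUseBlock): ToolResultBlock {
  const handler = registry[block.name];
  const content = handler ? handler(block.input) : `unknown tool: ${block.name}`;
  return { type: "tool_result", tool_use_id: block.id, content };
}
```

Multi-step chains fall out of this loop: each tool_result goes back to the model, which may respond with another tool_use until it has what it needs.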

After each response, the chat shows token counts and estimated cost — so you always know what a conversation is costing.
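The cost estimate is simple arithmetic over the token counts the API reports. A sketch, with placeholder per-million-token rates (the real prices are the ones shown in Daneel's settings panel):

```typescript
// Rates are hypothetical placeholders for illustration only.
type Pricing = { inputPerMTok: number; outputPerMTok: number };

function estimateCost(p: Pricing, inputTokens: number, outputTokens: number): number {
  // Prices are quoted per million tokens, so scale each count down first.
  return (inputTokens / 1e6) * p.inputPerMTok + (outputTokens / 1e6) * p.outputPerMTok;
}

// Example: a made-up $3/$15 per-MTok model, 1M input + 100k output tokens.
const cost = estimateCost({ inputPerMTok: 3, outputPerMTok: 15 }, 1_000_000, 100_000);
```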

The privacy trade-off, stated clearly

Claude is a cloud API. Your prompts leave your machine and reach Anthropic's servers. In Daneel's privacy model, Claude gets "Third-party cloud" residency — the lowest privacy tier.

We don't hide this. The settings panel states it plainly: your data leaves your device when you use the Anthropic API. The model weights are proprietary and not auditable.

This is a conscious choice, not a default. Daneel starts local. Claude is there for when local isn't enough — a complex analysis, a critical tool chain, a document that demands the best model available. Use it deliberately, for the tasks that justify it, and keep your default provider local.

The right tool for the right job

Daneel's multi-provider architecture means you're never locked in. Use WebGPU for everyday browsing. Switch to Ollama for heavier local tasks. Reach for Claude when the stakes or complexity demand it. One click switches providers — your conversations, vaults, and agents stay the same.
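The "switch without losing anything" behavior can be sketched as a session that holds its history independently of the active provider. Names here are illustrative, not Daneel's actual API:

```typescript
type ProviderId = "webgpu" | "ollama" | "gemini-nano" | "claude";

// Each provider carries its privacy residency, mirroring Daneel's tiers.
interface Provider {
  id: ProviderId;
  residency: "on-device" | "local" | "third-party-cloud";
}

// Conversations are stored on the session, not the provider, so switching
// backends changes only where the next inference runs.
class Session {
  constructor(public provider: Provider, public history: string[] = []) {}

  switchTo(next: Provider): Session {
    return new Session(next, this.history); // history survives the switch
  }
}
```

Vaults and agents would follow the same pattern: keyed to the user, parameterized by whichever provider is active.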

Local-first, cloud-when-needed. That's the balance.