Your browser AI just learned to use tools
Until now, Daneel could read and understand web pages, search entire sites, and chat with your documents. That's powerful — but it stopped at the boundary of your browser tab.
With MCP support, that boundary disappears. Daneel now speaks the Model Context Protocol — an open standard that lets AI models call external tools and services directly. Ask your AI to check a Stripe payment, query a Supabase table, look up a Notion page, or inspect a Vercel deployment — and it will do it, right from the conversation.
No copy-pasting between tabs. No switching to dashboards. Just ask.
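Under the hood, MCP speaks JSON-RPC 2.0. A sketch of what a single tool invocation looks like on the wire, per the Model Context Protocol spec (the tool name and arguments here are hypothetical, not from any real server):

```typescript
// An MCP tools/call request is a JSON-RPC 2.0 message. The method name
// "tools/call" and the { name, arguments } params shape come from the
// MCP specification; "get_invoice" and its argument are made up.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_invoice",                    // hypothetical tool name
    arguments: { invoice_id: "in_123" },    // hypothetical argument
  },
};
```

The server replies with a matching JSON-RPC result containing the tool's output, which the model then reads as part of the conversation.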
Discover, connect, use
We designed the MCP experience around three steps:
Discover. The new MCP settings panel features a curated grid of popular services — Stripe, Notion, Vercel, Supabase, Figma, Linear, Slack, and more — each with its logo, auth type, and a one-click "Connect" button. If you need something beyond the featured list, a search bar queries multiple registries in real time. You can also paste any MCP server URL directly.
Connect. Daneel handles authentication automatically. For public servers, you're connected instantly. For API-key services like Google Maps, you paste your key once and it's stored securely. For OAuth services like Stripe or Notion, Daneel runs the full OAuth2 + PKCE flow through Chrome's identity API — you authorize in a familiar browser popup, and tokens are managed, refreshed, and rotated for you.
Use. Once connected, every tool a server exposes becomes available to your AI. Ask a question that requires external data, and the LLM decides which tools to call, executes them, reads the results, and continues the conversation. Multiple tools, multiple turns, fully autonomous.
Sixteen featured integrations at launch
The MCP settings panel ships with a hand-picked grid of validated servers across categories:
- Payments: Stripe — customers, invoices, subscriptions, payment intents
- Productivity: Notion — pages, databases, search; Fibery — work management
- DevOps: Vercel — deployments, domains, logs; Cloudflare — Workers, KV, R2, D1
- Database: Supabase — tables, edge functions, storage
- Design: Figma — design files, components, tokens
- Project management: Linear — issues, projects, cycles; Atlassian — Jira + Confluence
- Communication: Slack — messages, channels, search
- Search & data: Exa — neural web search; Fiber — company enrichment; Apify — web scraping
- CRM: Salesforce — objects, contacts, opportunities
- Maps: Google Maps — places, geocoding, directions
Each server shows its authentication method, tool count, and status indicator so you know exactly what you're connecting to before you click.
Authentication done right
MCP servers use different auth schemes, and Daneel handles all of them:
- Public — no credentials needed, connect instantly
- API Key — paste once, stored in Chrome's encrypted storage, injected automatically on every request
- OAuth2 + PKCE — full authorization code flow with proof key, token refresh, and expiry tracking
- Bearer token — for services that issue long-lived tokens
Credentials are stored per-server in Chrome storage, excluded from data exports, and never leave your browser. Status indicators on each registered server tell you at a glance whether authentication is healthy, expiring, or needs attention.
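Injecting the right credential per scheme is a small, mechanical step. A sketch of how request headers might be derived from a stored auth scheme (the type and field names are illustrative, not Daneel's actual internals):

```typescript
// Illustrative per-server auth record; real storage would live in
// chrome.storage rather than in-memory objects.
type AuthScheme =
  | { kind: "public" }
  | { kind: "apiKey"; headerName: string; key: string }
  | { kind: "bearer"; token: string };

// Map a scheme to the headers attached to every request to that server.
function authHeaders(scheme: AuthScheme): Record<string, string> {
  switch (scheme.kind) {
    case "public":
      return {};
    case "apiKey":
      return { [scheme.headerName]: scheme.key };
    case "bearer":
      return { Authorization: `Bearer ${scheme.token}` };
  }
}
```

Keeping the scheme a tagged union means adding a new auth type (say, OAuth with a refreshable access token) is one more case, checked exhaustively by the compiler.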
Multi-turn tool calling
When you ask a question that requires external data, Daneel doesn't just make one tool call and stop. The tool-calling loop orchestrates a full multi-turn conversation between the LLM and the connected services:
- The LLM analyzes your question and decides which tools to call
- Daneel executes the tool calls against the MCP servers
- Results flow back to the LLM, which can ask follow-up questions or call additional tools
- The loop continues (up to five turns) until the LLM has enough information to answer
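The steps above can be sketched as a loop. The LLM and tool-runner interfaces here are stand-ins for illustration, not Daneel's actual API; only the five-turn cap comes from the post:

```typescript
interface ToolCall { name: string; args: Record<string, unknown>; }
interface LlmTurn { toolCalls: ToolCall[]; answer?: string; }

type Llm = (history: string[]) => LlmTurn;       // hypothetical model call
type ToolRunner = (call: ToolCall) => string;    // hypothetical MCP dispatch

const MAX_TURNS = 5; // the loop cap described above

function runToolLoop(llm: Llm, runTool: ToolRunner, question: string): string {
  const history: string[] = [question];
  for (let turn = 0; turn < MAX_TURNS; turn++) {
    const step = llm(history);
    // No tool calls means the model is ready to answer.
    if (step.toolCalls.length === 0) return step.answer ?? "";
    // Execute each requested tool and feed the result back as context.
    for (const call of step.toolCalls) {
      history.push(`${call.name} -> ${runTool(call)}`);
    }
  }
  return "Turn limit reached; answering with partial information.";
}
```

The key property: tool results re-enter the model's context, so a second turn can depend on what the first turn found.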
This means you can ask compound questions — "Find my latest Stripe invoice over $500 and check if the customer has an active Supabase project" — and the AI will chain the right calls together.
Works with every AI backend
MCP tool calling isn't limited to cloud APIs. We built three tool-calling strategies so every backend can participate:
- Claude uses native tool_use blocks — the most reliable path, with structured input/output
- Ollama uses OpenAI-compatible function calling — works with tool-capable models like Qwen and Llama
- WebGPU and Gemini Nano use a prompt-based strategy — tools are injected into the system prompt and the model outputs XML tool calls
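For the prompt-based strategy, the model's plain-text output has to be parsed back into a structured call. A sketch of extracting one XML-style tool call, assuming tag names like `<tool_call>` and `<arg>` purely for illustration:

```typescript
interface ParsedCall { tool: string; args: Record<string, string>; }

// Pull the first <tool_call name="..."> ... </tool_call> block out of the
// model's raw text and collect its <arg> children. Tag names are assumed.
function parseXmlToolCall(output: string): ParsedCall | null {
  const m = output.match(/<tool_call name="([^"]+)">([\s\S]*?)<\/tool_call>/);
  if (!m) return null;
  const args: Record<string, string> = {};
  const argRe = /<arg name="([^"]+)">([\s\S]*?)<\/arg>/g;
  let a: RegExpExecArray | null;
  while ((a = argRe.exec(m[2])) !== null) {
    args[a[1]] = a[2];
  }
  return { tool: m[1], args };
}
```

This path is inherently less reliable than native `tool_use` blocks, since the model must emit well-formed markup, which is exactly why native function calling is preferred where a backend supports it.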
Cloud or local, the same MCP servers work everywhere.
Scoped to vaults and agents
MCP servers aren't just global. You can attach specific servers to individual vaults or agents, creating focused tool environments:
- A vault for your finance docs can have Stripe attached — ask questions that combine your local documents with live payment data
- An agent persona for DevOps can bind Vercel and Cloudflare — it knows its role and has the right tools
Tool scoping enforces mutual exclusion: a conversation draws on either the vault's own MCP servers or its agent's, never a merge of both, keeping tool access predictable and auditable.
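That resolution rule is simple enough to sketch. The types below are illustrative, and the assumption that an attached agent's servers take precedence over the vault's own is mine, not stated in the post:

```typescript
interface Agent { name: string; mcpServers: string[]; }
interface Vault { name: string; mcpServers: string[]; agent?: Agent; }

// Mutual exclusion: use the agent's servers when an agent is attached,
// otherwise the vault's own. Never a union of the two.
function resolveServers(vault: Vault): string[] {
  return vault.agent ? vault.agent.mcpServers : vault.mcpServers;
}
```

Because the result is always exactly one list, an audit of "which tools could this conversation reach" has a single unambiguous answer.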
You stay in control
Every registered server can be toggled on or off without removing it. Tool manifests are cached locally and refreshable on demand. The settings panel shows exactly which tools each server exposes, with parameter signatures you can inspect before connecting.
Servers on your local network — localhost, private IPs, .local domains — are automatically detected and flagged, so you always know whether a tool call stays on your machine or reaches an external service.
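Classifying a server URL as local is straightforward with the cases named above. A minimal sketch covering localhost, loopback, the RFC 1918 private IPv4 ranges, and `.local` domains; the function name is illustrative:

```typescript
// Flag MCP server URLs that resolve to the local machine or network.
function isLocalNetwork(serverUrl: string): boolean {
  const host = new URL(serverUrl).hostname;
  if (host === "localhost" || host === "::1" || host === "[::1]") return true;
  if (host.endsWith(".local")) return true; // mDNS names
  // Loopback (127/8) and RFC 1918 private IPv4: 10/8, 172.16/12, 192.168/16
  const m = host.match(/^(\d+)\.(\d+)\.\d+\.\d+$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  return (
    a === 127 ||
    a === 10 ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168)
  );
}
```

A production check would also handle IPv6 unique-local ranges and hostnames that resolve to private addresses, which string inspection alone cannot catch.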
What's next
This is the foundation. We're working on deeper integrations — agentic RAG that combines local document search with external tool calls, Docker Companion for running MCP servers locally, and more validated servers in the featured grid.
MCP turns Daneel from a reader into a doer. Connect your first server and see what your browser AI can do.