AI in Inklings
AI in Inklings operates on a clear principle: your workspace, your data, your keys. There are no background AI features running without your knowledge, no content being processed on remote servers without your configuration, and no AI features required to use Inklings productively. AI is opt-in, at every layer.
This article explains how the two AI integration paths work — the built-in agent and the MCP server — and where semantic search fits into the picture.
The Two Integration Paths
Inklings provides two ways to connect AI to your workspace:
The built-in agent (Pro+Agent tier) is a conversation interface embedded in Inklings. You send messages; the agent responds. Behind the scenes, the agent has access to your workspace through a set of tools — it can search your pages, read page content, create new pages, and more. You bring your own API key (BYOK) for an LLM provider; Inklings provides the execution environment, tools, and workspace context.
The MCP server (Pro tier) exposes your workspace to external AI tools through the Model Context Protocol (MCP). Any MCP-compatible AI tool — Claude, Cursor, or another MCP client — can connect to your Inklings workspace over a local HTTP connection, using the same set of tools available to the built-in agent. You configure the connection once; the external tool does the rest.
These two paths serve different workflows. The built-in agent is for in-context creative work: brainstorming, questions about your world, help with continuity problems. The MCP server is for integrating Inklings with tools you already use, or for using an AI client you prefer.
The Built-In Agent
The agent operates as a conversation within Inklings. Open the agent panel, type a message, and the agent responds — using your workspace as its source of truth.
The agent’s capabilities come from its tool set. Tools let the agent take actions in your workspace:
| Tool | What it does |
|---|---|
| search_pages | Full-text search across your workspace |
| read_page | Reads a page’s content and metadata |
| get_page_tree | Returns the workspace hierarchy |
| get_backlinks | Lists pages that link to a target page |
| create_page | Creates a new page |
| update_page_content | Edits a page’s content |
| move_page | Moves a page in the hierarchy |
| rename_page | Renames a page (propagates to all links) |
| delete_page | Moves a page to trash |
The agent can chain tool calls across multiple turns. Ask it to “read all the chapter notes and identify subplots that dropped off” — it will search, read, and synthesize across multiple pages in a single response.
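That multi-turn chaining can be pictured as a simple loop: the model either returns tool calls (which are executed and fed back) or a final text answer. The sketch below is illustrative only — the `llm` and `tools` objects and their methods are assumptions, not Inklings' actual internals; the tool names match the table above.

```python
def run_turn(llm, tools, messages):
    """Drive one user turn, chaining tool calls until the model answers in text.

    Hypothetical sketch: `llm.complete` and `tools.execute` stand in for
    whatever client and executor the real agent uses.
    """
    while True:
        response = llm.complete(messages, tool_schema=tools.schema())
        if not response.tool_calls:
            return response.text                 # final answer for this turn
        for call in response.tool_calls:         # e.g. search_pages, read_page
            result = tools.execute(call.name, call.arguments)
            messages.append({"role": "tool", "name": call.name, "content": result})
```

A request like "read all the chapter notes and identify subplots that dropped off" would cycle through this loop several times — a search call, several read calls, then a text synthesis — before the loop exits.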
Agent team
Behind the conversation interface, the agent operates as a team. The Orchestrator is the agent you talk to — it coordinates specialists who handle specific tasks. A Researcher investigates your workspace (read-only), a Worker executes changes, and an Archivist manages what the agent remembers across sessions. You interact with the Orchestrator; the team works behind the scenes.
Memory
The agent maintains persistent memory across conversations. Facts about your world, your preferences, and patterns it notices are stored locally and retrieved when relevant. See Agent Memory.
BYOK: Bring Your Own Key
You provide the API key. Navigate to Settings → Agent → Configure Provider, select your LLM provider (Anthropic, OpenAI, xAI, Ollama, or OpenRouter), and enter your key. Inklings stores the key in your OS keychain — it’s not stored in the database or transmitted to Inklings servers.
When the agent runs, your API key is used to make requests to the provider you’ve configured. Inklings assembles the requests (system prompt, conversation history, tool schema, workspace context) and routes them to your provider. The LLM response comes back to Inklings; the agent processes it and executes any tool calls.
This means: no Inklings-managed AI subscription, no opaque model selection, and no data going through an Inklings relay. The request goes from your machine to your chosen provider, directly.
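The request flow above can be sketched as a single assembly step. Everything here is illustrative: the field names, the provider dict, and the request shape are assumptions, not the actual wire format of any provider — the point is that the key and the request go straight from your machine to the endpoint you configured.

```python
def build_request(provider, api_key, system_prompt, history, tool_schema):
    """Assemble one LLM request as described above; shapes are illustrative."""
    return {
        "url": provider["endpoint"],            # your configured provider, not a relay
        "headers": {"Authorization": f"Bearer {api_key}"},  # key from your OS keychain
        "body": {
            "system": system_prompt,            # system prompt
            "messages": history,                # conversation history + workspace context
            "tools": tool_schema,               # the tool schema from the table above
        },
    }
```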
Agent Permissions
The agent operates within the same permission model as everything else in Inklings. Tools that require write access (create, update, delete) are gated by the PagesWrite capability. Tools that read are gated by PagesRead. When you run as the workspace owner — which is the normal case for local workspaces — you have all capabilities.
The agent’s tool schema shows which tools are available and which are unavailable given the current permission context. Unavailable tools appear with a note in the schema; if the agent attempts to call one, the call is rejected before it reaches the workspace.
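The gate described above — reject the call before it reaches the workspace — can be sketched as a capability check. The PagesRead/PagesWrite names come from the article; the tool-to-capability mapping and function names below are assumptions for illustration.

```python
# Read-only tools vs. tools that mutate the workspace (names from the tool table).
READ_TOOLS = {"search_pages", "read_page", "get_page_tree", "get_backlinks"}
WRITE_TOOLS = {"create_page", "update_page_content", "move_page",
               "rename_page", "delete_page"}

def required_capability(tool):
    """Map a tool name to the capability that gates it."""
    return "PagesWrite" if tool in WRITE_TOOLS else "PagesRead"

def check_call(tool, capabilities):
    """Reject a tool call before it reaches the workspace if the capability is missing."""
    cap = required_capability(tool)
    if cap not in capabilities:
        raise PermissionError(f"{tool} requires {cap}")
```

A workspace owner holds both capabilities, so every call passes; a read-only context holding just PagesRead would see every write tool rejected at this gate.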
The MCP Server
The MCP server runs inside Inklings as a local HTTP service, bound to 127.0.0.1 on a configurable port (default 7862). No external network access is required; the server is only reachable from your own machine.
Enable the MCP server from Settings → Integrations → MCP Server. When enabled, a bearer token is generated — a 64-character cryptographic key that protects the /mcp endpoint. Copy the token and the connection URL from settings, then configure your MCP client with both.
The MCP server exposes:
- The same workspace tools available to the built-in agent
- A workspace tree resource (inklings://workspace/tree)
- Page resources (inklings://workspace/page/&lt;slug&gt;)
- A search resource (inklings://workspace/search?q=&lt;query&gt;)
External tools connect, authenticate, and then have access to your workspace through the MCP protocol. From the tool’s perspective, Inklings is a workspace server. From your perspective, your external AI tool can now read and write your Inklings pages.
The MCP server starts automatically when you open a workspace (if it was enabled when you last closed it) and stops when you close the workspace or quit the app.
Semantic Search
Semantic search — finding pages by meaning rather than exact words — is built into Inklings and requires no AI provider configuration. It runs locally using a built-in AI model.
When you search your workspace, results combine full-text search (exact and prefix word matches) with meaning-based matching. The combination is ranked using a scoring algorithm that weights title matches highest, then tag matches, then content matches.
Semantic search doesn’t send your content anywhere. The embedding model runs on your machine, processes your pages locally, and stores the results in your workspace database. No API key, no cloud service.
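The hybrid ranking described above — full-text plus meaning-based matches, with title weighted over tags over content — can be sketched as a weighted score. The field weights and result shape below are assumptions for illustration, not Inklings' actual values.

```python
# Illustrative field weights: title matches outrank tag matches outrank content matches.
FIELD_WEIGHTS = {"title": 3.0, "tags": 2.0, "content": 1.0}

def rank(results):
    """Rank search hits by a weighted blend of full-text and semantic scores.

    Each hit is a dict with 'field' (its best-matching field) plus
    'fulltext_score' and 'semantic_score' in [0, 1].
    """
    def score(hit):
        return FIELD_WEIGHTS[hit["field"]] * (hit["fulltext_score"]
                                              + hit["semantic_score"])
    return sorted(results, key=score, reverse=True)
```

Under this weighting, a moderate title match can outrank a strong content match — which matches the behavior described above.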
Feature Tiers
| Feature | Tier |
|---|---|
| All pages, editor, hierarchy, tags, types, search | Free |
| Semantic search | Free |
| MCP server | Pro |
| Built-in agent (conversation, tools) | Pro+Agent |
| Configure LLM providers | Pro+Agent |
| Skills marketplace | Pro+Agent |
The Free tier is fully functional for all PKM and creative writing workflows. AI features are additive.
See Also
- Getting Started with Agent — Setting up the built-in agent with your LLM provider
- Agent Memory — How the agent remembers context across sessions
- MCP Server — Configuring the MCP server for external AI tools
- Agent Tools — The complete list of tools available to the agent
- Agent Skills — How skills specialize the agent for different tasks
- Local-First Storage — How your workspace data is stored and protected