Platform · Company Brain · Knowledge Graph
Your company brain — read by every agent.
Every document, plan, and improvement your team writes is fed automatically into a living company brain. Claude, Cursor, ChatGPT, Gemini, internal agents — every connected agent reads the same brain over MCP, reaches the right answer faster, and admits when the brain doesn't know instead of hallucinating. No skill files. No per-tool context drift. Self-learning, fully automated, RBAC end-to-end.
Enterprise plan only
What it is
A self-learning brain for your company. Stable Baseline reads every document, diagram, plan, and improvement you write — extracts the entities and the relationships between them, clusters them into themes, and writes wiki pages summarising each theme. The result is a single graph that any MCP-connected agent — public (Claude, ChatGPT, Gemini) or internal — can query directly. No skill files. No per-tool context sync. No ETL.
Underneath, the brain still ships everything Stable Baseline's full-text + vector search has always shipped — those layers stay live and untouched. The Company Brain adds the typed-entity, community, and wiki layers on top, plus a retrieval surface that answers questions a bag-of-chunks index can't.
And — just as important — the brain teaches connected agents when to say “I don't know”. If the graph has no relevant entities for a question, the retrieval surface returns that absence cleanly instead of forcing a confident-but-wrong answer out of the model. Less guessing. More grounding.
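A hypothetical sketch of the behaviour just described — the function and parameter names are illustrative, not the product's API:

```python
def answer(question, retrieve):
    """Ground the agent: answer only from retrieved graph context."""
    context = retrieve(question)  # e.g. entities + relations + summaries
    if not context:
        # The graph has nothing relevant: return the absence cleanly
        # instead of letting the model guess.
        return "I don't know — the company brain has no material on this."
    return f"Based on {len(context)} grounded sources: ..."
```

The point is the empty branch: absence is a first-class result, so the agent can decline instead of confabulating.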
No skill files. Ever.
Stop maintaining one context file per agent. Every connected client — public (ChatGPT, Claude, Gemini) or internal — reads the same brain via MCP. Update your docs once; every agent sees it instantly.
Right answers faster — honest when there isn't one
Graph traversal returns a tight, grounded neighbourhood — entities, relations, and summaries — so agents reach the right answer in fewer hops. When the brain has nothing on a question, agents say so instead of hallucinating.
Self-learning, fully automated
Every save triggers an incremental rebuild. No tagging, no curation. The brain stays current as your workspace grows.
Interactive Explorer
WebGL canvas — pan, zoom, and pivot the whole brain to any entity's neighbourhood. Click a node to jump to the source document.
Zero cold-start
No seeded ontology. Your vocabulary emerges from your own content. The types, relationships, and themes are unique to your workspace.
One brain per scope
The brain is built per project within a workspace. That gives you control over how you segment company knowledge:
- One brain per company. Run a single workspace + project for the whole company. Every team feeds it; every agent reads from it. Best for small to mid-sized organisations.
- One brain per department. Spin up a workspace per function — HR, Finance, Operations, Engineering, Sales, Legal — each with its own brain. Departments stay isolated by default, sharable on demand.
- One brain per project. Inside a workspace, each project gets its own scoped brain. Great for confidential projects (M&A, restructure) where even other teammates in the same workspace shouldn't see the content.
- Mixed. The four-gate scope model lets you mix freely — a top-level “company brain” project plus a confidential side project, both in the same workspace, with different agents authorised differently.
RBAC end-to-end
The four-gate scope model governs what enters the brain, and org-scoped tenant isolation governs who can read it — with each connected agent authorised independently.
Why it matters
Search alone struggles the moment your question requires connecting ideas that live in different documents — and that's exactly when humans tend to ask for help. The Company Brain is built for those moments. Concrete ways teams use it:
New-hire onboarding
Day one of a senior hire: ask 'what is this project about?' and get the auto-curated wiki summary instead of grep-walking 40 docs.
Architecture discovery
Click any technical term in the Explorer to see every doc, diagram, plan, and improvement that mentions it. Connect the dots without reading everything.
Compliance impact analysis
Walk from one control (e.g. SOC 2 CC6.1) to every artifact that satisfies it — docs, diagrams, evidence, gaps logged as improvements.
RFC / proposal comparison
Ask 'compare proposals A, B, C' and the graph traverses each, surfacing where they overlap, conflict, and which existing entities they each touch.
The KG Explorer
Open the Explorer from any project that has the Knowledge Graph enabled. It renders every entity in scope as a node, every typed relationship as an edge, and the communities discovered by clustering as soft coloured zones. Drag to pan, scroll to zoom, and click any node to focus on its neighbourhood.
Search vs Knowledge Graph
Both retrieval modes coexist on Stable Baseline — the graph never replaces search. Use whichever fits the question. As a rough heuristic:
Vector + full-text search
Best when the answer lives in one place.
Returns the chunks most similar to your query. Already great in Stable Baseline and unchanged when the graph is enabled.
Knowledge Graph
Best when the answer requires connecting ideas.
Walks the entity graph, returning every related artifact regardless of where it lives. Cross-document, cross-modality, with provenance.
They run together
Enabling the graph never changes search behaviour — the full-text and vector layers stay live and untouched, and the retrieval pipeline can draw on both for a single question.
Five layers, one graph
The Knowledge Graph is a five-layer system that sits on top of your existing chunked + embedded content. Each layer adds capability the layer below couldn't serve on its own.
- Layer 1 · Sources & chunks. Every document, diagram, improvement, plan, and task is split into chunks. This layer is what powers the existing full-text + vector search.
- Layer 2 · Entities & edges. An extraction agent reads each chunk and emits typed entities (e.g. OWASP ASVS, checkout flow, Stripe webhook) and the relationships between them. Both entities and relations are open-vocabulary — the extractor invents the types as it reads, scoped to your org.
- Layer 3 · Communities. A Louvain-style clustering algorithm groups densely-connected entities into communities. Each community represents a coherent area of your workspace — “Authentication”, “Compliance scans”, “Billing & credits”.
- Layer 4 · Curated wikis. An LLM writes a short CDMD wiki page for each community summarising its theme. New teammates can read the wiki page for “what is this project about?” instead of grep-walking dozens of docs.
- Layer 5 · Retrieval & agents. The chat surface and MCP tools combine all four layers into a single retrieval pipeline that supports four modes: local (chunk-centric), global (community summaries + wikis), graph (entity-neighbourhood walk), and path (shortest path between two entities).
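The five layers above can be pictured as plain data that reference one another. A hypothetical sketch — the layer names come from the list, but the field names and example values are assumptions, not the product's actual schema:

```python
from dataclasses import dataclass, field

# Layer 1: a chunk of a source document (powers full-text + vector search).
@dataclass
class Chunk:
    doc_id: str
    text: str

# Layer 2: open-vocabulary entities and typed edges between them.
@dataclass
class Entity:
    name: str
    entity_type: str          # invented by the extractor, e.g. "payment-integration"
    source_chunks: list = field(default_factory=list)

@dataclass
class Edge:
    src: str                  # entity name
    rel: str                  # open-vocabulary relation
    dst: str

# Layer 3: a community of densely connected entities.
@dataclass
class Community:
    label: str                # e.g. "Billing & credits"
    members: list = field(default_factory=list)

# Layer 4: an LLM-written wiki page summarising one community.
@dataclass
class WikiPage:
    community: str
    summary: str

stripe = Entity("Stripe webhook", "payment-integration", ["doc-12#3"])
checkout = Entity("checkout flow", "process", ["doc-12#1"])
edge = Edge("checkout flow", "notifies-via", "Stripe webhook")
billing = Community("Billing & credits", ["Stripe webhook", "checkout flow"])
wiki = WikiPage("Billing & credits", "How payments and webhooks fit together.")
```

Layer 5 is then just a retrieval pipeline that picks which of these structures to walk for a given mode.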
Four-gate scope model
The Knowledge Graph is strictly opt-in. Every code path — the UI, every edge function, every MCP tool — checks four gates in order. Any single “no” is a hard stop.
- Tier entitlement. Your subscription tier must include the feature. Free and Pro plans are blocked at this gate.
- Org toggle. An org admin must turn the feature on for the organisation in settings.
- Scope walk. Each workspace and project is independently opted in. The scope is walked bottom-up: document → folder → project → workspace → org. The first non-inherit setting wins. Default is off.
- Storage cap. Each plan has a per-org KG storage cap. Hitting it pauses new builds (it never deletes existing data) and surfaces a warning banner in the UI.
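The four gates can be sketched as a single check. A minimal illustration, assuming made-up function and argument names — the gate order and "first non-inherit wins" rule are from the list above:

```python
INHERIT = "inherit"

def kg_enabled(tier_has_kg, org_toggle_on, scope_chain, storage_used, storage_cap):
    """Return (allowed, reason). Any single "no" is a hard stop.

    scope_chain is the bottom-up walk: document -> folder -> project
    -> workspace -> org; each entry is "on", "off", or "inherit".
    """
    # Gate 1: tier entitlement (Free and Pro are blocked here).
    if not tier_has_kg:
        return False, "tier"
    # Gate 2: org admin toggle.
    if not org_toggle_on:
        return False, "org-toggle"
    # Gate 3: scope walk — first non-inherit setting wins; default is off.
    setting = next((s for s in scope_chain if s != INHERIT), "off")
    if setting != "on":
        return False, "scope"
    # Gate 4: storage cap — hitting it pauses new builds.
    if storage_used >= storage_cap:
        return False, "storage-cap"
    return True, "ok"
```

Note how a project-level "on" wins even if the workspace above it is "off", because the walk is bottom-up: `kg_enabled(True, True, ["inherit", "inherit", "on", "off", "on"], 10, 100)` allows the build.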
MCP tools
Any MCP-connected agent (Claude Code, Cursor, ChatGPT, etc.) gains four new tools once Knowledge Graph is enabled for the project:
- kg_search — unified retrieval across all four modes. The default mode is local for chunk retrieval; pass mode: "global" for community summaries, mode: "graph" for an entity neighbourhood, or mode: "path" for the shortest path between two entities.
- kg_get_entity — fetch the full record for a single entity, including its types, aliases, and outgoing/incoming edges.
- kg_related_documents — list documents connected to a given entity, ranked by edge weight.
- kg_scope_status — check whether a target (workspace, project, folder, document) is currently in scope, with the reason if it's not.
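Under MCP, an agent invokes these as standard JSON-RPC tools/call requests. A minimal sketch of the payloads — the tool names and the mode parameter come from the list above, while the other argument names (query, from, to, entity) and all example values are assumptions:

```python
import json

def tool_call(req_id, name, arguments):
    # MCP tool invocations are JSON-RPC 2.0 "tools/call" requests.
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Default local mode: chunk retrieval.
r1 = tool_call(1, "kg_search", {"query": "how do checkout refunds work?"})

# Global mode: community summaries + wikis — "what is this project about?".
r2 = tool_call(2, "kg_search", {"query": "what is this project about?",
                                "mode": "global"})

# Path mode: shortest path between two entities (argument names assumed).
r3 = tool_call(3, "kg_search", {"mode": "path",
                                "from": "checkout flow",
                                "to": "Stripe webhook"})

# Documents connected to one entity, ranked by edge weight.
r4 = tool_call(4, "kg_related_documents", {"entity": "SOC 2 CC6.1"})

print(json.dumps(r2, indent=2))
```

Because the tools travel over plain MCP, any connected client — Claude Code, Cursor, ChatGPT — issues the same requests with no per-tool adapter.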
Enabling the Knowledge Graph
On the Enterprise plan, follow these steps:
- Open Org settings → Features and enable Knowledge Graph at the organisation level. (Requires owner / admin role.)
- In each workspace, open Workspace settings → Knowledge Graph and flip the workspace into scope.
- For each project, leave the default (inherit → on once the workspace is on) or override per-project under Project settings → Knowledge Graph.
- Open the project's Knowledge Graph tab and click Build. The first build runs the extraction + normalization + clustering + wiki pipeline; subsequent edits trigger incremental updates automatically.
Strictly additive
The graph layers are built on top of your existing chunked + embedded content. Full-text and vector search keep working exactly as before, whether the graph is on or off.
Security model
- Tenant isolation. Every KG row is org-scoped at the database layer. The same isolation that protects your documents protects your graph.
- No content filtering. Once a document is in scope, every chunk contributes equally — no system-side “skip this” rule. Curators have full control via scope.
- Zero-retention model providers. The agents that perform extraction, normalization, and wiki authoring run through ZDR-contracted model providers. Your content is never used to train a model.