Knowledge your AI can actually use.
GnosisLLM is the knowledge layer that connects your proprietary data — docs, code, research — to any AI assistant. Semantic search, MCP server generation, read-and-write memory, enterprise-grade security. One library, every product.
At a glance
The Knowledge Layer for AI
- Status: Library — used internally across the Neomanex portfolio
- Built for: Platform engineers and AI teams building RAG or multi-agent systems
Why it matters
Positioning pillars
Collections of Knowledge
One source of truth for AI.
Upload any knowledge — documentation, code, research, policies, transcripts. GnosisLLM organises it into collections, handles chunking, and produces embeddings automatically. Enterprise knowledge stops living in silos and starts living in the AI.
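The chunking step above can be sketched in a few lines. This is an illustration of the mechanics, not GnosisLLM's actual implementation: overlapping fixed-size windows are a common default, because overlap keeps sentences that straddle a boundary retrievable from at least one chunk.

```python
# Sketch of overlapping fixed-size chunking, the kind of preprocessing a
# knowledge layer performs before embedding. Not GnosisLLM's real API.

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

doc = "GnosisLLM organises knowledge into collections. " * 20
chunks = chunk_text(doc, size=200, overlap=50)
# The tail of each chunk repeats as the head of the next,
# so no sentence is lost at a boundary.
```

In practice the library also has to track chunk provenance (source file, position) so retrieved chunks can be cited; the sketch omits that bookkeeping.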
Semantic Search & Hybrid Retrieval
Answers from your data.
Query by meaning, not keyword. Hybrid retrieval combines vector similarity with lexical search so technical terms, product names, and numbers all return the right chunks. Grounded responses, not hallucinations.
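One common way to combine the two signals is Reciprocal Rank Fusion (RRF). The sketch below is illustrative, not GnosisLLM's actual fusion method: each result list contributes a score based on rank alone, so vector and lexical results can be merged without tuning incompatible score scales.

```python
# Hybrid retrieval sketch: fuse vector and lexical rankings with RRF.
# Illustrative only; GnosisLLM's actual fusion strategy is not shown here.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked result lists; documents high in either list win."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Vector search understands meaning; lexical search nails exact tokens
# like product names and version numbers.
vector_hits  = ["intro.md", "api.md", "faq.md"]
lexical_hits = ["changelog-v2.md", "api.md", "intro.md"]

fused = rrf_fuse([vector_hits, lexical_hits])
```

A document that appears in both lists (here `api.md` and `intro.md`) outranks one that tops only a single list, which is exactly the behavior you want for technical terms that both retrievers can find.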
MCP Servers, Read and Write
Every agent speaks to your knowledge.
GnosisLLM generates an MCP server per collection. Any MCP-compatible client — Claude Desktop, Claude Code, Gnosari, ConvOps — can query and update the knowledge base. Read-and-write access is the point: agents enrich the knowledge while they use it.
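On the wire, both directions are ordinary MCP tool calls. The message framing below follows MCP's JSON-RPC 2.0 convention; the tool names (`query_knowledge`, `add_knowledge`) are hypothetical stand-ins, not confirmed GnosisLLM tool names.

```python
import json

def tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Read: an agent retrieves chunks relevant to a question.
# (Tool name is a hypothetical placeholder.)
read_msg = tool_call(1, "query_knowledge", {"query": "How do I rotate API keys?"})

# Write: the same agent stores what it just learned back into the collection.
write_msg = tool_call(2, "add_knowledge", {"text": "Keys rotate every 90 days."})
```

Because reads and writes share one protocol, a client that can query a collection needs no extra integration work to enrich it.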
Shared Foundation
Not customer-facing. Load-bearing.
GnosisLLM is what Gnosari, ConvOps, and NeoRouter reach for when they need embeddings, retrieval, or memory. One library; every product that needs a knowledge layer consumes the same one. Consistency across the portfolio, not duplicate infrastructure.
The mechanics
How it works
Step 1
Install the library
`pip install gnosisllm-knowledge` — or clone from GitLab for source builds. Python-first; the library ships as an installable package with a documented API surface.
Step 2
Ingest a collection
Point GnosisLLM at your source — markdown, PDFs, codebases, webpages (via NeoReader). The library handles chunking, embedding, and indexing. Every ingestion is idempotent; re-runs update, they do not duplicate.
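The idempotency guarantee can be pictured as an upsert keyed by content. This is an illustration of the principle, not GnosisLLM's internals: if each chunk's key is a hash of its content, re-ingesting the same source overwrites rather than appends.

```python
import hashlib

# Idempotent ingestion sketch: keying each chunk by a content hash turns
# re-ingestion into an upsert. Not GnosisLLM's actual storage layer.

def ingest(index: dict[str, str], chunks: list[str]) -> dict[str, str]:
    """Upsert chunks into the index keyed by content hash."""
    for chunk in chunks:
        key = hashlib.sha256(chunk.encode()).hexdigest()
        index[key] = chunk  # same content, same key: no duplicates
    return index

index: dict[str, str] = {}
ingest(index, ["alpha", "beta"])
ingest(index, ["alpha", "beta"])  # re-run: index size is unchanged
```

A changed chunk hashes to a new key, so edits flow in as updates while untouched content stays put.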
Step 3
Expose via MCP or API
A single command generates an MCP server for your collection. Agents can now query it from any MCP-compatible client. Or hit the HTTP API directly for non-MCP integrations.
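For the non-MCP path, a search call is a plain HTTP POST against a collection. The endpoint path and field names below are assumptions for illustration, not documented GnosisLLM API; the point is that the same collection an MCP agent queries is also reachable from any HTTP client.

```python
import json

# Hypothetical HTTP API sketch. The host, path, and JSON fields are
# illustrative assumptions, not the documented GnosisLLM endpoint.

def search_request(collection: str, query: str, top_k: int = 5) -> tuple[str, bytes]:
    """Build the URL and JSON body for a search call against a collection."""
    url = f"https://gnosisllm.internal/api/collections/{collection}/search"
    body = json.dumps({"query": query, "top_k": top_k}).encode()
    return url, body

url, body = search_request("docs", "deployment checklist")
# Send with any HTTP client, e.g. urllib.request or requests.
```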
Neomanex portfolio (internal adoption)
Every Neomanex product with a knowledge layer — Gnosari, ConvOps, NeoRouter — runs on GnosisLLM. Production-proven across the portfolio and millions of chunks. A library battle-tested in the same code it ships alongside.
The contrast
How we compare
Them
Build your own RAG pipeline
Us
One library handles ingestion, chunking, embeddings, hybrid retrieval, and MCP exposure.
Them
Hosted RAG service with lock-in
Us
Open-source library; you own the data, the index, and the deploy.
Them
Read-only retrieval
Us
Read AND write. Agents enrich the knowledge base while they use it.
The ecosystem
Fits into the portfolio
Related products
Questions, answered

