# Documentation
Kores OS is a lightweight knowledge orchestration layer that structures your ideas, captures your architecture, and ensures every AI session picks up exactly where the last one left off.
## Why Kores OS? (Market Comparison)
Most AI agents hallucinate context, ignore old architectural decisions, and require you to explain your stack every single time you open a new session. We built Kores OS to fix these specific enterprise bottlenecks while outperforming traditional architectures.
| Tool / Approach | Performance & Latency | Context Precision | Best For (Use Cases) |
|---|---|---|---|
| Raw LLMs (ChatGPT, Claude Web) | Slow setup: requires 10-30 mins per session to re-explain architecture, rules, and current state. | None: zero cross-session retention; completely bounded by the current conversation window. | One-off scripts, isolated functions, brainstorming. Not viable for long-term project iteration. |
| Standard RAG (LangChain, vector DBs) | High inference latency: dynamic vector-similarity searches (dense embeddings) add unpredictable delay per query. | Fuzzy / inconsistent: vulnerable to "lost in the middle" syndrome; pulls overlapping chunks without strict structural relationships. | Q&A over massive, unorganized documentation. Not ideal for code, where logic requires deterministic accuracy. |
| Agentic IDEs (Cursor, Windsurf) | Fast execution: low-latency editing, but limited to the current user's local context and open files. | Good, but siloed: fails to consistently enforce organizational rules, strategic goals, or cross-team architectural decisions. | Solo developers and rapid prototyping. Breaks down when multiple agents or devs need shared, strict project state. |
| Kores OS | Zero inference latency: a ~20s bootstrap pre-compiles context into static files (e.g. `.cursorrules`, `CLAUDE.md`) that agents read instantly. | Deterministic: adapters feed the exact schema of rules, memories, and skills directly into the LLM system prompt. | Agency operations, enterprise multi-agent workflows, compliance-heavy isolated environments, and massive codebases. |
## Performance Advantage
By shifting from dynamic search (RAG) to static compilation, we eliminate the embedding compute overhead during active pairing sessions. Instead of hoping a vector search retrieves the right file chunk, Kores OS deterministically compiles your business logic into standard formats whenever the project changes, so agents start every session with the full architecture already in context.
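The compile-once, read-many idea can be pictured with a minimal sketch. The target file name (`CLAUDE.md`) is a real adapter output, but the `compile_context` helper and the rule/memory structure below are illustrative assumptions, not the actual Kores OS implementation:

```python
from pathlib import Path

def compile_context(rules, memories, out_path):
    """Pre-compile project knowledge into one static context file.

    Runs once, when the project changes. Agents later read the file
    directly, so no embedding or vector search happens at query time.
    """
    sections = ["# Project Rules"]
    sections += [f"- {r}" for r in rules]
    sections.append("# Memories")
    sections += [f"- {m}" for m in memories]
    Path(out_path).write_text("\n".join(sections) + "\n")

# Compile at change-time...
compile_context(
    rules=["Use Python 3.12", "No circular imports"],
    memories=["Auth service migrated to JWT in v2.1"],
    out_path="CLAUDE.md",
)

# ...and at session start the agent simply reads the static file.
context = Path("CLAUDE.md").read_text()
```

The design trade-off is that freshness is bounded by how recently you re-ran the compile step, in exchange for zero retrieval latency and deterministic contents.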
## Quick Start
Install Kores OS on your local machine and wire it into your tools in under a minute.
```shell
git clone https://github.com/JaxsonDLauw/kores-os
cd kores-os/kores-knowledge-os
pip install -e .

python run.py install     # Scans machine, builds ~/.kores/
cp configs/kores-local.example.json configs/kores-local.json
# Edit configs/kores-local.json to set your workspace_root path...
python run.py bootstrap   # Catalogs workspace & generates adapters
```
Once bootstrapped, adapter files are automatically placed at the root of your workspace.
One more step for Claude Desktop users: open Claude Desktop → Settings → Profile → Personal Preferences and add this instruction: "At the start of every conversation, read CLAUDE.md and AGENTS.md from your workspace root using your filesystem tools. Use them to inform every response." This tells Claude to load your context automatically in every new session.
## Core Concepts
### Workspace Cataloging
Kores OS recursively scans your project folder, detecting languages, frameworks, and architecture patterns. It outputs a deterministic graph of your codebase.
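A simplified sketch of how such a scan could work. The extension map and output shape here are illustrative only; the real catalog is produced by `bootstrap` and inspects far more signals (lockfiles, config files, framework markers):

```python
from collections import Counter
from pathlib import Path

# Illustrative extension map; a real detector would also inspect
# lockfiles, config files, and framework markers.
EXT_LANGS = {".py": "Python", ".ts": "TypeScript", ".go": "Go", ".rs": "Rust"}

def catalog(root):
    """Recursively scan a workspace and count files per detected language."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in EXT_LANGS:
            counts[EXT_LANGS[path.suffix]] += 1
    return dict(counts)
```

Because the scan walks the file tree directly rather than sampling, the same workspace always yields the same catalog, which is what makes the downstream graph deterministic.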
### Memory & WIP
Instead of keeping task states in chat histories, Kores OS enforces a wip.md and pending.md state machine. When an agent opens your repo, it checks the pending tasks immediately.
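The exact schema of these files is project-specific; a typical pair might look like this (contents illustrative):

```markdown
<!-- wip.md: the task currently in flight -->
## In Progress
- Refactor auth middleware to use JWT (started 2024-05-01)

<!-- pending.md: queued tasks an agent checks when it opens the repo -->
## Pending
- [ ] Add rate limiting to /api/login
- [ ] Migrate user table to UUID primary keys
```

Because the state lives in files rather than chat history, any agent (or human) that opens the repo sees the same queue.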
### Skills
Automate repetitive behaviors. A skill is a standardized YAML + Markdown file that agents parse to learn how to execute tasks securely (e.g., SEO drafting, PR reviews, code audits).
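As a hedged illustration of the YAML + Markdown shape, a PR-review skill might look like the following. The field names here are hypothetical, not the actual Kores OS skill schema:

```yaml
# pr-review.skill.yaml -- hypothetical field names for illustration
name: pr-review
description: Review a pull request against project rules
inputs:
  - pr_url
steps: |
  1. Read CLAUDE.md for architecture rules and conventions.
  2. Diff the PR against those rules.
  3. Post findings as structured review comments.
```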
## CLI Reference
Run these commands as `python run.py [COMMAND]` from the `kores-knowledge-os` root.
- `install`: Scans your machine for AI tools and projects. Builds `~/.kores/machine.md` with detected tools, skills, and recent projects. Run once on first install. No config file required.
- `update-machine`: Refreshes `~/.kores/machine.md` from the current machine state. Run after installing new AI tools.
- `bootstrap`: Full rebuild. Catalogs the workspace, discovers skills, runs linting, and regenerates adapters.
- `generate-adapters`: Quickly regenerates `CLAUDE.md`, `.cursorrules`, `llms.txt`, etc., and places them in your workspace root.
- `place-adapters`: Copies generated adapter files to your workspace root. Useful if you need to re-place them without running a full bootstrap.
- `query "topic"`: Semantic and exact-match search against your catalog.
- `chat "question"`: AI chat powered by your workspace context (requires LLM setup).
- `sync-shared`: Syncs high-confidence canonical knowledge to `~/.kores/shared/` so new projects start with context from previous projects.
- `benchmark`: Runs the 4-task evaluation suite to test retrieval performance.
- `lint`: Checks your Kores OS files for contradictions or staleness.
## Adapters Guide
Kores OS doesn't force you into a proprietary editor. It builds context files that today's popular tools read natively.
- `.cursorrules`: Automatically compiled rules for the Cursor IDE.
- `CLAUDE.md`: Context-injection file for the Claude Code CLI and Claude desktop apps.
- `llms.txt`: General-purpose structured prompt text for Codex, Anthropic, or OpenAI APIs.
- `ANTIGRAVITY.md`: Specialized routing file for Antigravity.
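For a sense of scale, an `llms.txt` adapter is just plain structured Markdown. The project details below are invented for illustration; the actual file is generated from your catalog:

```markdown
# my-project

> Payment processing service. Python 3.12, FastAPI, Postgres.

## Rules
- Never bypass the payments gateway when touching billing code.
- All new endpoints require rate limiting.

## Key Docs
- [Architecture decisions](docs/adr.md)
```

Because each adapter is a standard text file in the tool's expected location, no plugin or extension is needed on the agent side.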