Kores OS

The operating system for AI-native businesses.

The Problem

AI agents are fast, but they hallucinate when they lack deep, structural context. Unstructured files aren't enough.

01

Context Loss

Standard agents read one file at a time, missing the broader architectural principles and business logic.

02

Stale Patterns

Documentation decays. Without a living object graph, AI falls back on generic, outdated training data.

03

Tool Blindness

Scripts and deployment commands are buried in wikis instead of exposed as executable capabilities.

How Kores OS Solves This

01

Object-Oriented RAG

Instead of chunking text blindly, Kores OS maps your agency into an explicit relational graph. When an agent asks about your architecture, it receives the exact typed schema.
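
The "explicit relational graph" idea can be sketched with plain typed records. Everything below (WorkspaceObject, the field names, the sample objects) is illustrative only, not the actual Kores OS schema:

```python
from dataclasses import dataclass, field

# Hypothetical typed object graph: each workspace entity is a record
# with a type, attributes, and explicit links to related objects.
@dataclass
class WorkspaceObject:
    id: str
    type: str                                   # e.g. "service", "decision", "icp"
    attrs: dict = field(default_factory=dict)
    links: list = field(default_factory=list)   # ids of related objects

def neighbors(graph: dict, obj_id: str) -> list:
    """Return the typed objects directly linked to obj_id."""
    return [graph[i] for i in graph[obj_id].links if i in graph]

# A two-node graph: an architecture decision linked to a service
graph = {
    "svc-api": WorkspaceObject("svc-api", "service",
                               {"lang": "python"}, ["dec-001"]),
    "dec-001": WorkspaceObject("dec-001", "decision",
                               {"why": "keep API local-first"}, ["svc-api"]),
}

print([o.type for o in neighbors(graph, "svc-api")])  # → ['decision']
```

When an agent asks "why is svc-api built this way?", a graph walk like `neighbors` returns the linked decision object itself, not a fuzzy text chunk.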

02

Agent Skill Registry

It exposes internal workflows directly into the LLM context via MCP (Model Context Protocol). Agents don't just read; they execute.
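
A skill registry can be illustrated as a name-to-callable map that an agent resolves at run time. This is a toy sketch of the pattern, not the MCP SDK or the Kores OS API, and `deploy_staging` is a made-up skill:

```python
# Minimal sketch of a skill registry: named callables exposed as
# executable capabilities instead of being buried in a wiki.
SKILLS = {}

def skill(name):
    """Decorator that registers a function under a skill name."""
    def wrap(fn):
        SKILLS[name] = fn
        return fn
    return wrap

@skill("deploy_staging")
def deploy_staging(branch: str) -> str:
    # A real skill would shell out to your actual deploy script.
    return f"deployed {branch} to staging"

# An agent resolves the skill by name and executes it
print(SKILLS["deploy_staging"]("main"))  # → deployed main to staging
```

The point of the pattern: the agent discovers what it can *do* from the registry, rather than guessing commands from prose documentation.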

03

Append-Only Memory

Maintains runtime logs (log.md, wip.md) that carry context between independent coding sessions. No more starting from scratch.
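
The append-only pattern is simple to sketch: every session writes timestamped entries to the end of log.md and never rewrites earlier ones. The helper below is illustrative, not the Kores OS implementation:

```python
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def append_log(workspace: Path, entry: str) -> None:
    """Append a timestamped entry to log.md; never rewrite history."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with (workspace / "log.md").open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {entry}\n")

# Demo against a throwaway workspace directory
ws = Path(tempfile.mkdtemp())
append_log(ws, "decision: chose SQLite for the local index")
append_log(ws, "wip: dashboard filters half-done")
print((ws / "log.md").read_text())
```

Because entries are only ever appended, the next session can replay the tail of the log to reconstruct where the last one left off.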

04

Local-First Dashboard

A fast React client that visualizes your entire semantic workspace locally. It requires zero cloud databases and keeps all IP secure.

05

Extensible Architecture

Built on a modular pipeline that can ingest any file format, API response, or webhook, so your knowledge graph grows alongside your technology stack.
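
A pluggable ingestion pipeline usually reduces to connectors that share one interface and emit normalized objects. The `Connector` protocol and the connector classes below are hypothetical, shown only to illustrate the pattern:

```python
import json
from typing import Iterator, Protocol

class Connector(Protocol):
    """Every source type implements one method: ingest -> objects."""
    def ingest(self, source: str) -> Iterator[dict]: ...

class MarkdownConnector:
    def ingest(self, source: str) -> Iterator[dict]:
        # Trivially simplified: one object per heading line.
        for line in source.splitlines():
            if line.startswith("#"):
                yield {"type": "section", "title": line.lstrip("# ")}

class JsonApiConnector:
    def ingest(self, source: str) -> Iterator[dict]:
        yield {"type": "api_response", "data": json.loads(source)}

objects = list(MarkdownConnector().ingest("# Architecture\ntext\n# Decisions"))
print([o["title"] for o in objects])  # → ['Architecture', 'Decisions']
```

Adding a new source type means adding one connector class; downstream graph-building code never changes.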

Real-World Benchmark

Four real tasks: lead qualification, content creation, GTM planning, and a tech audit. Warm scores are measured from live retrieval; cold scores are the baseline.

Standard Agent: 2/12
Kores OS: 12/12
Live Demo

Your workspace, structured.

A real Kores OS output. Browsable. Queryable. Generated from a live 7,528-file workspace.

Powered by real data

Quick Start

Install the daemon and bootstrap your workspace within minutes.

```shell
git clone https://github.com/JaxsonDLauw/kores-os
cd kores-os/kores-knowledge-os
pip install -e .
cp configs/kores-local.example.json configs/kores-local.json
# Edit kores-local.json — set workspace_root to your path
python run.py bootstrap
cd ../dashboard
npm install
npm run dev
```
Any model. Your data.

Runs on Ollama. Falls back to any API.

One environment variable separates local inference from cloud. The knowledge layer doesn't change when your provider does.

Local — Ollama
Your workspace data never leaves your machine. Run Llama 3.2, Qwen3, or Gemma 3 locally via Ollama. Zero API costs. Full privacy. Works offline.
```shell
# Start local inference
ollama serve
python run.py chat \
  "qualify Ashurst Melbourne as a client"
# loads ICP, research-template, disqualifiers
# responds with real context, not generic output
```
Cloud — Any API
Point at Claude, GPT, or Gemini by changing one environment variable. Same context injection. Switch providers without touching your knowledge graph.
```shell
# Switch to cloud provider
export KORES_LLM_BASE_URL=https://api.anthropic.com/v1
export KORES_LLM_MODEL=claude-sonnet-4-6
python run.py chat \
  "plan GTM outreach for litigation firms"
# same context injection. different model.
```
Real scenarios

Where Kores OS changes the outcome

Not theoretical. These are the four situations where starting without context costs you the most.

The developer with three AI tools
Every session you re-explain your stack to Claude, then to Cursor, then to ChatGPT. The same context block, written 200 times. Kores OS runs once and produces CLAUDE.md, .cursorrules, and llms.txt from your actual workspace. Every tool gets full context on load. You stop re-explaining.
The agency managing multiple clients
Eight clients. Eight ICPs. Eight brand voices. One misloaded config and your agent writes the wrong brand voice to the wrong client. Kores OS keeps each client workspace separate. Bootstrap once per client. Agents load the right context automatically. No manual file loading before every task.
The compliance-heavy business
Legal, healthcare, finance. Your documents cannot go to the cloud — GDPR, privilege, HIPAA. Kores OS runs entirely locally. Pair it with Ollama and a local model and your agents reason about sensitive documents on-premise. No data leaves the building. The benchmark scores don't change.
The founder who can't afford to lose context
Decisions made in January affect architecture in April. Your agents don't know why you made those choices — so they undo them. Kores OS extracts every architectural decision as a typed object. Agents consult the decision graph before acting. They stop working against your past self.
Three ways to get started

Set up Kores OS your way

Choose the path that matches how you work.

01 — Self-serve
Read the docs

Full documentation covering every command, config option, and connector. For developers who prefer to read and do it themselves.

Read docs ↗
Recommended
02 — Guided wizard
Generate your setup prompt

Answer three questions. Get a complete prompt. Paste it into Claude Code, Codex, Cursor, or Antigravity — your agent installs and configures everything, scans your machine, and wires every tool you use.

03 — Done for you
Book a call with Kores

We configure Kores OS for your business — your tools, your connectors, your workflow. Built and run by the agency that created it.

Book a session →
Built for your stack

Want to know exactly where Kores OS fits?

Describe your current setup — what tools you use, how your agents work, what keeps breaking. We'll map exactly where Kores OS plugs in and what changes on day one.

No pitch. No deck. Just a direct answer.
Most AI products are made of AI.
Kores OS is made for AI.
The model is Coke. We're the fridge.
Ollama, Claude, Cursor, Codex — they all work better when they start with a structured, queryable workspace. That's what Kores OS builds.

Generate your setup prompt
