Welcome to the official docs for ATOM — your local AI copilot that runs on your own machine.
No cloud. No accounts. No limits.
## 🧠 What is ATOM?
ATOM is a local-first LLM framework built for:
- ✅ File chunking + memory storage
- ✅ Modular tool use via `::tool:` syntax
- ✅ Local LLMs via Ollama (Gemma, Mistral, etc.)
- ✅ Real-time context injection and memory recall
- ✅ Fully customizable frontend (Vite + React)
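The first bullet, file chunking for memory storage, usually means splitting a document into fixed-size pieces that overlap so context spanning a boundary survives in at least one chunk. ATOM's actual chunk sizes and storage layer are covered in the Memory docs; the function below is a minimal sketch of the approach, with the sizes chosen for illustration.

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks, each sharing `overlap`
    characters with the previous chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

# 1000 chars with size=400, overlap=50 → chunks starting at 0, 350, 700
```

Each chunk would then be embedded and written to the vector store for later recall.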
## ⚡ Features
| Feature | Status |
|---|---|
| Local LLMs via Ollama | ✅ |
| File chunking + vector memory | ✅ |
| Tool calling (reflection, TTS, etc.) | ✅ |
| Summarization engine | ✅ |
| Streaming support | 🟡 (coming soon) |
| Fully local, no cloud required | ✅ |
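Ollama serves a REST API on `localhost:11434`, so talking to a local model needs nothing beyond the standard library. This is a generic sketch of a non-streaming call to Ollama's `/api/generate` endpoint, not ATOM's internal client; the `mistral` model name assumes you have already run `ollama pull mistral`.

```python
import json
import urllib.request

# Ollama's default local endpoint — no cloud, no API key.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage: `ask_local_llm("mistral", "Summarize this chunk: ...")` returns the model's text, assuming the Ollama daemon is running.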
## 🛠 Getting Started
→ Head to the Install guide to get ATOM running on your system in minutes.
## 📂 What You'll Learn
This doc site covers:
- Install → Setup backend + frontend
- Memory → How memory works + how it's stored
- Tools → Plug-and-play tools with examples
- Dev Notes → Contributing, forking, and structure
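The Tools section documents the real plug-and-play API; as a taste of how `::tool:` markers can work, here is a toy dispatcher. The parenthesized-argument grammar, the `TOOLS` registry, and the `echo` tool are all hypothetical, invented for this sketch — ATOM's actual syntax and registration mechanism may differ.

```python
import re

def echo(arg: str) -> str:
    # Toy tool: shout the argument back.
    return arg.upper()

# Hypothetical registry mapping tool names to callables.
TOOLS = {"echo": echo}

# Matches markers like ::tool:name(argument) — an assumed grammar.
TOOL_PATTERN = re.compile(r"::tool:(\w+)\((.*?)\)")

def run_tools(text: str) -> str:
    """Replace each ::tool:name(arg) marker with that tool's output."""
    return TOOL_PATTERN.sub(lambda m: TOOLS[m.group(1)](m.group(2)), text)

print(run_tools("Result: ::tool:echo(hello)"))  # prints "Result: HELLO"
```

The dispatcher pattern keeps tools decoupled from the model loop: registering a new tool is one dictionary entry, no parser changes.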
ATOM isn't just a chatbot. It's your own AI system, built to serve you — and no one else.