Local workspace indexing with semantic search. Give your AI agents long-term memory of your entire codebase — fast, private, and fully local. GPU acceleration supported if available.
AI coding agents have no long-term memory of your codebase. Every session starts from scratch, with no persistent knowledge of its structure across conversations.
Finding relevant code requires manual searching. AI agents can't quickly locate similar patterns.
Enterprise codebases with 100K+ files can't fit in any context window.
Cloud-based indexing sends your code to third-party servers. Privacy and IP concerns.
Local workspace indexing with GPU-accelerated embeddings for fast semantic search.
Code files, documentation, configs. Any text-based content.
Lightweight embedding model with optional GPU acceleration. Converts code into vectors.
Vector database for embeddings. SQLite for metadata and queue management.
Fast similarity search. Find relevant code by meaning, not just keywords.
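The pipeline above (chunk → embed → store → search by similarity) can be sketched end to end. The `embed` function here is a deliberately crude stand-in for a real embedding model, hashing character trigrams into a unit vector so the flow runs anywhere; the snippets and query are illustrative, not project code.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a real embedding model: hashes character trigrams
    into a fixed-size unit vector so the example runs without a GPU."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        gram = text[i:i + 3]
        idx = int(hashlib.md5(gram.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Index" a few snippets: embed each one and stack the vectors.
snippets = [
    "def read_config(path): return json.load(open(path))",
    "class UserRepository: ...",
    "def load_settings(file): return yaml.safe_load(open(file))",
]
index = np.stack([embed(s) for s in snippets])

# Query by meaning: cosine similarity against every stored vector.
query = embed("function that loads configuration from a file")
scores = index @ query  # vectors are unit-length, so dot product = cosine
best = snippets[int(np.argmax(scores))]
```

In the real pipeline the hash trick would be replaced by the lightweight embedding model, and the `index` array by the vector database, but the search step is the same dot product over normalized vectors.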
Everything you need for codebase memory and semantic search.
Automatically index your entire codebase. Supports any text-based files.
Lightweight embedding models with optional GPU support. Falls back to CPU automatically if no GPU is present.
Find code by meaning, not just keywords. Vector similarity search.
Your code never leaves your machine. No cloud dependencies.
Built as an OpenClaw skill. Seamlessly integrates with your AI agents.
MIT licensed. Fork it, customize it, contribute back.
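The GPU-with-CPU-fallback behavior described above can be as simple as probing for an accelerator before loading the embedding model. This sketch assumes a PyTorch backend, which may or may not match the project's actual stack; the model name in the comment is a hypothetical example.

```python
def pick_device() -> str:
    """Prefer a CUDA GPU when PyTorch can see one, otherwise use the CPU.
    Also degrades gracefully when torch itself isn't installed."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

device = pick_device()
# The embedding model would then be loaded onto `device`, e.g.:
# model = SentenceTransformer("all-MiniLM-L6-v2", device=device)  # hypothetical
```

Because the probe happens once at startup, the rest of the indexing code never needs to know whether it is running on a GPU or a CPU.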
Built with modern, efficient technologies for fast indexing and search.
Index your codebase locally. Enable fast semantic search. Works on any machine — GPU optional for faster indexing.
✅ Works with any text-based files
✅ GPU optional — falls back to CPU automatically
✅ 100% local — no cloud dependencies
✅ Open source (MIT license)
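To illustrate the storage split described earlier (vectors in a vector database, metadata and queue management in SQLite), here is a minimal sketch of a SQLite-backed indexing queue. The schema and column names are illustrative, not the project's actual ones.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real index would use a file on disk
conn.execute("""
    CREATE TABLE index_queue (
        path   TEXT PRIMARY KEY,
        status TEXT NOT NULL DEFAULT 'pending'  -- 'pending' or 'done'
    )
""")

# Enqueue files discovered during a workspace scan.
files = ["src/main.py", "README.md", "config/settings.yaml"]
conn.executemany(
    "INSERT OR IGNORE INTO index_queue (path) VALUES (?)",
    [(f,) for f in files],
)

# Worker loop: pull pending files one at a time and mark them done.
while True:
    row = conn.execute(
        "SELECT path FROM index_queue WHERE status = 'pending' LIMIT 1"
    ).fetchone()
    if row is None:
        break
    # ... embed row[0] and write the vector to the vector store here ...
    conn.execute("UPDATE index_queue SET status = 'done' WHERE path = ?", row)

remaining = conn.execute(
    "SELECT COUNT(*) FROM index_queue WHERE status = 'pending'"
).fetchone()[0]
```

Keeping the queue in SQLite means an interrupted indexing run can resume where it left off: anything still marked `pending` is picked up on the next pass.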