Codebase Memory for AI Agents

Local workspace indexing with semantic search. Give your AI agents long-term memory of your entire codebase — fast, private, and fully local. GPU acceleration supported if available.

100K+ Files Supported
Optional GPU Acceleration
100% Local & Private

The Problem: AI Agents Forget Everything

AI coding agents lack long-term memory of your codebase. Every session starts from scratch.

🧠 No Context Retention

AI agents forget your codebase structure after each session. No persistent memory across conversations.

🔍 Slow Code Search

Finding relevant code requires manual searching. AI agents can't quickly locate similar patterns.

📦 Large Codebases

Enterprise codebases with 100K+ files far exceed any model's context window.

🔒 Cloud Dependencies

Cloud-based indexing sends your code to third-party servers. Privacy and IP concerns.

How Mnemotech Works

Local workspace indexing with GPU-accelerated embeddings for fast semantic search.

📁 Your Workspace

Code files, documentation, configs. Any text-based content.

↓ Index

🧠 Embedding Model

Lightweight embedding model, GPU-accelerated when available. Converts code into vectors.

↓ Store

💾 LanceDB + SQLite

Vector database for embeddings. SQLite for metadata and queue management.

↓ Search

🔍 Semantic Search

Fast similarity search. Find relevant code by meaning, not just keywords.
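The four stages above can be sketched end-to-end. This is a toy illustration, not the skill's actual implementation: the learned embedding model and LanceDB are replaced here by a trivial bag-of-words vectorizer and an in-memory list so the flow is runnable anywhere.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for the embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1) Index: workspace files (path -> contents)
workspace = {
    "auth.py": "def login(user, password): check credentials",
    "search.py": "def find_similar(query): rank documents by score",
}

# 2) Embed + 3) Store: an in-memory stand-in for the vector database
store = [(path, embed(text)) for path, text in workspace.items()]

# 4) Search: rank stored vectors by similarity to the query
query = embed("rank documents for a search query")
results = sorted(store, key=lambda rec: cosine(query, rec[1]), reverse=True)
print(results[0][0])  # the most relevant file
```

The real pipeline follows the same shape; only the embedding model and the storage layer are swapped for production components.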

Key Features

Everything you need for codebase memory and semantic search.

🧠

Workspace Indexing

Automatically index your entire codebase. Supports any text-based file.

  • Recursive directory scanning
  • Smart exclusion (node_modules, .venv, etc.)
  • Incremental updates
  • Queue-based processing
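The scanning behavior in the bullets above can be sketched with the standard library. Names and the exclusion set are illustrative, not the skill's actual configuration; the `seen_mtimes` map is what makes re-runs incremental.

```python
import os

# Directories the scanner skips (illustrative; the real list is configurable).
EXCLUDED = {"node_modules", ".venv", ".git", "__pycache__"}

def scan_workspace(root, seen_mtimes=None):
    """Yield files that are new or changed since the last scan.

    `seen_mtimes` maps path -> mtime from the previous run, so
    unchanged files are skipped on subsequent scans.
    """
    seen_mtimes = seen_mtimes if seen_mtimes is not None else {}
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk never descends.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED]
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if seen_mtimes.get(path) != mtime:
                seen_mtimes[path] = mtime
                yield path  # hand this file to the indexing queue
```

Pruning `dirnames` in place is the idiomatic way to stop `os.walk` from descending into excluded trees, which matters at 100K+ file scale.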

⚡

GPU Acceleration (Optional)

Lightweight embedding models with optional GPU support. Falls back to CPU automatically when no GPU is present.

  • CUDA support for NVIDIA GPUs
  • Batch processing for speed
  • Memory-efficient embeddings
  • CPU fallback if no GPU
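The CPU-fallback behavior described above typically looks like this. A minimal sketch, assuming PyTorch as the optional GPU backend; the function name is illustrative, not the skill's API.

```python
def pick_device():
    """Return "cuda" when an NVIDIA GPU is usable, otherwise "cpu".

    torch is treated as an optional dependency: if it is missing,
    or no CUDA device is visible, embedding runs on the CPU.
    """
    try:
        import torch  # optional dependency
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

device = pick_device()
# The chosen device would then be passed to the embedding model,
# which encodes files in batches on the GPU or CPU.
```

Because the check degrades gracefully, the same code path runs unchanged on a laptop without a GPU and on a CUDA workstation.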
🔍

Semantic Search

Find code by meaning, not just keywords. Vector similarity search.

  • LanceDB vector database
  • Similarity scoring
  • Top-K results
  • Metadata filtering
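Top-K similarity with metadata filtering can be sketched in pure Python. This stands in for what LanceDB does natively; vector dimensions and the `where` predicate are illustrative.

```python
import math

def top_k(query_vec, records, k=3, where=None):
    """Rank stored vectors by cosine similarity and return the top k.

    `records` is a list of (vector, metadata) pairs; `where` is an
    optional metadata predicate, standing in for database-side filtering.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    candidates = [r for r in records if where is None or where(r[1])]
    scored = [(cosine(query_vec, vec), meta) for vec, meta in candidates]
    return sorted(scored, key=lambda s: s[0], reverse=True)[:k]

records = [
    ([1.0, 0.0], {"path": "auth.py", "lang": "python"}),
    ([0.9, 0.1], {"path": "login.ts", "lang": "typescript"}),
    ([0.0, 1.0], {"path": "README.md", "lang": "markdown"}),
]
hits = top_k([1.0, 0.0], records, k=2, where=lambda m: m["lang"] == "python")
```

Each hit carries both a similarity score and its metadata, which is what lets an agent jump straight from a query to the relevant file.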
🔒

100% Local & Private

Your code never leaves your machine. No cloud dependencies.

  • Local SQLite database
  • Local LanceDB storage
  • No external API calls
  • Full data sovereignty
🛠️

OpenClaw Integration

Built as an OpenClaw skill. Seamlessly integrates with your AI agents.

  • Skill-based architecture
  • CLI and API access
  • Queue management
  • Debug logging
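The queue management mentioned above pairs naturally with the SQLite layer from the storage step. A minimal sketch of a file-indexing queue; the table and column names are illustrative, not the skill's actual schema.

```python
import sqlite3

# Illustrative schema: one row per file awaiting (re)indexing.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE index_queue (path TEXT PRIMARY KEY, status TEXT DEFAULT 'pending')"
)

def enqueue(path):
    """Add a file to the queue; re-adding an already-queued path is a no-op."""
    conn.execute("INSERT OR IGNORE INTO index_queue (path) VALUES (?)", (path,))

def next_pending():
    """Pop the oldest pending file, marking it done, or return None."""
    row = conn.execute(
        "SELECT path FROM index_queue WHERE status = 'pending' ORDER BY rowid LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE index_queue SET status = 'done' WHERE path = ?", row)
    return row[0]

enqueue("src/main.py")
enqueue("src/utils.py")
```

Keeping the queue in SQLite means indexing progress survives restarts: a crashed or interrupted run resumes from the pending rows.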
🔌

Open Source

MIT licensed. Fork it, customize it, contribute back.

  • GitHub: github.com/aldow3n-a11y/mnemotech-skill
  • Active development
  • Community contributions welcome
  • Regular updates

Tech Stack

Built with modern, efficient technologies for fast indexing and search.

🐍
Python 3.8+
Core implementation with standard library
💾
SQLite
Metadata storage and queue management
🚀
LanceDB
Vector database for embeddings
⚡
GPU (CUDA)
NVIDIA GPU acceleration for embeddings
🧠
Embedding Models
Lightweight models for code embeddings
🦞
OpenClaw
AI agent gateway integration

Give Your AI Agents Long-Term Memory

Index your codebase locally. Enable fast semantic search. Works on any machine — GPU optional for faster indexing.

📦 View on GitHub

✅ Works with any text-based files
✅ GPU optional — falls back to CPU automatically
✅ 100% local — no cloud dependencies
✅ Open source (MIT license)