Second Me is an open-source AI identity system that lets users create a personalized, locally trained AI self (a digital twin) by feeding it personal memories, voice notes, and reflections. Unlike centralized AI assistants, it prioritizes user ownership and privacy: your data never leaves your device unless you choose to share it. Built for tech enthusiasts, researchers, and privacy-conscious users, it bridges personal AI augmentation with social discovery.
Technically, Second Me leverages Qwen2.5 base models, llama.cpp for efficient local inference, and GraphRAG for memory synthesis. It supports Docker and integrated deployments on Windows, macOS, and Linux, with MLX acceleration for Apple Silicon. The system uses Hierarchical Memory Modeling (HMM) and the Me-Alignment Algorithm to preserve identity coherence, and connects to a decentralized network for permissioned AI-to-AI interactions.
What You Get
- AI-Native Memory with HMM - Uses Hierarchical Memory Modeling and Me-Alignment Algorithm to capture and structure your personal memories, reflections, and voice notes into a coherent AI identity.
- Local Training & Hosting - All AI training and inference run locally on your machine using llama.cpp and Qwen2.5 models, ensuring full data control and privacy.
- Voice Note & Audio Sync - Import and process voice recordings of meetings or spontaneous thoughts to train your AI self's voice and conversational style.
- AI Social Network - Share your AI self with others on a permissioned network to enable AI-to-AI collaboration, roleplay, and identity-based discovery.
- Roleplay Mode - Your AI self can switch personas to represent you in simulated scenarios like AMAs, speed dating, or brainstorming sessions.
- End-to-End Encryption & Confidential Computing - All data is encrypted in transit and at rest, with homomorphic encryption and Azure Confidential Computing integration to prevent unauthorized access.
- Memory Version Control - Future updates will include versioning of memory states to track how your AI self evolves over time.
- Cross-Platform Support - Deploy via Docker or integrated setup on Windows, macOS (including MLX acceleration for M-series chips), and Linux.
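The hierarchical memory idea above can be sketched as a simple data model: raw entries are grouped into topics, which together form an identity profile. This is an illustrative sketch only; the class names (`MemoryEntry`, `Topic`, `IdentityProfile`) and the grouping logic are assumptions, not Second Me's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of hierarchical memory: raw entries are grouped
# into topics, which together form an identity profile. Names and
# structure are assumptions, not Second Me's actual schema.

@dataclass
class MemoryEntry:
    text: str   # a journal note, voice-note transcript, etc.
    topic: str  # coarse label used for grouping

@dataclass
class Topic:
    name: str
    entries: list[MemoryEntry] = field(default_factory=list)

@dataclass
class IdentityProfile:
    topics: dict[str, Topic] = field(default_factory=dict)

    def add(self, entry: MemoryEntry) -> None:
        # Route each raw memory into its topic bucket.
        self.topics.setdefault(entry.topic, Topic(entry.topic)).entries.append(entry)

profile = IdentityProfile()
profile.add(MemoryEntry("Met the editor about chapter 3", "work"))
profile.add(MemoryEntry("Prefer morning writing sessions", "habits"))
print(sorted(profile.topics))  # ['habits', 'work']
```

In the real system this structure would be populated from imported documents and audio transcripts rather than hand-built literals.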
Common Use Cases
- Personal AI Identity Creation - A writer uses Second Me to train an AI twin from their journal entries and voice memos, then shares it with editors to simulate how they’d respond to feedback.
- AI-Powered Social Discovery - A therapist uses their AI self to connect with other therapists on the Second Me network, exchanging insights through AI-mediated conversations without revealing personal identity.
- Privacy-First Digital Legacy - A retiree creates a Second Me from decades of audio diaries and photos, preserving their voice and personality for family to interact with after they’re gone.
- AI Roleplay for Professional Practice - A lawyer trains their AI self on case notes and legal reasoning to roleplay client consultations, helping them refine communication before real meetings.
Under The Hood
Architecture
- Clear separation of frontend and backend via Dockerized microservices with well-defined API boundaries and port mappings
- Backend employs a layered architecture with Flask-RESTful endpoints, Pydantic validation, and async SQLAlchemy ORM to decouple HTTP handling from business logic
- Modular component structure supports platform-specific builds through dedicated Dockerfiles and Makefile conditionals for CPU/GPU optimization
- Persistent volumes maintain state for AI model artifacts, ensuring reproducible deployments across environments
- WebSocket support and integration with LangChain/ChromaDB form a cohesive pipeline for conversational AI and retrieval-augmented generation
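The retrieval-augmented generation step above can be illustrated with a toy retriever. This is a simplified stand-in for the LangChain/ChromaDB stack: it ranks stored memory chunks by bag-of-words cosine similarity instead of learned embeddings, and only shows the shape of the pipeline, not the project's implementation.

```python
import math
from collections import Counter

# Toy stand-in for the LangChain/ChromaDB retrieval step: rank stored
# memory chunks against the query, then build a prompt from the best
# match. The real system uses learned embeddings; this sketch uses
# bag-of-words cosine similarity purely for illustration.

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    return max(chunks, key=lambda c: cosine(vectorize(query), vectorize(c)))

chunks = [
    "I switched my training runs to the MLX backend on my MacBook",
    "Grocery list: oat milk, coffee beans, rye bread",
]
context = retrieve("which backend do I train with", chunks)
prompt = f"Answer using this memory:\n{context}\n\nQuestion: which backend do I train with?"
print(context)
```

The retrieved `context` would then be passed to the local model as part of the prompt, which is the essence of the retrieval-augmented pipeline.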
Tech Stack
- Python 3.12+ backend powered by Flask, Flask-Sock, and Flask-Pydantic with async SQLAlchemy and Pydantic for robust data handling
- Frontend built with Next.js and Vite, leveraging Ant Design and Zustand for state management and seamless API communication
- Dockerized deployment with GPU-optimized images, CUDA support, and platform-specific configurations for Apple Silicon and Linux
- Comprehensive ML stack including Hugging Face transformers, sentence-transformers, ChromaDB, and LangChain for end-to-end AI orchestration
- Build and dependency management handled via Poetry and dynamic docker-compose selection, with environment-aware configuration for local and containerized execution
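Environment-aware configuration of the kind described above often comes down to reading a few variables at startup. A minimal sketch, assuming a `DEPLOY_MODE` variable, a `BACKEND_PORT` default of 8002, and a SQLite URL that are all illustrative, not the project's real settings:

```python
import os

# Illustrative environment-aware config: the variable names and
# defaults here are assumptions, not Second Me's actual settings.

def load_config(env: dict) -> dict:
    mode = env.get("DEPLOY_MODE", "local")  # "local" or "docker"
    return {
        "mode": mode,
        # Inside a container, bind all interfaces so the port mapping
        # works; locally, stay on loopback.
        "host": "0.0.0.0" if mode == "docker" else "127.0.0.1",
        "backend_port": int(env.get("BACKEND_PORT", "8002")),
        "db_url": env.get("DATABASE_URL", "sqlite+aiosqlite:///local.db"),
    }

cfg = load_config(os.environ)
print(cfg["mode"], cfg["host"])
```

Keeping the selection logic in one function makes the same codebase behave correctly under both `docker compose up` and a bare local run.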
Code Quality
- Extensive test coverage with unit tests validating complex parsing logic and quantization behavior across edge cases
- Strong type safety and low-level system integration via ctypes bindings to C libraries for precise algorithm verification
- Modular test structure with dedicated files for metadata and quantization components, though some lack full documentation or assertions
- Robust error handling with custom exceptions and defensive patterns to maintain stability during ambiguous model identification
- Consistent naming and logging practices, but limited use of static analysis or linting tools despite system complexity
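The ctypes-based verification mentioned above follows a standard pattern: load the shared library, declare argument and return types, then assert against a known-good reference. A generic sketch using libm as a stand-in (the project's own bindings target its quantization code, which is not reproduced here):

```python
import ctypes
import ctypes.util

# Generic ctypes pattern: bind a C function, declare its signature,
# and check it against a known value. The project binds its own
# quantization library the same way; libm is just a stand-in here.
path = ctypes.util.find_library("m")
# Fall back to the process's own symbols if libm isn't found separately
# (POSIX systems; this fallback does not apply on Windows).
libm = ctypes.CDLL(path) if path else ctypes.CDLL(None)

libm.sqrt.restype = ctypes.c_double      # without this, ctypes assumes int
libm.sqrt.argtypes = [ctypes.c_double]

assert libm.sqrt(9.0) == 3.0
print(libm.sqrt(2.0))
```

Declaring `restype` and `argtypes` is the step that makes such tests precise: it prevents silent int/double misinterpretation at the C boundary.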
What Makes It Unique
- Introduces a dynamic digital identity system where personal behavior and memory are trained into the AI, not just statically stored
- Visual training progression in the StatusBar transforms abstract AI training into an intuitive, user-visible lifecycle
- Real-time document processing with built-in embedding-status tracking enables seamless memory enrichment without external dependencies
- Live interaction during training via BridgeMode and PlaygroundChat creates a feedback loop that adapts personality in real time
- Roleplay and custom tooling are first-class features, not add-ons, enabling deeply personalized AI interactions rooted in user behavior
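The visible training lifecycle can be modeled as a simple ordered state machine. The stage names below are illustrative guesses at what a StatusBar-style component might track, not the actual implementation:

```python
from enum import Enum

# Illustrative training lifecycle for a StatusBar-style display.
# Stage names and ordering are assumptions, not the real component's states.
class TrainingStage(Enum):
    SEEDING = 1    # memories imported
    EMBEDDING = 2  # documents vectorized
    TRAINING = 3   # model fine-tuning in progress
    ALIGNING = 4   # alignment pass
    READY = 5      # AI self available for chat

def advance(stage: TrainingStage) -> TrainingStage:
    members = list(TrainingStage)
    i = members.index(stage)
    return members[min(i + 1, len(members) - 1)]  # READY is terminal

s = TrainingStage.SEEDING
while s is not TrainingStage.READY:
    s = advance(s)
print(s.name)  # READY
```

A UI component can then render progress by mapping the current stage onto a fixed, ordered set of steps rather than inferring it from raw logs.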