Crush is an agentic AI coding assistant designed for developers who want to integrate large language models directly into their terminal workflow. Built by Charm Bracelet, it bridges the gap between LLMs and real development environments by providing context-aware code suggestions, tool execution, and seamless integration with language servers (via the Language Server Protocol, LSP) and Model Context Protocol (MCP) servers. Unlike generic chat-based AI tools, Crush operates natively in your terminal, respects your project structure via .gitignore and .crushignore files, and adapts to your existing tooling, making it ideal for developers who value workflow continuity over browser-based interfaces. It’s built on the robust Charm ecosystem and supports a wide array of LLM providers, making it a versatile choice for teams and individuals working across cloud platforms and private models.
What You Get
- Multi-Model Support - Crush allows you to switch between OpenAI, Anthropic, Groq, Vercel AI Gateway, Google Gemini, Azure OpenAI, Amazon Bedrock, Hugging Face, and more—all within the same session—using API keys or environment variables.
- Session-Based Context Management - Maintain multiple independent coding sessions per project, preserving conversation history and context without interference between tasks.
- LSP-Enhanced Context - Leverages language servers (such as gopls, nil, or typescript-language-server) over the Language Server Protocol to provide accurate code context for LLMs based on your actual project structure and type information.
- MCP Integration - Extend Crush’s capabilities via Model Context Protocol servers using stdio, HTTP, or SSE transports; supports environment variable expansion (e.g., "Authorization": "Bearer $(echo $GH_PAT)") for secure tool integration.
- Extensible Configuration - Configure Crush via JSON files (.crush.json, crush.json, or ~/.config/crush/crush.json) to define LSPs, MCPs, tool permissions, and global settings with schema validation; a configuration sketch follows this list.
- Cross-Platform Terminal Support - Runs natively on macOS, Linux, Windows (PowerShell/WSL), Android, FreeBSD, OpenBSD, and NetBSD with consistent behavior across environments.
- .crushignore Support - Exclude files from LLM context using a .gitignore-compatible syntax, ensuring sensitive or irrelevant files don’t pollute AI prompts.
- Tool Permission Control - Define allowed tools (like ls, grep, edit) in config, or use the --yolo flag to bypass permission prompts, enabling automation while maintaining security boundaries.
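A minimal configuration sketch tying these pieces together might look like the following. The top-level keys (lsp, mcp, permissions) reflect the features described above, but treat the exact field names as assumptions to verify against the published schema; the MCP server names, the example URL, and the filesystem MCP server are placeholders for illustration:

```json
{
  "lsp": {
    "go": { "command": "gopls" },
    "typescript": { "command": "typescript-language-server", "args": ["--stdio"] }
  },
  "mcp": {
    "github": {
      "type": "http",
      "url": "https://example.com/mcp/",
      "headers": { "Authorization": "Bearer $(echo $GH_PAT)" }
    },
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  },
  "permissions": {
    "allowed_tools": ["ls", "grep", "edit"]
  }
}
```

Placed in .crush.json at the project root, a config like this scopes to that repository; the same structure in ~/.config/crush/crush.json applies globally.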
Common Use Cases
- Building a multi-tenant SaaS dashboard with real-time analytics - Use Crush to generate and refactor Go code for backend services while leveraging LSPs for type safety and MCPs to fetch live analytics data from an internal API via SSE.
- Creating a mobile-first e-commerce platform with 10k+ SKUs - Integrate Crush with an MCP server to query product databases and generate frontend React components based on real-time inventory data, all within your terminal without leaving your workflow.
- Problem: AI tools don’t understand your codebase → Solution: Crush uses language servers to analyze your actual project files, then feeds accurate context into the LLM for precise code generation and refactoring; no more hallucinated function signatures or incorrect imports.
- DevOps teams managing microservices across multiple cloud providers - Configure Crush to use AWS Bedrock for Claude prompts, Azure OpenAI for internal teams, and Groq for low-latency prototyping, all from one CLI tool with centralized config management (see the provider sketch below).
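A hedged sketch of the provider side of that setup is shown below. It assumes a providers map with base_url and api_key fields for an OpenAI-compatible endpoint; hosted providers such as Bedrock, Azure OpenAI, and Groq are generally picked up from their standard credentials or API-key environment variables, so the keys, provider name, URL, and model entries here are illustrative assumptions to check against the current schema:

```json
{
  "providers": {
    "internal-gateway": {
      "type": "openai",
      "base_url": "https://llm.internal.example.com/v1",
      "api_key": "$INTERNAL_LLM_API_KEY",
      "models": [
        { "id": "prototyping-model", "name": "Internal prototyping model" }
      ]
    }
  }
}
```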
Under The Hood
Crush is a Go-based framework that empowers AI agents to interact with codebases and file systems through a structured set of tools and workflows. It emphasizes agentic behavior, enabling automated code manipulation and retrieval using various LLM providers.
Architecture
Crush follows a modular, layered architecture designed for extensibility and agent-driven workflows.
- The codebase is organized into clear modules such as agents, tools, and prompts, each with distinct responsibilities
- It implements a coordinator pattern to orchestrate agent behavior and tool execution
- The architecture supports pluggable providers for different LLMs, enabling flexibility in agent configuration (see the sketch after this list)
- Extensive use of templates and configuration files allows for dynamic prompt generation and tool behavior customization
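As a rough illustration of the coordinator pattern and pluggable providers described above, a minimal sketch in Go could look like this. The type names and the "tool:" reply convention are invented for the example and are not Crush’s actual internals:

```go
package coordinator

import (
	"context"
	"fmt"
	"strings"
)

// Provider abstracts an LLM backend; concrete implementations for
// OpenAI, Anthropic, Bedrock, and so on plug in behind this interface.
type Provider interface {
	Complete(ctx context.Context, prompt string) (string, error)
}

// Tool is a capability an agent may invoke, such as ls, grep, or edit.
type Tool interface {
	Name() string
	Run(ctx context.Context, args string) (string, error)
}

// Coordinator orchestrates one agent turn: it sends the prompt to the
// configured provider and dispatches any tool call found in the reply.
type Coordinator struct {
	provider Provider
	tools    map[string]Tool
}

func New(p Provider, tools ...Tool) *Coordinator {
	m := make(map[string]Tool, len(tools))
	for _, t := range tools {
		m[t.Name()] = t
	}
	return &Coordinator{provider: p, tools: m}
}

// Step runs one prompt through the provider. Replies of the form
// "tool:<name> <args>" are routed to the matching registered tool.
func (c *Coordinator) Step(ctx context.Context, prompt string) (string, error) {
	reply, err := c.provider.Complete(ctx, prompt)
	if err != nil {
		return "", fmt.Errorf("provider: %w", err)
	}
	if rest, ok := strings.CutPrefix(reply, "tool:"); ok {
		name, args, _ := strings.Cut(rest, " ")
		if t, found := c.tools[name]; found {
			return t.Run(ctx, args)
		}
		return "", fmt.Errorf("unknown tool %q", name)
	}
	return reply, nil
}
```

In a design like this, switching LLMs means passing a different Provider implementation to New, which is the flexibility the pluggable-provider point refers to.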
Tech Stack
Built with Go as the primary language, leveraging modern tooling and frameworks for robust development.
- The project uses Go modules and standard library components for core functionality
- It integrates with various LLM providers through configurable agent tools and prompt templates
- Testing is handled via Go’s built-in testing framework with extensive test data sets
- Taskfile and GoReleaser are used for build automation and release management
Code Quality
Crush maintains a high level of code quality with consistent patterns and strong test coverage.
- Comprehensive test suites cover various agent behaviors and tool interactions
- Error handling is centralized through dedicated error types and consistent logging practices
- Code follows idiomatic Go conventions with clear naming and well-defined interfaces
- Template-based prompt generation ensures consistency in agent communication (a sketch of the pattern follows this list)
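As a small, hypothetical illustration of template-driven prompt generation using Go’s standard text/template package (the template text and field names are invented for the example, not Crush’s actual prompts):

```go
package prompts

import (
	"bytes"
	"text/template"
)

// PromptData is the per-request context rendered into the prompt.
type PromptData struct {
	Language string
	FilePath string
	Symbols  []string // e.g. signatures reported by the language server
	Request  string
}

// Parsing the template once and reusing it keeps prompt wording
// identical across agents and sessions.
var editPrompt = template.Must(template.New("edit").Parse(
	`You are editing {{.FilePath}} ({{.Language}}).
Known symbols:
{{range .Symbols}}- {{.}}
{{end}}Task: {{.Request}}
`))

// Render fills the shared template with request-specific data.
func Render(d PromptData) (string, error) {
	var buf bytes.Buffer
	if err := editPrompt.Execute(&buf, d); err != nil {
		return "", err
	}
	return buf.String(), nil
}
```

Keeping the prompt in one parsed template, rather than assembling strings ad hoc, is one straightforward way to get the consistency described above.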
What Makes It Unique
Crush stands out with its focus on agentic workflows and extensible tooling for code interaction.
- It introduces a sophisticated agent coordination system that enables complex multi-step tasks
- The tool ecosystem is highly modular, allowing for easy addition of new tools or LLM integrations
- It supports multiple AI providers through a unified interface, making it adaptable to different environments
- The use of structured test data and template-driven prompts enables reproducible agent behavior across configurations