Trench is open-source analytics infrastructure built to handle high-volume event tracking at scale, using Apache Kafka for durable event ingestion and ClickHouse for real-time querying. It was created by Frigade to power their own product analytics and is aimed at teams that want full control over their event data without relying on third-party SaaS platforms. Trench is GDPR and PECR compliant, offering users full data access, rectification, and deletion capabilities. With a single Docker image deployment and Segment API compatibility, it serves as a drop-in alternative to tools like PostHog, Matomo, or Plausible, making it well suited to engineering teams building custom analytics products, LLM RAG systems, or observability platforms.
Unlike many analytics tools that abstract away infrastructure complexity, Trench gives developers direct access to the underlying data pipeline. This makes it especially valuable for organizations with strict data residency requirements, those needing to integrate analytics deeply into custom workflows, or teams building event-driven applications that require low-latency query performance. The system supports both self-hosted deployment for full control and a managed cloud option for zero-ops scaling.
What You Get
- Segment API Compatibility - Trench implements the Segment Track, Group, and Identify APIs out of the box, allowing seamless migration from tools like Segment, PostHog, or Amplitude without changing client-side code.
- Single Docker Deployment - Everything needed—Kafka, ClickHouse, and the Node.js event processor—is bundled in one production-ready Docker Compose setup with minimal configuration.
- Real-Time Event Querying - Events are immediately available for querying via HTTP API or raw SQL through the /events and /queries endpoints, enabling dashboards and analytics with sub-second latency.
- Kafka & ClickHouse Integration - Events are ingested via Kafka and stored in ClickHouse, enabling high-throughput processing (thousands of events/sec per node) with efficient columnar storage and aggregation.
- GDPR/PECR Compliance - Built-in data control features allow users to access, rectify, or delete their event data; no cookies are used for tracking.
- Webhook Integrations - Events can be forwarded to external systems via configurable webhooks for downstream processing or alerting.
- SASL and SSL Kafka Authentication - Supports secure Kafka connections with SASL_PLAINTEXT, SASL_SSL, and mutual TLS via environment variables and pre-configured ClickHouse XML configs.
- Raw SQL Query Endpoint - Direct access to ClickHouse via POST /queries allows complex analytics, aggregations, and custom metrics without building a UI layer.
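As a sketch of the Segment-compatible surface, the snippet below builds a Track-style payload and posts it to a self-hosted instance. The base URL, API key, `/events` path, bearer-token header, and `{ events: [...] }` body shape are assumptions for illustration; check the Trench docs for your deployment's exact endpoint and auth scheme.

```typescript
// Hypothetical base URL and API key for a self-hosted Trench instance.
const TRENCH_URL = "http://localhost:4000";
const TRENCH_API_KEY = "my-public-api-key";

// Build a Segment-style track event. The top-level fields (type, event,
// userId, properties, timestamp) follow the Segment Track spec.
function buildTrackEvent(
  userId: string,
  event: string,
  properties: Record<string, unknown>
) {
  return {
    type: "track",
    userId,
    event,
    properties,
    timestamp: new Date().toISOString(),
  };
}

async function sendEvents(events: object[]) {
  // POST to the events endpoint; path and auth scheme are assumptions.
  const res = await fetch(`${TRENCH_URL}/events`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${TRENCH_API_KEY}`,
    },
    body: JSON.stringify({ events }),
  });
  return res.json();
}

const event = buildTrackEvent("user-123", "ConnectedAccount", {
  totalAccounts: 4,
});
```

Because the payload shape matches Segment's Track call, existing client-side instrumentation can usually be pointed at Trench without code changes.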
Common Use Cases
- Building a self-hosted product analytics dashboard - A SaaS company replaces Mixpanel with Trench to avoid vendor lock-in, using the Segment-compatible API to track user events and visualizing them in Grafana via ClickHouse queries.
- Creating a real-time user behavior monitoring system for LLM RAG applications - An AI startup uses Trench to log user prompts and feedback events, then analyzes patterns in ClickHouse to improve retrieval quality without exposing data to third parties.
- Running real-time event analytics without cloud dependencies - A healthcare app requires all user data to remain on-premises; Trench provides a compliant, scalable event pipeline with full data ownership and no external dependencies.
- Cross-service observability for DevOps teams on hybrid clouds - Engineers use Trench to collect application logs and custom metrics from multiple services, storing them in ClickHouse for cross-service user journey analysis with low-latency queries.
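The RAG-feedback use case above can be sketched against the raw SQL endpoint: build an aggregation over feedback events and post it to `/queries`. The table name `events`, the `properties` JSON column, the event names, and the request body shape are all assumptions here; adapt them to your actual Trench schema.

```typescript
const TRENCH_URL = "http://localhost:4000";

// Build a ClickHouse aggregation: negative-feedback counts per model
// over the last `days` days. Column and event names are illustrative.
function buildFeedbackQuery(days: number): string {
  return `
    SELECT
      JSONExtractString(properties, 'model') AS model,
      countIf(event = 'prompt_feedback_negative') AS negatives,
      count() AS total
    FROM events
    WHERE timestamp > now() - INTERVAL ${days} DAY
    GROUP BY model
    ORDER BY negatives DESC
  `.trim();
}

async function runQuery(sql: string) {
  // POST the raw SQL to the queries endpoint; body shape is an assumption.
  const res = await fetch(`${TRENCH_URL}/queries`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ queries: [sql] }),
  });
  return res.json();
}
```

Since the query runs directly in ClickHouse, the same pattern covers funnels, retention, and any other aggregation expressible in SQL, with no intermediate dashboard layer required.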
Under The Hood
Frigade Trench is a data infrastructure platform designed to handle real-time event streaming, analytics, and workspace management through a unified API. It integrates modern backend technologies to support scalable event-driven architectures, with a strong emphasis on modularity and extensibility.
Architecture
The system adopts a monolithic NestJS architecture with well-defined modules centered around core domains such as events, queries, and webhooks. This structure promotes separation of concerns and reusability through NestJS’s dependency injection and module system.
- Modular design with clear boundaries between event handling, query processing, and webhook management
- Consistent use of DAOs, services, and controllers to enforce layered architecture principles
- Strong adherence to SOLID principles and service-oriented design patterns
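A minimal, framework-free sketch of that DAO → service → controller layering is below. The real code uses NestJS decorators and dependency injection; the class and method names here are illustrative, not taken from the repository.

```typescript
interface TrenchEvent {
  uuid: string;
  event: string;
  userId: string;
}

// DAO layer: owns storage access. An in-memory array stands in for
// ClickHouse so the sketch is self-contained.
class EventsDao {
  private store: TrenchEvent[] = [];
  insert(e: TrenchEvent): void {
    this.store.push(e);
  }
  findByUser(userId: string): TrenchEvent[] {
    return this.store.filter((e) => e.userId === userId);
  }
}

// Service layer: business rules only, no storage details.
class EventsService {
  constructor(private readonly dao: EventsDao) {}
  track(userId: string, event: string): TrenchEvent {
    const record: TrenchEvent = {
      uuid: Math.random().toString(36).slice(2), // placeholder id, not a real UUID
      event,
      userId,
    };
    this.dao.insert(record);
    return record;
  }
  listForUser(userId: string): TrenchEvent[] {
    return this.dao.findByUser(userId);
  }
}

// Controller layer: translates HTTP-shaped input into service calls.
class EventsController {
  constructor(private readonly service: EventsService) {}
  postEvent(body: { userId: string; event: string }): TrenchEvent {
    return this.service.track(body.userId, body.event);
  }
}
```

In the actual codebase, NestJS's module system wires these layers together via constructor injection, which is what makes each layer swappable and testable in isolation.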
Tech Stack
The platform is built using TypeScript and NestJS, leveraging modern backend tools for scalable and maintainable development.
- Built with TypeScript and NestJS, supporting both the Express and Fastify HTTP adapters
- Integrates KafkaJS for real-time event streaming and ClickHouse client for high-performance analytics
- Employs Turbo for monorepo management, Docker Compose for deployment orchestration, and Jest for testing
- Extensive use of NestJS modules including Swagger and Cache Manager for enhanced functionality
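Conceptually, the KafkaJS-to-ClickHouse path benefits from batching, since ClickHouse favors large, infrequent inserts over many small ones. The sketch below stubs the sink behind an interface rather than using the real kafkajs or ClickHouse client APIs, so it only illustrates the batching idea, not Trench's actual implementation.

```typescript
// A sink that receives rows; in production this would wrap a
// ClickHouse client issuing bulk INSERTs.
interface RowSink {
  insert(rows: object[]): Promise<void>;
}

class EventBatcher {
  private buffer: object[] = [];
  constructor(
    private readonly sink: RowSink,
    private readonly maxBatch: number
  ) {}

  // Called once per consumed Kafka message; flushes when the batch fills.
  async add(row: object): Promise<void> {
    this.buffer.push(row);
    if (this.buffer.length >= this.maxBatch) {
      await this.flush();
    }
  }

  // Drain the buffer into one bulk insert (columnar-storage friendly).
  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const rows = this.buffer;
    this.buffer = [];
    await this.sink.insert(rows);
  }
}
```

A real pipeline would also flush on a timer and on consumer shutdown, so that low-traffic periods do not leave events stranded in the buffer.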
Code Quality
The codebase demonstrates a solid testing strategy and consistent architectural practices, although some technical debt remains.
- Comprehensive test suite covering unit and end-to-end scenarios with good coverage across modules
- Standardized error handling through try/catch blocks and custom exception types
- Maintainable naming conventions and architectural patterns with some duplication present
- Linting and CI/CD pipelines in place to enforce code quality standards
What Makes It Unique
Frigade Trench distinguishes itself through its integration of real-time data systems with robust analytics capabilities and a developer-friendly, extensible model.
- Native support for Kafka and ClickHouse enables high-throughput event ingestion and analytical querying
- Modular architecture built on NestJS allows for clean separation of concerns across domains
- Workspace model with property-based configuration supports flexible multi-tenant use cases
- Developer-centric design with OpenAPI spec generation and comprehensive documentation enhances adoption