Jitsu is a developer-first, open-source alternative to Segment designed for modern data teams who need full control over their event data pipelines. It enables teams to collect behavioral data from websites, mobile apps, and APIs and deliver it in real time to their preferred data warehouse without relying on proprietary SaaS tools. Built for scalability and transparency, Jitsu supports self-hosting on Docker, Kubernetes, or cloud infrastructure, and integrates with PostgreSQL, MySQL, Redshift, Snowflake, BigQuery, and ClickHouse.
The platform is powered by Bulker, an open-source ingestion engine, and features a JavaScript-based runtime called Jitsu Functions for transforming events before storage. With native Segment API compatibility, automatic user identity stitching, and support for custom domains, Jitsu eliminates vendor lock-in while maintaining enterprise-grade reliability and performance.
What You Get
- Real-Time Event Streaming - Streams user behavioral events from web, mobile, and API sources directly to data warehouses in milliseconds, not hours, enabling near-instant analytics.
- Jitsu Functions - JavaScript-based event transformation engine with access to npm packages, key-value storage, and HTTP clients to filter, enrich, or route events before storage.
- Segment API Compatibility - Fully compatible with Segment’s analytics.js and HTTP API, allowing seamless migration from Segment without code changes.
- Automatic User Identity Stitching - Dynamically constructs real-time user identity graphs by merging anonymous and identified events without complex SQL joins.
- ClickHouse Included - Comes with a free, pre-configured ClickHouse instance for fast, cost-effective data storage and querying, with no additional setup required.
- Custom Domains - Deploy Jitsu on your own subdomain (e.g., data.yourcompany.com) to bypass ad-blockers and ensure reliable event collection.
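To make the Jitsu Functions idea above concrete, here is a minimal sketch of an event-transformation function. The `(event) => event | null` shape and the field names (`email`, `properties`) are illustrative assumptions based on the description, not the exact Jitsu Functions API.

```typescript
// Hypothetical Jitsu Function: filter out internal traffic and enrich
// the remaining events before they reach the warehouse.
// The event shape below is an assumption for illustration.

type AnalyticsEvent = {
  type: string;
  email?: string;
  properties?: Record<string, unknown>;
};

export default function enrich(event: AnalyticsEvent): AnalyticsEvent | null {
  // Filter: drop events generated by internal users.
  if (event.email?.endsWith("@yourcompany.com")) {
    return null; // returning null skips the event entirely
  }
  // Enrich: stamp every surviving event with a processing timestamp.
  return {
    ...event,
    properties: {
      ...event.properties,
      processed_at: new Date().toISOString(),
    },
  };
}
```

Because functions like this run before storage, filtering and enrichment happen once, centrally, instead of being re-implemented in every downstream query.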
Common Use Cases
- Replacing Segment in compliance-sensitive environments - A fintech company uses Jitsu to self-host event collection and avoid third-party data sharing, ensuring GDPR and CCPA compliance.
- Building a real-time analytics pipeline for SaaS products - A B2B SaaS startup captures user interactions across web and mobile apps and loads them into Snowflake for product usage dashboards.
- Enabling engineering teams to control data pipelines - A mid-sized tech team replaces Segment with Jitsu to eliminate monthly SaaS costs and add custom event transformations using JavaScript functions.
- Collecting data behind ad-blockers - An e-commerce brand deploys Jitsu on a custom subdomain to capture accurate user behavior data even when users block third-party trackers.
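The custom-domain use case above amounts to posting events to a first-party endpoint instead of a third-party tracker. The sketch below illustrates the idea; the URL path (`/api/s/track`) and the `X-Write-Key` header name are assumptions for illustration, not the documented Jitsu API.

```typescript
// Build a Segment-style track payload. Kept separate from the network
// call so the payload shape can be tested in isolation.
function buildTrackPayload(event: string, properties: Record<string, unknown>) {
  return {
    type: "track",
    event,
    properties,
    timestamp: new Date().toISOString(),
  };
}

// Send the event to a self-hosted, first-party collection endpoint.
// Host, path, and header name are illustrative assumptions.
async function track(
  event: string,
  properties: Record<string, unknown>
): Promise<boolean> {
  const res = await fetch("https://data.yourcompany.com/api/s/track", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Write-Key": "YOUR_WRITE_KEY",
    },
    body: JSON.stringify(buildTrackPayload(event, properties)),
  });
  return res.ok;
}
```

Since `data.yourcompany.com` is a first-party subdomain, requests to it are far less likely to be blocked than calls to a well-known third-party tracker host.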
Under The Hood
Architecture
- Monorepo organized into domain-specific workspaces (cli/, libs/, services/, types/, webapps/) with clear separation of concerns and modular service composition
- Prisma ORM serves as the unified data access layer, enforcing consistent type-safe data modeling across services via generated types
- Dependency injection is achieved through workspace references, enabling reusable components and testable service boundaries
- Turbo orchestrates parallelized build, test, and typecheck workflows with dependency-aware task execution
- Frontend and API layers are decoupled, sharing data contracts through a central types directory to ensure consistency without tight coupling
- Custom codegen and build pipelines are deeply integrated, reflecting an automation-first approach
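The shared-data-contract pattern described above can be sketched as a type exported from a central types workspace and consumed by both layers. The names (`DestinationConfig`, `validateDestination`) are illustrative, not Jitsu's actual contracts.

```typescript
// Hypothetical shared contract, as a central `types` workspace might
// define it. Both the API layer and the frontend import this one type,
// so a field rename fails typechecking in both layers at once.

export type DestinationConfig = {
  id: string;
  destinationType: "postgres" | "clickhouse" | "bigquery";
  credentials: Record<string, string>;
};

// A validation helper that either layer can reuse.
export function validateDestination(cfg: DestinationConfig): string[] {
  const errors: string[] = [];
  if (!cfg.id) errors.push("id is required");
  if (Object.keys(cfg.credentials).length === 0) {
    errors.push("credentials are empty");
  }
  return errors;
}
```

This is the decoupling the architecture aims at: the frontend and API never import each other, only the contract.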
Tech Stack
- Node.js monorepo managed with pnpm and Turbo for optimized, cached builds across multiple packages
- Next.js frontend with Tailwind CSS and React 18, paired with Express-based backend services
- Go-based components handle performance-critical data ingestion and processing
- Prisma with PostgreSQL for relational data, complemented by Kafka for event streaming and Firebase for authentication
- Vitest and Playwright provide comprehensive testing coverage, supported by ESLint and Prettier for code quality
- CI/CD is automated via semantic-release and monorel, with Vercel as the primary deployment target
Code Quality
- Extensive test coverage spanning unit, integration, and edge cases with reusable test utilities
- Clear modular structure with domain-specific components and isolated test suites for destinations, drivers, and data parsing
- Robust error handling through structured logging and context-aware propagation, though custom error classes are underutilized
- Consistent naming conventions across TypeScript and Go, with descriptive function and module names
- Strong type safety in TypeScript via interfaces and guards, while Go relies on structs and interfaces without advanced type system features
- Comprehensive linting and test automation are evident in structure and CI patterns, though configuration files are not always explicit
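The type guards mentioned above typically look like the following sketch: a predicate that narrows an `unknown` payload to a concrete event type at the ingestion boundary. The `IdentifyEvent` shape is an illustrative assumption, not Jitsu's actual event type.

```typescript
// Hypothetical event shape for an identify call.
interface IdentifyEvent {
  type: "identify";
  userId: string;
  traits?: Record<string, unknown>;
}

// Type guard: narrows `unknown` to IdentifyEvent, so downstream code
// gets full type safety on payloads parsed from the wire.
function isIdentifyEvent(value: unknown): value is IdentifyEvent {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return v.type === "identify" && typeof v.userId === "string";
}
```

Guards like this keep unvalidated input quarantined at the edge of the pipeline instead of letting `any` leak into business logic.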
What Makes It Unique
- Dynamic code compilation system that bundles Node.js modules and ESM/CJS hybrids to run user-defined functions securely in Deno
- Unified schema-driven form engine with real-time validation and automatic default value injection, eliminating manual form boilerplate
- Seamless RPC integration between source connectors and a dedicated sync controller, with built-in caching and async state tracking
- Node.js built-in module polyfilling via esbuild to enable NPM package reuse in Deno environments
- End-to-end type safety from API routes (Zod) to frontend forms (AJV) through shared schema definitions, creating a cohesive validation pipeline