Meetily is a privacy-first AI meeting assistant designed for professionals and enterprises who need to capture, transcribe, and summarize meetings without sending data to the cloud. Built with a Tauri (Rust) desktop shell for cross-platform support, it leverages open-source models like Whisper and Parakeet for local speech-to-text transcription, and Ollama or other OpenAI-compatible endpoints for AI-powered summarization. Unlike cloud-based alternatives that risk data breaches and compliance violations, Meetily ensures complete data sovereignty by processing all audio, transcripts, and summaries entirely on the user's device. This makes it ideal for legal, healthcare, defense, and other compliance-sensitive industries where data control is non-negotiable.
The application supports macOS, Windows, and Linux with GPU acceleration via Metal (Apple Silicon), CUDA (NVIDIA), and Vulkan (AMD/Intel). It integrates microphone and system audio capture with intelligent ducking, supports custom AI endpoints for organizations with proprietary models, and provides a clean, responsive interface for reviewing and exporting meeting notes—all while maintaining zero external data transmission.
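The ducking behavior described above can be illustrated with a small gain function. This is a hypothetical sketch, not Meetily's actual DSP code: the idea is that when the microphone signal is hot, the system-audio gain is reduced so speech dominates the mix, and the mixed sample is clamped to avoid clipping.

```python
def duck_gain(mic_level: float, threshold: float = 0.2, ratio: float = 0.25) -> float:
    """Return the gain to apply to system audio for a given mic level (0.0-1.0).

    Illustrative only: when the mic level exceeds the threshold, system
    audio is attenuated to `ratio` of its original volume.
    """
    return ratio if mic_level > threshold else 1.0

def mix(mic: list[float], system: list[float]) -> list[float]:
    """Mix two sample streams, ducking system audio under active speech."""
    out = []
    for m, s in zip(mic, system):
        g = duck_gain(abs(m))
        # Clamp the mixed sample to [-1.0, 1.0] to prevent clipping.
        out.append(max(-1.0, min(1.0, m + g * s)))
    return out
```

Real implementations typically smooth the gain change over time (attack/release envelopes) rather than switching it per sample, but the threshold-and-attenuate principle is the same.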
What You Get
- Local Transcription - Uses Whisper and Parakeet models to transcribe audio in real time without sending data to the cloud; supports both CPU and GPU acceleration on macOS, Windows, and Linux.
- AI-Powered Summaries - Generates meeting summaries using Ollama (local LLMs), Claude, Groq, OpenRouter, or any OpenAI-compatible API endpoint—no cloud dependency required.
- Multi-Platform Support - Native desktop apps for macOS (.dmg), Windows (.exe), and Linux (build from source) with no virtualization or container dependencies.
- Advanced Audio Capture - Simultaneously records microphone and system audio with intelligent ducking to prevent clipping and ensure clean input for transcription.
- Custom OpenAI Endpoint Support - Configure any OpenAI-compatible API (e.g., local Llama.cpp, vLLM, or self-hosted models) for summarization without relying on third-party services.
- GPU Acceleration - Automatic hardware acceleration via Metal (macOS), CUDA (Windows/Linux NVIDIA), and Vulkan (AMD/Intel) for faster transcription without manual configuration.
- Offline-First Design - Entirely functional without internet connectivity; recordings, transcripts, and summaries are stored locally on the device.
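Because summarization goes through the standard OpenAI-compatible chat-completions interface, pointing the app at a local server is just a matter of configuration. The sketch below shows what such a request looks like; the base URL and model name are assumptions (Ollama's OpenAI-compatible endpoint is used as the example), and no HTTP call is actually made here.

```python
import json

# Hypothetical settings: use whatever base URL and model your local
# OpenAI-compatible server exposes (Ollama shown as an example).
BASE_URL = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible API
MODEL = "llama3.1"

def build_summary_request(transcript: str) -> tuple[str, dict]:
    """Build an OpenAI-style /chat/completions request for summarization."""
    url = f"{BASE_URL}/chat/completions"
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": "Summarize the meeting transcript into key points and action items."},
            {"role": "user", "content": transcript},
        ],
    }
    return url, payload

url, payload = build_summary_request("Alice: let's ship v2 on Friday.")
body = json.dumps(payload)  # what an HTTP client would POST to the endpoint
```

The same payload shape works against Llama.cpp's server, vLLM, or any hosted OpenAI-compatible API; only the base URL, model name, and (if required) API key change.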
Common Use Cases
- Building a compliant legal or healthcare meeting archive - Law firms and medical providers use Meetily to transcribe client consultations while keeping every byte on the local machine, supporting HIPAA and GDPR obligations.
- Enterprise teams needing secure internal meetings - Companies with strict data governance policies use Meetily to record board meetings, R&D sessions, or M&A calls with full audit control over data storage.
- Privacy-conscious professionals avoiding SaaS risks - Freelancers and consultants avoid cloud-based tools like Otter.ai or Fireflies due to data breach concerns; Meetily eliminates vendor lock-in and third-party exposure.
- DevOps teams managing local AI infrastructure - Teams with self-hosted Ollama or Llama.cpp instances use Meetily to route summaries through their existing AI pipelines without re-architecting workflows.
Under The Hood
This project is a full-stack application providing real-time transcription and meeting summarization, pairing AI-powered processing with a responsive user interface. It combines Rust- and Python-based backend services with a TypeScript frontend, and supports both web and desktop deployment, with desktop packaging handled by Tauri.
Architecture
This system adopts a modular monolithic architecture with distinct frontend and backend components.
- The backend is structured around a Python-based REST-like API, offering well-defined data models and database interactions for local and cloud workflows.
- The frontend is built using Next.js and Tauri, emphasizing reusable UI components and React-based state management.
- Communication between frontend and backend is facilitated through API endpoints and shared data models, enabling real-time transcript handling.
- The architecture supports extensibility via modular design elements like model managers and settings panels, accommodating GPU acceleration and flexible configurations.
Tech Stack
The project utilizes a modern tech stack spanning backend, frontend, and cross-platform development.
- Rust handles performance-critical native tasks alongside the Python API layer, while the frontend is built with TypeScript and Next.js.
- Tauri is used for native desktop packaging, complemented by a rich ecosystem of React libraries and AI/ML tools.
- Development tools include Tauri CLI, PNPM for dependency management, and shell scripts for build automation.
- Linting is handled via ESLint, with PostCSS for stylesheet processing; explicit testing infrastructure remains limited.
Code Quality
The codebase presents a mixed quality profile with some structured components and areas of inconsistency.
- Testing coverage is minimal, with only basic linting and configuration in place for code quality checks.
- Error handling is present but varies across different parts of the system and languages.
- Code consistency is moderate, with some adherence to conventions but evidence of technical debt in script organization.
- Documentation and structured component usage are not consistently applied across the codebase.
What Makes It Unique
The project introduces a hybrid architecture that merges high-performance backend services with a responsive frontend.
- It uniquely integrates real-time audio transcription with AI-powered summarization, offering an end-to-end solution for meeting management.
- The system supports multiple transcription models (Whisper, Parakeet) and allows flexible configuration for extensibility.
- The combination of Rust backend and Tauri-based desktop application provides a rare blend of performance and cross-platform accessibility.
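The flexible model configuration described above might look something like this in practice. The names here (`engine`, `model_size`, `use_gpu`) are assumptions for illustration, not the project's real settings API; the point is that validating the chosen engine up front keeps Whisper and Parakeet interchangeable behind one configuration surface.

```python
from dataclasses import dataclass

# Engines the sketch recognizes — mirrors the two models named in the text.
SUPPORTED_ENGINES = {"whisper", "parakeet"}

@dataclass
class TranscriptionConfig:
    """Hypothetical transcription settings object."""
    engine: str = "whisper"
    model_size: str = "base"   # e.g. tiny / base / small for Whisper
    use_gpu: bool = True       # Metal / CUDA / Vulkan selected at runtime

    def __post_init__(self):
        # Reject unknown engines early, before any model files are loaded.
        if self.engine not in SUPPORTED_ENGINES:
            raise ValueError(f"unsupported engine: {self.engine}")

cfg = TranscriptionConfig(engine="parakeet", use_gpu=False)
```

A design like this keeps engine selection declarative: swapping transcription backends is a one-field change rather than a code change.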