Welcome to LFM Docs! 👋
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment.
Why LFM2?
Built on a new hybrid architecture, LFM2 sets a new standard for quality, speed, and memory efficiency.
⚡ 3x faster training - The hybrid architecture speeds up both training and inference
🏆 State-of-the-art quality - Outperforms similar-sized models on benchmarks
💾 Memory efficient - Optimized for resource-constrained environments
🌐 Deploy anywhere - Compatible with major inference frameworks and platforms (see the quickstart sketch below)
Learn more about the architecture →
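If you just want to see LFM2 running, the sketch below loads a text model with Hugging Face Transformers. It is a minimal example, assuming the LiquidAI/LFM2-1.2B checkpoint name and a Transformers version with LFM2 support; adjust both for your setup.

```python
# Minimal quickstart sketch (assumed checkpoint name: LiquidAI/LFM2-1.2B).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # swap in any LFM2 text model you have access to
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat prompt with the tokenizer's chat template and generate a short reply.
messages = [{"role": "user", "content": "What is edge AI in one sentence?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```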
📚 Liquid AI Cookbook - Explore examples, tutorials, and end-to-end applications built with LFM2 models and the LEAP SDK. Find fine-tuning notebooks, edge deployment examples, and community-built apps to get started quickly.
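For on-device deployment, one common path is a quantized GGUF build served through llama.cpp. The sketch below uses llama-cpp-python with a placeholder local file name; it assumes you have already obtained a GGUF conversion of an LFM2 text model.

```python
# Hedged sketch: on-device inference via llama-cpp-python.
# The GGUF path below is a placeholder, not an official artifact name.
from llama_cpp import Llama

llm = Llama(
    model_path="./lfm2-1.2b-q4_k_m.gguf",  # placeholder: local GGUF conversion of an LFM2 model
    n_ctx=4096,    # context window
    n_threads=4,   # CPU threads; tune for the target device
)

# Chat-style completion; the chat template is taken from the model's metadata.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why do small models matter on-device?"}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```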
Model Families
💬 Text Models
General-purpose language models from 350M to 8B parameters
Learn more →
👁️ Vision-Language
Multimodal models for image understanding and scene analysis
Learn more →
🎵 Audio
Speech and audio processing models for ASR, TTS, and chat
Learn more →
🎯 Task-Specific
Specialized models for extraction, translation, RAG, and tool use
Learn more →