Welcome to LFM Docs! 👋
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment.
Why LFM2?
Built on a new hybrid architecture, LFM2 sets a new standard for quality, speed, and memory efficiency.
⚡ 3x faster training - New hybrid architecture accelerates training and inference
🏆 State-of-the-art quality - Outperforms similar-sized models on benchmarks
💾 Memory efficient - Optimized for resource-constrained environments
🌐 Deploy anywhere - Compatible with major inference frameworks and platforms
Learn more about the architecture →
Get Started
🚀 Deploy your first LFM in minutes
Get started quickly with step-by-step deployment guides
Get started →
🔍 Explore models
Browse our collection of language models and their capabilities
Learn more →
📖 Inference guides
Learn how to run models for different use cases and platforms
Learn more →
🛠️ Fine-tuning guides
Customize models for your specific requirements and datasets
Learn more →
Model Families
💬 Text Models
General-purpose language models from 350M to 8B parameters
Learn more →
👁️ Vision-Language
Multimodal models for image understanding and scene analysis
Learn more →
🎵 Audio
Speech and audio processing models for ASR, TTS, and chat
Learn more →
🎯 Task-Specific
Specialized models for extraction, translation, RAG, and tool use
Learn more →