LFM2-24B-A2B is Liquid AI's largest Mixture-of-Experts model, combining 24B total parameters with only 2B active parameters per forward pass. This delivers the quality of much larger models with the efficiency needed for laptops and single-GPU deployments.

Specifications

Parameters: 24B (2B active)
Context Length: 32K tokens
Architecture: LFM2 (MoE)
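As a quick sanity check on the MoE ratio, the figures above imply that only a small fraction of the network is used per token (a minimal sketch; it assumes the 2B active / 24B total split from the table applies uniformly per forward pass):

```python
total_params = 24e9   # total parameters (from the specifications above)
active_params = 2e9   # parameters active per forward pass

# Fraction of the network actually exercised per token
active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of parameters active per token")  # → 8.3%
```

This roughly one-in-twelve activation ratio is what lets the model approach 24B-class quality at close to 2B-class inference cost.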

MoE Efficiency: 24B-parameter quality at 2B-parameter inference cost

Laptop-Ready: runs on laptops and single GPUs

Tool Calling: native function calling support

Quick Start
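Since the model supports native function calling, a starting point is to define tools in the widely used OpenAI-style JSON schema and attach them to a chat request. This is a minimal sketch: the `get_weather` function, its schema, and the request shape are illustrative assumptions, not part of a documented LFM2 API.

```python
import json

# Hypothetical tool definition in the common OpenAI-style schema;
# the weather function below is purely illustrative.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A chat transcript the model would see, with the tool list attached.
messages = [
    {"role": "system", "content": "You can call tools when useful."},
    {"role": "user", "content": "What's the weather in Zurich?"},
]

request = {"messages": messages, "tools": [get_weather_tool]}
print(json.dumps(request, indent=2))
```

When the model decides a tool is needed, it emits the function name and JSON arguments; your code executes the function and appends the result to `messages` for the next turn.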