Specifications
| Property | Value |
|---|---|
| Parameters | 8B (1.5B active) |
| Context Length | 32K tokens |
| Architecture | LFM2 (MoE) |
- MoE Efficiency: 8B-parameter quality at a 1.5B active-parameter inference cost
- On-Device: runs on phones and laptops
- Tool Calling: native function calling support (see the tool-calling sketch at the end of Quick Start)
Quick Start
- Transformers (example below)
- llama.cpp (sketch after the Transformers example)
- vLLM (sketch after the Transformers example)
Install:
```bash
pip install "transformers>=5.0.0" torch accelerate
```
Download & Run:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"

# Load the model across available devices in bfloat16
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build the prompt with the model's chat template
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is machine learning?"}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
    return_dict=True,
).to(model.device)

# Sample a response
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
)

# Decode only the newly generated tokens, skipping the prompt
input_length = inputs["input_ids"].shape[1]
response = tokenizer.decode(output[0][input_length:], skip_special_tokens=True)
print(response)
```
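
llama.cpp: a minimal sketch, assuming a GGUF conversion of this model is published under LiquidAI/LFM2-8B-A1B-GGUF (the repo name is an assumption, not confirmed by this card):
```bash
# Fetch a GGUF build directly from the Hugging Face Hub via -hf
# and start an interactive chat. Repo name assumed, not confirmed here.
llama-cli -hf LiquidAI/LFM2-8B-A1B-GGUF
```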
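
vLLM: a minimal sketch, assuming your vLLM build is recent enough to support this MoE architecture:
```bash
# Start an OpenAI-compatible server (defaults to localhost:8000)
vllm serve LiquidAI/LFM2-8B-A1B

# Query it through the standard chat completions endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "LiquidAI/LFM2-8B-A1B", "messages": [{"role": "user", "content": "What is machine learning?"}], "temperature": 0.3}'
```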
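
Tool calling: a minimal sketch of the native function calling support listed above, assuming the model's chat template accepts the standard Transformers `tools=` argument; `get_weather` is a hypothetical stub used only for illustration:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22°C"  # hypothetical stub for illustration

# The tool schema is derived from the function signature and docstring
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[get_weather],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
    return_dict=True,
).to(model.device)

output = model.generate(**inputs, do_sample=True, temperature=0.3, min_p=0.15, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
The model should reply with a tool call; your code then executes the function and feeds the result back as a `tool` role message before generating again.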