LFM2.5-1.2B-Base is the pre-trained foundation model of the LFM2.5 series. It is intended for fine-tuning on custom datasets and for building specialized checkpoints. It is not instruction-tuned; use LFM2.5-1.2B-Instruct for chat applications.

Specifications

Property         Value
Parameters       1.2B
Context Length   32K tokens
Architecture     LFM2.5 (Dense)

Highlights

- Fine-tuning: TRL compatible (SFT, DPO, GRPO); see the SFT sketch after this list
- Custom Training: build domain-specific models on your own data
- 32K Context: extended context window for long documents; see the token-count check below
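As a starting point for fine-tuning, here is a minimal supervised fine-tuning (SFT) sketch using TRL's SFTTrainer. The dataset and output directory are placeholders (not official recommendations); substitute your own domain corpus.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset from the TRL docs; swap in your own domain corpus.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-Base",  # the base checkpoint from this page
    train_dataset=dataset,
    args=SFTConfig(output_dir="lfm2.5-1.2b-sft"),  # hypothetical output path
)
trainer.train()

DPO and GRPO follow the same pattern with TRL's DPOTrainer and GRPOTrainer, trained on preference or reward data instead.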
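To take advantage of the 32K window, it helps to verify that a long document actually fits before generating. A minimal sketch, assuming the 32K limit corresponds to 32,768 tokens; the file path is a hypothetical placeholder:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Base")

# "long_report.txt" stands in for your own document.
text = open("long_report.txt", encoding="utf-8").read()
n_tokens = len(tokenizer(text).input_ids)
print(f"{n_tokens} tokens used of the 32,768-token window")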

Quick Start

Install:
pip install transformers torch accelerate
Download & Run:
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2.5-1.2B-Base", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Base")

# Base model uses raw text completion (not chat template)
inputs = tokenizer("The future of AI is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
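Base models often produce more natural open-ended continuations with sampling enabled. Continuing from the snippet above; the values here are illustrative starting points rather than tuned recommendations:

# Sampled generation; temperature/top_p values are illustrative, not tuned.
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.05,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))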