LFM2.5 models are fully supported by Unsloth. For comprehensive guides and tutorials, see the official Unsloth LFM2.5 documentation. Different training methods require specific dataset formats; see Finetuning Datasets for the format requirements for SFT and GRPO.
Documentation Index
Fetch the complete documentation index at: https://docs.liquid.ai/llms.txt
Use this file to discover all available pages before exploring further.
Notebooks
Get started quickly with these ready-to-run Colab notebooks:
SFT with LoRA
Supervised fine-tuning with parameter-efficient LoRA adapters.
GRPO with LoRA
Reinforcement learning with Group Relative Policy Optimization.
CPT Text Completion
Continued pre-training for text completion tasks.
CPT Translation
Continued pre-training for translation tasks.
Quick Start
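A minimal loading-and-LoRA setup sketch using Unsloth's `FastLanguageModel` API. The checkpoint name below is an assumption for illustration, not a confirmed model ID — substitute the LFM2.5 checkpoint you actually intend to train (see the Unsloth LFM2.5 documentation for exact names).

```python
from unsloth import FastLanguageModel

# Load the base model. The model name here is hypothetical; replace it
# with the real LFM2.5 checkpoint from the Unsloth docs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LiquidAI/LFM2.5-1.2B",  # hypothetical checkpoint name
    max_seq_length=2048,                # pre-allocated; match your data
    load_in_4bit=True,                  # QLoRA: ~4x memory reduction
)

# Attach parameter-efficient LoRA adapters.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",       # MLP layers
    ],
    use_gradient_checkpointing="unsloth",  # Unsloth-optimized checkpointing
)
```

The parameters used here are explained under Key Features and Tips below.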
Key Features
- load_in_4bit=True: Enable QLoRA to reduce memory by ~4x with minimal quality loss
- use_gradient_checkpointing="unsloth": Optimized checkpointing that's 2x faster than the default
- FastLanguageModel.for_inference(): Switch to optimized inference mode after training
Tips
- max_seq_length: Set to your expected maximum sequence length; Unsloth pre-allocates memory for efficiency
- Target modules: Include the MLP layers (gate_proj, up_proj, down_proj) for better quality on smaller models
- Batch size: Unsloth's optimizations allow larger batch sizes; experiment to maximize GPU utilization
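The batch-size and sequence-length tips above land in the trainer configuration. A hedged sketch using TRL's `SFTTrainer`, assuming a `model` and `tokenizer` loaded via Unsloth and a `dataset` formatted per Finetuning Datasets; the hyperparameter values are illustrative starting points, not recommendations:

```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # prepared per the Finetuning Datasets guide
    args=SFTConfig(
        output_dir="outputs",
        max_seq_length=2048,              # match the value passed at load time
        per_device_train_batch_size=8,    # raise until GPU memory is saturated
        gradient_accumulation_steps=2,    # effective batch = 8 * 2 = 16
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
```

Raising `per_device_train_batch_size` while lowering `gradient_accumulation_steps` keeps the effective batch size constant while making better use of the memory Unsloth frees up.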