# Quickstart
The easiest way to get started with LFM models is with Hugging Face Transformers; you can be generating text in about two minutes. All LFM models are available on Hugging Face.
> **Just want to chat with LFM models?** Try Liquid Playground to interact with our base models directly in your browser, with no installation required.
## Installation
Install Transformers and PyTorch:

```bash
pip install transformers torch
```
A GPU is recommended for faster inference, but CPU-only inference works too.
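To confirm which device PyTorch will use, a quick check like the following works (a minimal sketch, assuming the `torch` package installed above):

```python
import torch

# Report whether a CUDA-capable GPU is visible to PyTorch.
# device_map="auto" in the examples below will pick it up automatically.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")
```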
## Basic Usage
Use the `pipeline()` interface for quick text generation:

```python
from transformers import pipeline

# Load the model (downloads the weights from Hugging Face on first run)
generator = pipeline("text-generation", "LiquidAI/LFM2-1.2B", device_map="auto")

# Generate a response to a chat-style prompt
messages = [{"role": "user", "content": "What is machine learning?"}]
response = generator(messages, max_new_tokens=256)

# The model's reply is the last message in the returned conversation
print(response[0]["generated_text"][-1]["content"])
```
Try it in Google Colab →
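The pipeline returns a list with one entry per prompt; when called with a chat-style message list, `generated_text` holds the full conversation, with the model's reply appended last. A minimal sketch of how to unpack that shape (the assistant text below is illustrative, not real model output):

```python
# Illustrative response shape from a chat-style text-generation call.
# The assistant content here is made up for demonstration purposes.
response = [
    {
        "generated_text": [
            {"role": "user", "content": "What is machine learning?"},
            {"role": "assistant", "content": "Machine learning is a field of AI..."},
        ]
    }
]

# The last message in the conversation is the model's reply.
reply = response[0]["generated_text"][-1]
print(reply["role"])
print(reply["content"])
```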
## Next Steps
- Explore Models - Browse all available models and sizes
- Inference - Streaming, vision models, batching, and more
- Fine-tuning - Customize models for your use case
- Liquid AI Cookbook - End-to-end fine-tuning notebooks and project examples