Conversations sent to LFM2 models are rendered with a ChatML-style template. For example:
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
Conversations are formatted using special tokens:
  • <|startoftext|> — Start of the conversation.
  • <|im_start|> — Start of the message. Always followed by the role name (system, user, assistant, or tool) and a line break.
  • <|im_end|> — End of the message.
LFM2 supports four conversation roles:
  • system — (Optional) Defines who the assistant is and how it should respond.
  • user — Messages from the user containing questions and instructions.
  • assistant — Responses from the model.
  • tool — Results from tool/function execution, used in tool-use workflows (see the sketch after this list).
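For illustration, here is a hypothetical message list exercising all four roles. The exact serialization of tool calls and results is defined by the model's chat template, so treat the assistant and tool contents below as placeholders rather than the canonical format:
messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is the weather in Paris?"},
    # An assistant turn that triggered a tool, followed by the tool's
    # result. Both contents are illustrative; the real tool-call format
    # is defined by the model's chat template.
    {"role": "assistant", "content": "Let me check the current weather."},
    {"role": "tool", "content": '{"temperature_c": 18, "condition": "cloudy"}'},
]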
The complete chat template definition can be found in the chat_template.jinja file in each model’s Hugging Face repository.
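You can also inspect the template programmatically: Transformers exposes the Jinja source on the tokenizer's chat_template attribute:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")

# The Jinja template rendered by apply_chat_template()
print(tokenizer.chat_template)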

Text Models

We recommend storing your conversation as a list of dictionaries as follows:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is machine learning?"}
]
You can then apply LFM2’s chat template using the .apply_chat_template() method from Transformers:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")

# Apply chat template
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
print(prompt)
Output:
<|startoftext|><|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is machine learning?<|im_end|>
<|im_start|>assistant

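From here, a minimal sketch of running generation end to end, using standard Transformers APIs (the max_new_tokens value is illustrative):
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")

# Tokenize the chat directly (tokenize=True is the default)
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)

# max_new_tokens is an illustrative value
output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))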
Vision Models

LFM2-VL models follow the same chat template, with additional support for images. In the formatted prompt, each image is represented by a sentinel token (<image>), which the processor automatically expands into image tokens. When creating conversations for vision models, use a structured format where content is a list of image and text entries:
from transformers.image_utils import load_image

image = load_image("path/to/image.jpg")  # or use a PIL Image directly

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},  # PIL Image or loaded image
            {"type": "text", "text": "What is in this image?"},
        ],
    },
]
You can then apply the chat template using the processor’s .apply_chat_template() method:
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("LiquidAI/LFM2.5-VL-1.6B")

# Apply chat template
prompt = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
print(prompt)
Output:
<|startoftext|><|im_start|>system
You are a helpful multimodal assistant by Liquid AI.<|im_end|>
<|im_start|>user
<image>What is in this image?<|im_end|>
<|im_start|>assistant
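To run inference end to end, the processor can tokenize the conversation and prepare the pixel inputs in a single call. The sketch below assumes the AutoModelForImageTextToText class from recent Transformers releases; verify the class and generation settings against the model card:
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("LiquidAI/LFM2.5-VL-1.6B")
model = AutoModelForImageTextToText.from_pretrained("LiquidAI/LFM2.5-VL-1.6B")

# Tokenize the text and preprocess the image in one call
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)

# max_new_tokens is an illustrative value
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])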