# Liquid Docs

> Documentation for Liquid AI Foundation Models (LFM)

## Docs

- [Finetuning Datasets](https://docs.liquid.ai/customization/finetuning-frameworks/datasets.md): Dataset formats for SFT, DPO, and VLM fine-tuning
- [TRL](https://docs.liquid.ai/customization/finetuning-frameworks/trl.md): TRL (Transformer Reinforcement Learning) is a library for fine-tuning and aligning language models using methods like Supervised Fine-Tuning (SFT), Reward Modeling, and Direct Preference Optimization (DPO).
- [Unsloth](https://docs.liquid.ai/customization/finetuning-frameworks/unsloth.md): Unsloth makes fine-tuning LLMs 2-5x faster with 70% less memory through optimized kernels and efficient memory management.
- [Connect AI Tools](https://docs.liquid.ai/customization/getting-started/connect-ai-tools.md): Connect your AI coding tools to Liquid Docs via MCP for live, queryable access to documentation
- [Customization Options](https://docs.liquid.ai/customization/getting-started/welcome.md): Fine-tune and customize Liquid Foundation Models for your specific use cases.
- [Connect AI Tools](https://docs.liquid.ai/deployment/getting-started/connect-ai-tools.md): Connect your AI coding tools to Liquid Docs via MCP for live, queryable access to documentation
- [Deployment Options](https://docs.liquid.ai/deployment/getting-started/welcome.md): Deploy Liquid Foundation Models on any platform — from mobile devices to GPU clusters.
- [Baseten](https://docs.liquid.ai/deployment/gpu-inference/baseten.md): Baseten is an AI infrastructure platform for deploying and serving ML models with optimized inference, autoscaling, and multi-cloud support.
- [Fal](https://docs.liquid.ai/deployment/gpu-inference/fal.md): Fal is a serverless generative media platform offering lightning-fast inference for image, video, and audio generation models.
- [Modal](https://docs.liquid.ai/deployment/gpu-inference/modal.md): Modal is a serverless cloud platform for running AI/ML workloads with instant autoscaling on GPUs and CPUs.
- [SGLang](https://docs.liquid.ai/deployment/gpu-inference/sglang.md): SGLang is a fast serving framework for large language models. It features RadixAttention for efficient prefix caching, optimized CUDA kernels, and continuous batching for high-throughput, low-latency inference.
- [Transformers](https://docs.liquid.ai/deployment/gpu-inference/transformers.md): Transformers is a library for inference and training of pretrained models.
- [vLLM](https://docs.liquid.ai/deployment/gpu-inference/vllm.md): vLLM is a high-throughput and memory-efficient inference engine for LLMs. It supports efficient serving with PagedAttention, continuous batching, and optimized CUDA kernels.
- [Advanced Features](https://docs.liquid.ai/deployment/on-device/android/advanced-features.md): API reference for constrained generation and function calling in the LEAP Android SDK
- [AI Agent Usage Guide](https://docs.liquid.ai/deployment/on-device/android/ai-agent-usage-guide.md): Complete reference for using the LEAP Android SDK
- [Quick Start Guide](https://docs.liquid.ai/deployment/on-device/android/android-quick-start-guide.md): Get up and running with the LEAP Android SDK in minutes. Install the SDK, load models, and start generating content.
- [Cloud AI Comparison](https://docs.liquid.ai/deployment/on-device/android/cloud-ai-comparison.md): Compare the LEAP Android SDK with cloud-based AI APIs like OpenAI
- [Constrained Generation](https://docs.liquid.ai/deployment/on-device/android/constrained-generation.md): Generate structured JSON output with compile-time validation using constrained generation
- [Conversation & Generation](https://docs.liquid.ai/deployment/on-device/android/conversation-generation.md): API reference for conversations, model runners, and generation in the LEAP Android SDK
- [Function Calling](https://docs.liquid.ai/deployment/on-device/android/function-calling.md): Function calling lets the model request calls to predefined functions provided by the app so it can interact with its environment.
- [Messages & Content](https://docs.liquid.ai/deployment/on-device/android/messages-content.md): API reference for chat messages and content types in the LEAP Android SDK
- [Model Loading](https://docs.liquid.ai/deployment/on-device/android/model-loading.md): API reference for loading models in the LEAP Android SDK
- [Utilities](https://docs.liquid.ai/deployment/on-device/android/utilities.md): API reference for error handling, serialization, and utilities in the LEAP Android SDK
- [Advanced Features](https://docs.liquid.ai/deployment/on-device/ios/advanced-features.md): API reference for constrained generation and function calling in the LEAP iOS SDK
- [AI Agent Usage Guide](https://docs.liquid.ai/deployment/on-device/ios/ai-agent-usage-guide.md): Complete reference for using the LEAP iOS SDK
- [Cloud AI Comparison](https://docs.liquid.ai/deployment/on-device/ios/cloud-ai-comparison.md): Compare the LEAP iOS SDK with cloud-based AI APIs like OpenAI
- [Constrained Generation](https://docs.liquid.ai/deployment/on-device/ios/constrained-generation.md): Generate structured JSON output with compile-time validation using constrained generation
- [Conversation & Generation](https://docs.liquid.ai/deployment/on-device/ios/conversation-generation.md): API reference for conversations, model runners, and generation in the LEAP iOS SDK
- [Function Calling](https://docs.liquid.ai/deployment/on-device/ios/function-calling.md): Function calling lets the model request calls to predefined functions provided by the app so it can interact with its environment.
- [Quick Start Guide](https://docs.liquid.ai/deployment/on-device/ios/ios-quick-start-guide.md): Get up and running with the LEAP iOS SDK in minutes. Install the SDK, load models, and start generating content.
- [Messages & Content](https://docs.liquid.ai/deployment/on-device/ios/messages-content.md): API reference for chat messages and content types in the LEAP iOS SDK
- [Model Loading](https://docs.liquid.ai/deployment/on-device/ios/model-loading.md): API reference for loading models in the LEAP iOS SDK
- [Utilities](https://docs.liquid.ai/deployment/on-device/ios/utilities.md): API reference for error handling and utilities in the LEAP iOS SDK
- [llama.cpp](https://docs.liquid.ai/deployment/on-device/llama-cpp.md): llama.cpp is a C++ library for efficient LLM inference with minimal dependencies. It's designed for CPU-first inference with cross-platform support.
- [LM Studio](https://docs.liquid.ai/deployment/on-device/lm-studio.md): LM Studio is a desktop application for running LLMs locally with a graphical interface.
- [MLX](https://docs.liquid.ai/deployment/on-device/mlx.md): MLX is Apple's machine learning framework optimized for Apple Silicon. It provides efficient inference on Mac devices with M-series chips (M1, M2, M3, M4) using Metal acceleration for GPU computing.
- [Ollama](https://docs.liquid.ai/deployment/on-device/ollama.md): Ollama is a command-line tool for running LLMs locally with a simple interface. It provides easy model management and serving with an OpenAI-compatible API.
- [ONNX](https://docs.liquid.ai/deployment/on-device/onnx.md): ONNX provides a platform-agnostic inference specification that allows running the model on device-specific runtimes, including CPU, GPU, NPU, and WebGPU.
- [Authentication](https://docs.liquid.ai/deployment/tools/model-bundling/authentication.md): Authentication commands for the LEAP Bundle CLI
- [Bundle Creation](https://docs.liquid.ai/deployment/tools/model-bundling/bundle-creation.md): Commands for creating and validating bundle requests in the LEAP Bundle CLI
- [Bundle Management](https://docs.liquid.ai/deployment/tools/model-bundling/bundle-management.md): Commands for listing and canceling bundle requests in the LEAP Bundle CLI
- [Changelog](https://docs.liquid.ai/deployment/tools/model-bundling/changelog.md)
- [Configuration](https://docs.liquid.ai/deployment/tools/model-bundling/configuration.md): Configuration commands and file format for the LEAP Bundle CLI
- [Data Privacy](https://docs.liquid.ai/deployment/tools/model-bundling/data-privacy.md): This page outlines how the LEAP Model Bundling Service handles your data, including what information we collect and delete.
- [Download](https://docs.liquid.ai/deployment/tools/model-bundling/download.md): Download commands for bundle requests and LEAP models in the LEAP Bundle CLI
- [Quick Start](https://docs.liquid.ai/deployment/tools/model-bundling/quick-start.md): The Bundling Service helps users create and manage model bundles for the Liquid Edge AI Platform (LEAP). Currently users interact with it through a command-line interface (CLI).
- [Reference](https://docs.liquid.ai/deployment/tools/model-bundling/reference.md): Reference information for the LEAP Bundle CLI including limitations, error handling, and exit codes
- [Build AI Agents with Koog Framework on Android](https://docs.liquid.ai/examples/android/leap-koog-agent.md)
- [Generate Structured Recipes with Constrained Output](https://docs.liquid.ai/examples/android/recipe-generator-constrained-output.md)
- [Product Slogan Generator with LeapSDK](https://docs.liquid.ai/examples/android/slogan-generator.md)
- [Image Understanding with Vision Language Models](https://docs.liquid.ai/examples/android/vision-language-model-example.md)
- [Web Content Summarizer for Android](https://docs.liquid.ai/examples/android/web-content-summarizer.md)
- [Connect AI Tools](https://docs.liquid.ai/examples/connect-ai-tools.md): Connect your AI coding tools to Liquid Docs via MCP for live, queryable access to documentation
- [Fine-tuning LFM2-VL to identify car makers from images](https://docs.liquid.ai/examples/customize-models/car-maker-identification.md)
- [Fine-tuning LFM for a local Home Assistant](https://docs.liquid.ai/examples/customize-models/home-assistant.md)
- [Fine-Tune a Vision-Language Model on Satellite Imagery](https://docs.liquid.ai/examples/customize-models/satellite-vlm.md)
- [Build a Wildfire Prevention System with a Compact VLM](https://docs.liquid.ai/examples/customize-models/wildfire-prevention.md)
- [Examples Library](https://docs.liquid.ai/examples/index.md)
- [Audio car cockpit demo](https://docs.liquid.ai/examples/laptop-examples/audio-car-cockpit.md)
- [Audio transcription in real-time](https://docs.liquid.ai/examples/laptop-examples/audio-to-text-in-real-time.md)
- [Browser control with GRPO reinforcement learning](https://docs.liquid.ai/examples/laptop-examples/browser-control.md)
- [Flight search assistant with tool calling](https://docs.liquid.ai/examples/laptop-examples/flight-search-assistant.md)
- [Invoice extractor tool](https://docs.liquid.ai/examples/laptop-examples/invoice-extractor-tool-with-liquid-nanos.md)
- [Bidirectional English to Korean translation CLI](https://docs.liquid.ai/examples/laptop-examples/lfm2-english-to-korean.md)
- [Meeting summarization CLI](https://docs.liquid.ai/examples/laptop-examples/meeting-summarization.md)
- [LFM2.5-Audio browser demo with WebGPU](https://docs.liquid.ai/examples/web/audio-webgpu-demo.md)
- [Hand & Voice Racer](https://docs.liquid.ai/examples/web/hand-voice-racer.md)
- [Real-time video captioning with LFM2.5-VL-1.6B and WebGPU](https://docs.liquid.ai/examples/web/vl-webgpu-demo.md)
- [Connect AI Tools](https://docs.liquid.ai/lfm/getting-started/connect-ai-tools.md): Connect your AI coding tools to Liquid Docs via MCP for live, queryable access to documentation
- [Model License](https://docs.liquid.ai/lfm/getting-started/model-license.md): Understand how you can use, modify, and distribute Liquid Foundation Models under the LFM Open License v1.0.
- [Welcome to LFM Docs!](https://docs.liquid.ai/lfm/getting-started/welcome.md): Liquid Foundation Models are a new generation of hybrid models developed by Liquid AI, designed for efficient deployment anywhere.
- [Contributing to Docs](https://docs.liquid.ai/lfm/help/contributing.md): Guidelines for contributing to Liquid AI documentation.
- [FAQs](https://docs.liquid.ai/lfm/help/faqs.md): Frequently asked questions about LFM models and deployment.
- [Troubleshooting](https://docs.liquid.ai/lfm/help/troubleshooting.md): Common issues and solutions when working with LFM models.
- [Chat Template](https://docs.liquid.ai/lfm/key-concepts/chat-template.md): The chat template defines how conversations are structured using special tokens and roles. LFM2 uses a ChatML-like chat template to structure conversations.
- [Prompting Guide](https://docs.liquid.ai/lfm/key-concepts/text-generation-and-prompting.md): This guide covers how to effectively use system prompts, user prompts, and assistant prompts with LFM2 models, along with an overview of sampling parameters and special prompting recipes for specific models.
- [Tool Use](https://docs.liquid.ai/lfm/key-concepts/tool-use.md): LFM2.5 and LFM2 models support tool use (function calling), enabling models to interact with APIs, databases, and external services to provide accurate, up-to-date information.
- [Audio Models](https://docs.liquid.ai/lfm/models/audio-models.md): Liquid's LFM audio models are among the smallest fully interleaved audio/text-in, audio/text-out models with a complete reasoning backbone — eliminating the need to combine separate TTS/ASR encoders with a standalone language model.
- [Model Library](https://docs.liquid.ai/lfm/models/complete-library.md): Liquid Foundation Models (LFMs) are a new class of multimodal architectures built for fast inference and on-device deployment. Browse all available models and formats here.
- [Liquid Nanos](https://docs.liquid.ai/lfm/models/liquid-nanos.md): A library of low-latency, task-specific models fine-tuned on Liquid's multimodal LFM base models. Nanos deliver high accuracy on narrow tasks while remaining small enough to deploy on-device or serve economically at high volume.
- [Text Models](https://docs.liquid.ai/lfm/models/text-models.md): Liquid's LFM text models range from 350M to 8B parameters, delivering ultra-low-latency generation while matching the performance of much larger models. They come in both dense and MoE variants to deploy flexibly across different devices.
- [Vision Models](https://docs.liquid.ai/lfm/models/vision-models.md): Liquid's LFM vision models pair our lightweight LFM text backbones with dynamic SigLIP2 image encoders, delivering fast multimodal inference on-device while matching larger VLMs in quality.

## OpenAPI Specs

- [openapi](https://docs.liquid.ai/api-reference/openapi.json)