Latest version: v0.10.6

The LEAP SDK is a Kotlin Multiplatform library: the same ModelRunner / Conversation / MessageResponse API runs on every supported target. The code differs only in language (Swift vs. Kotlin) and packaging (SPM, Gradle, or Kotlin/Native plugin); the call shapes are identical.
Migrating from 0.9.x? v0.10.0 unifies the SDK into a single Kotlin Multiplatform distribution published from Liquid4All/leap-sdk. The standalone Liquid4All/leap-ios repo is no longer the source of truth. See the SDK changelog for the transition story and for drop-in replacements for legacy Leap.load(...) / LiquidEngine(...) call sites.
iOS 17.0+ or macOS 15.0+ (Mac Catalyst 17.0+ also supported).
A physical iPhone or iPad with at least 3 GB RAM for best performance. The simulator works for development but runs models much slower.
v0.10.0 raises the minimum iOS deployment target from 15.0 to 17.0 and macOS from 12.0 to 15.0. Apps targeting older OSes need to pin to 0.9.x or bump their deployment target before upgrading.
Kotlin Android Plugin 2.3.0+ and Android Gradle Plugin 8.13.0+:
```kotlin
plugins {
    id("com.android.application") version "8.13.2" apply false
    id("com.android.library") version "8.13.2" apply false
    id("org.jetbrains.kotlin.android") version "2.3.20" apply false
}
```
A physical Android device that supports arm64-v8a, with developer mode enabled and 3 GB+ of RAM.
The SDK may crash when loading model bundles in emulators. Always test on a physical device.
JDK 11+ (Hotspot or any standard JVM).
Kotlin 2.3.0+ if you're writing Kotlin (Java consumers work too).
Supported hosts: macOS ARM64, Linux x86_64, Linux aarch64, Windows x86_64, Windows aarch64.
The leap-sdk JAR bundles the JNI binaries for every supported OS/arch; no extra setup is needed.
Kotlin 2.3.20+ and Gradle 8.x for the Kotlin/Native build.
Linux runtime: glibc 2.34+ (Ubuntu 22.04, Debian 12, RHEL 9, or newer).
Windows runtime: Windows 10+.
Versions 0.10.0, 0.10.1, and 0.10.2 cannot link a working Kotlin/Native executable due to Maven Central / cinterop issues. Pin to 0.10.5 or newer; Maven Central is immutable per GAV, so the older versions cannot be republished. See the changelog for the full story.
The Leap SDK ships exclusively through Swift Package Manager in v0.10.0. CocoaPods support has been removed.
1. In Xcode, choose File → Add Package Dependencies.
2. Enter https://github.com/Liquid4All/leap-sdk.git.
3. Select the 0.10.6 release (or newer).
4. Add the products you need to your app target.
The package vends five products. Most apps only need one or two:
| Product | What it provides | Re-exports |
| --- | --- | --- |
| LeapSDK | Core inference + conversation API. Use this when you only need foreground manifest loads via LeapDownloader.loadModel(...). | — |
| LeapModelDownloader | The Swift ModelDownloader class with URLSession-backed loadModel(...) / downloadModel(...) and background-session support. Re-exports every LeapSDK Kotlin type, so a single import LeapModelDownloader reaches Conversation, ModelRunner, ChatMessage, Leap, the convenience extensions, and so on. | every LeapSDK type |
| LeapOpenAIClient | OpenAI-compatible cloud chat client | — |
| LeapUI | Voice assistant widget (SwiftUI/AppKit/Compose) | LeapSDK |
| LeapSDKMacros | @Generatable / @Guide macros | swift-syntax |
Pick exactly one of LeapSDK or LeapModelDownloader per target. LeapModelDownloader is the recommended choice: its ModelDownloader.loadModel(...) covers everything LeapDownloader.loadModel(...) does and adds background-friendly transfers, and LeapSDK's types are re-exported through the same import LeapModelDownloader statement. Drop the LeapSDK dependency from any target that already pulls in LeapModelDownloader. Add LeapSDKMacros if you use @Generatable constrained generation. LeapOpenAIClient and LeapUI are independent opt-ins.
Dual-import build-time guard (v0.10.6+). LeapModelDownloader's umbrella header carries a __has_include(<LeapSDK/LeapSDK.h>) && !defined(LEAP_DUAL_IMPORT_ALLOW) check that fires #error at preprocessing time if both LeapSDK and LeapModelDownloader are linked into the same target. The K/N export creates a distinct Swift type per Kotlin protocol per framework, so LeapSDK.Conversation and LeapModelDownloader.Conversation are different types, and every protocol reference triggers "ambiguous for type lookup."

Opt out only for the legitimate LeapUI + LeapModelDownloader combination (LeapUI transitively bundles LeapSDK and there's no source-level workaround): add LEAP_DUAL_IMPORT_ALLOW=1 to OTHER_CFLAGS for the affected target, and qualify any ambiguous Swift type with its source module (e.g. LeapModelDownloader.Conversation), or stick to a single import per file.
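Conceptually, the guard boils down to a preprocessor check along these lines. This is an illustrative sketch only: the __has_include condition and the LEAP_DUAL_IMPORT_ALLOW macro come from the description above, while the surrounding header layout and the error message are assumptions, not the shipped source.

```objc
// Hypothetical sketch of the umbrella-header guard (not the shipped source).
#if __has_include(<LeapSDK/LeapSDK.h>) && !defined(LEAP_DUAL_IMPORT_ALLOW)
#error "LeapSDK and LeapModelDownloader are linked into the same target. \
Drop the LeapSDK dependency (LeapModelDownloader re-exports it), or set \
LEAP_DUAL_IMPORT_ALLOW=1 in OTHER_CFLAGS for the LeapUI combination."
#endif
```

Because the check runs at preprocessing time, the misconfiguration surfaces as a single clear build error rather than dozens of "ambiguous for type lookup" diagnostics later in the compile.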
Framework type (v0.10.6+). LeapModelDownloader.xcframework is now a dynamic framework (it was static in 0.10.5) and bundles the three inference-engine dylibs under Frameworks/. SPM applies Embed & Sign automatically; manual Xcode integrators must select "Embed & Sign" on the framework instead of "Do Not Embed".
Pin to explicit binary XCFrameworks
For explicit pinning, declare each framework as a .binaryTarget in your Package.swift. The XCFramework assets live on the Liquid4All/leap-sdk v0.10.6 release page; copy the SHA-256 values from there.
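A binary-target declaration might look like the following sketch. The asset file name and checksum are placeholders (copy the real URL and SHA-256 value from the release page, or compute it with swift package compute-checksum); only the .binaryTarget shape itself is standard SPM.

```swift
// swift-tools-version: 5.9
// Illustrative Package.swift sketch for pinning a binary XCFramework.
// <asset>.zip and the checksum are placeholders, not real values.
import PackageDescription

let package = Package(
    name: "MyApp",
    targets: [
        .binaryTarget(
            name: "LeapSDK",
            url: "https://github.com/Liquid4All/leap-sdk/releases/download/v0.10.6/<asset>.zip",
            checksum: "<sha256-from-release-page>"
        )
        // Add further .binaryTarget entries the same way; note the UI widget's
        // binary target is named LeapUi (lowercase i), as described below.
    ]
)
```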
The constrained-generation macros (@Generatable, @Guide) are Swift macros, not XCFrameworks β they ship as the LeapSDKMacros source target inside the SPM package and cannot be installed as a .binaryTarget. If you need them, use the standard SPM package URL above (or add the LeapSDKMacros source target separately on top of your binary targets).
Note that the binary target name is LeapUi (lowercase i): import LeapUi in Swift sources matches the binary-target module name, even though the SPM library product is LeapUI.
Do not add ai.liquid.leap:leap-model-downloader to a non-Android JVM project: that module is Android-only (WorkManager + foreground service). Use LeapDownloader from the core leap-sdk instead.
Maven projects: use the leap-sdk-jvm artifact ID (KMP libraries require the -jvm suffix when consumed from pure Maven).
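For a pure Maven consumer, the -jvm-suffixed artifact mentioned above would be declared roughly like this. The group ID is taken from the Android coordinates earlier on this page and the version from the latest release; treat the exact coordinates as an assumption to verify against Maven Central.

```xml
<!-- Hypothetical pom.xml fragment: KMP libraries need the -jvm artifact
     when consumed from pure Maven. Verify coordinates on Maven Central. -->
<dependency>
  <groupId>ai.liquid.leap</groupId>
  <artifactId>leap-sdk-jvm</artifactId>
  <version>0.10.6</version>
</dependency>
```

Gradle consumers do not need the suffix: Gradle resolves the KMP variant from the root ai.liquid.leap:leap-sdk coordinate automatically.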
Apply the ai.liquid.leap.nativelibs plugin: it installs the engine .so/.dll files next to your linked executable and wires the linker -L<dir> flag automatically.
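A minimal Kotlin/Native build script applying the plugin might look like this sketch. The plugin ID comes from the text above; the plugin version, target names, and overall layout are assumptions, not a verified template.

```kotlin
// build.gradle.kts - illustrative sketch, not a verified template.
plugins {
    kotlin("multiplatform") version "2.3.20"
    id("ai.liquid.leap.nativelibs") version "0.10.6" // version is a placeholder
}

kotlin {
    // Pick the target matching your host: linuxX64, linuxArm64,
    // mingwX64, or macosArm64.
    linuxX64 {
        binaries.executable()
    }
    // The nativelibs plugin copies the engine .so/.dll files next to the
    // linked executable and adds the -L<dir> linker flag automatically.
}
```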
The recommended path is manifest-based loading. On every platform, the platform downloader's loadModel(...) downloads (if needed) and loads in one call: LeapModelDownloader.loadModel(...) on iOS / macOS / Android, and LeapDownloader.loadModel(...) on JVM and Linux / Windows Kotlin/Native. All paths fetch from the LEAP Model Library on first use and load from cache thereafter.
Swift (iOS / macOS)
Kotlin (Android)
Kotlin (JVM / native)
```swift
import LeapModelDownloader
import Combine

@MainActor
final class ChatViewModel: ObservableObject {
    @Published var isLoading = false
    @Published var conversation: Conversation?

    private let modelsDir: String = {
        let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first!.path
        return (caches as NSString).appendingPathComponent("leap_models")
    }()

    private lazy var downloader = ModelDownloader(
        config: LeapDownloaderConfig(saveDir: modelsDir)
        // For background transfers, pass:
        // sessionConfiguration: .background(withIdentifier: "com.myapp.leap.downloads")
    )

    private var modelRunner: ModelRunner?
    private var generationTask: Task<Void, Never>?

    func loadModel() async {
        isLoading = true
        defer { isLoading = false }
        do {
            let runner = try await downloader.loadModel(
                modelName: "LFM2-1.2B",
                quantizationType: "Q5_K_M"
            ) { fraction, _ in
                // fraction: Double (0...1) · bytesPerSecond: Int64
            }
            conversation = runner.createConversation(
                systemPrompt: "You are a helpful travel assistant."
            )
            self.modelRunner = runner
        } catch {
            print("Failed to load model: \(error)")
        }
    }
}
```
ModelDownloader.loadModel(...) runs the file transfer through URLSession (so it inherits background-session support when you pass a sessionConfiguration) and then loads the on-disk files in place, so there is no need to pair the downloader with a separate loader. If you only need foreground transfers and cross-platform Swift/Kotlin code, LeapDownloader.loadModel(modelName:, quantizationType:) has the same shape minus the URLSession integration; it ships in the same LeapModelDownloader SPM product, so no extra import is needed. See Model Loading.
```kotlin
import android.app.Application
import androidx.lifecycle.AndroidViewModel
import androidx.lifecycle.viewModelScope
import ai.liquid.leap.Conversation
import ai.liquid.leap.ModelRunner
import ai.liquid.leap.model_downloader.LeapModelDownloader
import ai.liquid.leap.model_downloader.LeapModelDownloaderNotificationConfig
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

class ChatViewModel(application: Application) : AndroidViewModel(application) {
    private val modelDownloader = LeapModelDownloader(
        application,
        notificationConfig = LeapModelDownloaderNotificationConfig.build {
            notificationTitleDownloading = "Downloading AI model..."
            notificationTitleDownloaded = "Model ready!"
            notificationContentDownloading = "Please wait while the model downloads"
        }
    )

    private var modelRunner: ModelRunner? = null
    private var conversation: Conversation? = null

    private val _isLoading = MutableStateFlow(false)
    val isLoading: StateFlow<Boolean> = _isLoading.asStateFlow()

    private val _downloadProgress = MutableStateFlow(0f)
    val downloadProgress: StateFlow<Float> = _downloadProgress.asStateFlow()

    fun loadModel() {
        viewModelScope.launch {
            _isLoading.value = true
            try {
                modelRunner = modelDownloader.loadModel(
                    modelName = "LFM2-1.2B",
                    quantizationType = "Q5_K_M",
                    progress = { _downloadProgress.value = it.progress }
                )
                conversation = modelRunner?.createConversation()
            } finally {
                _isLoading.value = false
            }
        }
    }

    override fun onCleared() {
        super.onCleared()
        runBlocking(Dispatchers.IO) { modelRunner?.unload() }
    }
}
```
```kotlin
import ai.liquid.leap.LeapDownloader
import ai.liquid.leap.LeapDownloaderConfig
import ai.liquid.leap.message.ChatMessage
import ai.liquid.leap.message.MessageResponse
import kotlinx.coroutines.runBlocking
import java.nio.file.Paths

fun main() = runBlocking {
    // Linux/macOS: ~/.cache/leap · Windows: %LOCALAPPDATA%\leap
    val cacheDir = Paths.get(System.getProperty("user.home"), ".cache", "leap").toString()
    val downloader = LeapDownloader(config = LeapDownloaderConfig(saveDir = cacheDir))

    val runner = downloader.loadModel(
        modelName = "LFM2-1.2B",
        quantizationType = "Q5_K_M",
        progress = { p -> println("Downloading: ${(p.progress * 100).toInt()}%") },
    )

    val conversation = runner.createConversation(systemPrompt = "You are a helpful assistant.")
    conversation.generateResponse(ChatMessage.user("Hello!")).collect { resp ->
        when (resp) {
            is MessageResponse.Chunk -> print(resp.text)
            is MessageResponse.Complete -> println("\n[done]")
            else -> {}
        }
    }
    runner.unload()
}
```
LeapDownloader is the cross-platform downloader: same shape on JVM, Linux native, Windows native, and macOS Kotlin/Native. There is no Android Context to pass; provide a writable cache directory via LeapDownloaderConfig(saveDir = ...).
When you already have a model file on disk (shipped as an app asset, adb push-ed for development, or downloaded by your own pipeline), use loadSimpleModel(model: ModelSource(...)) to skip the LEAP Model Library lookup entirely.
For a vision-capable model, pass mmprojPath. For an audio-capable model, pass audioDecoderPath (and optionally audioTokenizerPath).
```kotlin
val runner = downloader.loadSimpleModel(
    model = ModelSource(
        modelPath = "/path/to/lfm2-1_2b-q4_k_m.gguf",
        modelName = "LFM2-1.2B-Instruct",
        quantizationId = "Q4_K_M"
    )
)
```
Pass mmprojPath for vision, audioDecoderPath (+ optional audioTokenizerPath) for audio. See Model Loading for the full ModelSource reference. Note: ModelSource uses quantizationId (the loader parameters use quantizationType).
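As a sketch, a local load of a vision-capable model might pass the companion file like this. That mmprojPath is a ModelSource field is inferred from the description above, and the paths and model names are placeholders; verify the exact field placement against the Model Loading reference.

```kotlin
// Hypothetical sketch: local load of a vision-capable model together with
// its mmproj companion file. Paths and names are placeholders.
val visionRunner = downloader.loadSimpleModel(
    model = ModelSource(
        modelPath = "/path/to/lfm2-vl-q4_k_m.gguf",
        mmprojPath = "/path/to/lfm2-vl-mmproj.gguf", // vision projector companion
        modelName = "LFM2-VL",
        quantizationId = "Q4_K_M"
    )
)
```

An audio-capable model would instead supply audioDecoderPath (and optionally audioTokenizerPath) in the same fashion.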
If the loaded model is multimodal (and its companion files were detected), you can attach a non-text part (an image, a WAV blob, or raw PCM samples) alongside the text in a ChatMessage.
Multimodality is model-specific. Most multimodal models we ship are text plus one other modality: text + vision (the VLM family) or text + audio (the audio family), not both in the same checkpoint. Send .image(...) parts only to a vision-capable model, and .audio(...) / .fromFloatSamples(...) parts only to an audio-capable model. Mixing modalities a model wasn't trained on will either fail to load the companion file or produce nonsense. Check the model's Hugging Face card before wiring up a non-text input path.
Swift (iOS / macOS)
Kotlin (all platforms)
```swift
// Text + image (vision-capable model)
let imageMessage = ChatMessage(
    role: .user,
    content: [.text("Describe what you see."), .image(jpegData)]
)

// Text + WAV audio (audio-capable model)
let wavMessage = ChatMessage(
    role: .user,
    content: [.text("Transcribe and summarize this clip."), .audio(wavData)]
)

// Text + raw PCM samples (audio-capable model)
let pcmMessage = ChatMessage(
    role: .user,
    content: [
        .text("Give feedback on my pronunciation."),
        ChatMessageContent.fromFloatSamples(samples, sampleRate: 16000)
    ]
)
```
```kotlin
// Text + image (vision-capable model)
val imageMessage = ChatMessage.user(
    content = listOf(
        ChatMessageContent.Text("Describe what you see."),
        ChatMessageContent.Image(jpegBytes)
    )
)

// Text + WAV audio (audio-capable model)
val wavMessage = ChatMessage.user(
    content = listOf(
        ChatMessageContent.Text("Transcribe and summarize this clip."),
        ChatMessageContent.Audio(wavBytes)
    )
)

// Text + raw PCM samples (audio-capable model)
val pcmMessage = ChatMessage.user(
    content = listOf(
        ChatMessageContent.Text("Give feedback on my pronunciation."),
        ChatMessageContent.AudioPcmF32(samples, sampleRate = 16000)
    )
)
```