Quick Start Guide

Prerequisites

Make sure you have:

  • Xcode 15.0 or later with Swift 5.9.
  • An iOS project targeting iOS 15.0+ (macOS 12.0+ or Mac Catalyst 15.0+ are also supported).
  • A physical iPhone or iPad with at least 3 GB RAM for best performance. The simulator works for development but runs models much slower.
Set the corresponding deployment targets in your project settings:

    iOS Deployment Target: 15.0
    macOS Deployment Target: 12.0
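
If your app is organised as a Swift package, the same minimums can be declared in its manifest (a minimal sketch; the package name is illustrative):

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyApp",  // illustrative name
    platforms: [
        .iOS(.v15),
        .macOS(.v12)
    ]
)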

Warning: Always test on a real device before shipping. Simulator performance is not representative of production behaviour.

Install the SDK

Install the package with Swift Package Manager:

  1. In Xcode, choose File -> Add Package Dependencies.
  2. Enter https://github.com/Liquid4All/leap-ios.git.
  3. Select the 0.7.0 release (or newer).
  4. Add the LeapSDK product to your app target.
  5. (Optional) Add LeapModelDownloader if you plan to download model bundles at runtime.
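
If you manage dependencies in a Package.swift instead, the equivalent entries extend the manifest sketched under Prerequisites (the URL, version, and product names come from the steps above; the leap-ios package identity follows from the repository URL):

dependencies: [
    .package(url: "https://github.com/Liquid4All/leap-ios.git", from: "0.7.0")
],
targets: [
    .target(
        name: "MyApp",  // illustrative target name, as before
        dependencies: [
            .product(name: "LeapSDK", package: "leap-ios"),
            // Optional, for runtime model downloads:
            .product(name: "LeapModelDownloader", package: "leap-ios")
        ]
    )
]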

Info: The constrained-generation macros (@Generatable, @Guide) ship inside the LeapSDK product; no additional package is required.
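
As a preview of what those macros enable, a constrained-output type could look roughly like this (an illustrative sketch only; the property names are invented, and the exact macro semantics are covered in the structured-output documentation):

import LeapSDK

// Illustrative sketch: a schema the model can be constrained to emit.
@Generatable
struct TravelTip {
    @Guide("Name of the city")
    var city: String

    @Guide("A one-sentence tip for visiting the city")
    var tip: String
}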

Getting and Loading Models

The SDK can load models in two formats:

  • GGUF manifests (recommended for new projects: better inference performance and tuned default generation parameters)
  • ExecuTorch bundles (legacy)

The LEAP Edge SDK supports directly downloading LEAP models in GGUF format. Given the model name and quantization method (which you can find in the LEAP Model Library), the SDK will automatically download the necessary GGUF files along with generation parameters for optimal performance.

import LeapSDK
import LeapModelDownloader
import Combine

@MainActor
final class ChatViewModel: ObservableObject {
    @Published var isLoading = false
    @Published var conversation: Conversation?

    private var modelRunner: ModelRunner?
    private var generationTask: Task<Void, Never>?

    func loadModel() async {
        isLoading = true
        defer { isLoading = false }

        do {
            // LEAP will download the model if needed or reuse a cached copy.
            let modelRunner = try await Leap.load(
                model: "LFM2-1.2B",
                quantization: "Q5_K_M",
                downloadProgressHandler: { progress, speed in
                    // progress: Double (0...1)
                    // speed: bytes per second
                }
            )

            conversation = modelRunner.createConversation(
                systemPrompt: "You are a helpful travel assistant.")
            self.modelRunner = modelRunner
        } catch {
            print("Failed to load model: \(error)")
        }
    }

    func send(_ text: String) {
        guard let conversation else { return }

        generationTask?.cancel()

        let userMessage = ChatMessage(role: .user, content: [.text(text)])

        generationTask = Task { [weak self] in
            do {
                for try await response in conversation.generateResponse(
                    message: userMessage,
                    generationOptions: GenerationOptions(temperature: 0.7)
                ) {
                    self?.handle(response)
                }
            } catch {
                print("Generation failed: \(error)")
            }
        }
    }

    func stopGeneration() {
        generationTask?.cancel()
    }

    private func handle(_ response: MessageResponse) {
        switch response {
        case .chunk(let delta):
            print(delta, terminator: "")  // Update UI binding here
        case .reasoningChunk(let thought):
            print("Reasoning:", thought)
        case .audioSample(let samples, let sr):
            print("Received audio samples \(samples.count) at sample rate \(sr)")
        case .functionCall(let calls):
            print("Requested calls: \(calls)")
        case .complete(let completion):
            if let stats = completion.stats {
                print("Finished with \(stats.totalTokens) tokens")
            }
            let text = completion.message.content.compactMap { part -> String? in
                if case .text(let value) = part { return value }
                return nil
            }.joined()
            print("Final response:", text)
            // completion.message.content may also include `.audio` entries
            // you can persist or replay.
        }
    }
}
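
The downloadProgressHandler above only reports raw numbers; turning them into a display string is straightforward (an illustrative helper using Foundation's ByteCountFormatter):

import Foundation

// Illustrative: format the handler's raw values for a progress label.
func progressLabel(progress: Double, bytesPerSecond: Double) -> String {
    let percent = Int(progress * 100)
    let speed = ByteCountFormatter.string(
        fromByteCount: Int64(bytesPerSecond), countStyle: .binary)
    return "\(percent)% (\(speed)/s)"
}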

Stream responses

send(_:) (shown above) launches a Task that consumes the AsyncThrowingStream returned by Conversation.generateResponse. Each MessageResponse case maps to UI updates, tool execution, or completion metadata. Cancel the task manually (for example via stopGeneration()) to interrupt generation early. You can also observe conversation.isGenerating to disable UI controls while a request is in flight.
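
For context, here is one way the view model above might be wired into a SwiftUI view (an illustrative sketch; only ChatViewModel and its members come from the example):

import SwiftUI

struct ChatView: View {
    @StateObject private var viewModel = ChatViewModel()
    @State private var draft = ""

    var body: some View {
        VStack {
            TextField("Ask something", text: $draft)
                .textFieldStyle(.roundedBorder)

            HStack {
                Button("Send") {
                    viewModel.send(draft)
                    draft = ""
                }
                .disabled(viewModel.isLoading || viewModel.conversation == nil)

                Button("Stop") { viewModel.stopGeneration() }
            }
        }
        .padding()
        .task {
            // Load (or reuse) the cached model when the view appears.
            await viewModel.loadModel()
        }
    }
}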

Send images and audio (optional)

When the loaded model ships with multimodal weights (and the SDK detected their companion files at load time), you can mix text, image, and audio content in the same message:

let message = ChatMessage(
    role: .user,
    content: [
        .text("Describe what you see."),
        .image(jpegData)  // Data containing JPEG bytes
    ]
)

let audioMessage = ChatMessage(
    role: .user,
    content: [
        .text("Transcribe and summarize this clip."),
        .audio(wavData)  // Data containing WAV bytes
    ]
)

let pcmMessage = ChatMessage(
    role: .user,
    content: [
        .text("Give feedback on my pronunciation."),
        ChatMessageContent.fromFloatSamples(samples, sampleRate: 16000)
    ]
)
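
For reference, the jpegData payload above can come straight from UIKit (an illustrative helper; image is whatever UIImage you already have):

import UIKit

// Illustrative: produce the JPEG bytes used in the image example above.
func jpegPayload(from image: UIImage) -> Data? {
    image.jpegData(compressionQuality: 0.8)
}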

Add tool results back to the history

When the model requests a function call (the .functionCall case above), run the tool, then append its output to the conversation history as a .tool message:

let toolMessage = ChatMessage(
    role: .tool,
    content: [
        .text("{\"temperature\":72,\"conditions\":\"sunny\"}"),
        .audio(toolAudioData)  // Optional: return audio bytes from your tool
    ]
)

guard let current = conversation else { return }
let updatedHistory = current.history + [toolMessage]
conversation = current.modelRunner.createConversationFromHistory(
    history: updatedHistory
)
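
Putting the pieces together, the .functionCall branch of handle(_:) could drive this flow. This is a sketch only: FunctionCall stands in for whatever element type the calls array actually carries, and runTool(_:) is a hypothetical executor that returns a JSON string.

// Hypothetical glue code inside ChatViewModel: execute requested calls,
// then feed the results back as `.tool` messages.
func handleFunctionCalls(_ calls: [FunctionCall]) async {
    guard let current = conversation else { return }

    var history = current.history
    for call in calls {
        let resultJSON = await runTool(call)  // hypothetical tool executor
        history.append(ChatMessage(role: .tool, content: [.text(resultJSON)]))
    }

    // Rebuild the conversation so the tool output becomes part of the context.
    conversation = current.modelRunner.createConversationFromHistory(
        history: history
    )
}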

Next steps

You now have a project that loads an on-device model, streams responses, and is ready for advanced features like structured output and tool use.