
ChatMessage and ChatMessageContent mirror the OpenAI chat-completions message schema. The same fields exist on iOS / macOS (struct ChatMessage, enum ChatMessageContent) and the Kotlin platforms (data class ChatMessage, sealed interface ChatMessageContent).

ChatMessage

public struct ChatMessage {
  public var role: ChatMessageRole
  public var content: [ChatMessageContent]
  public var reasoningContent: String?
  public var functionCalls: [LeapFunctionCall]?

  public init(
    role: ChatMessageRole,
    content: [ChatMessageContent],
    reasoningContent: String? = nil,
    functionCalls: [LeapFunctionCall]? = nil
  )

  public init(from json: [String: Any]) throws
}

public enum ChatMessageRole: String {
  case user, system, assistant, tool
}

Fields

  • role — the speaker (user, system, assistant, or tool). Use tool when appending function-call results back into the history.
  • content — ordered fragments. Supported part types: Text, Image (JPEG bytes), Audio (WAV bytes), and on Kotlin AudioPcmF32 for raw float samples.
  • reasoningContent — text emitted by reasoning models inside <think> / </think> tags. nil/null for non-reasoning responses.
  • functionCalls — calls returned by MessageResponse.functionCalls on the previous turn, included when appending tool-call results to history (see the sketch after this list).
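
A sketch of that round trip follows; how the tool result itself is serialized (here a JSON string in a text part) is an assumption, so follow whatever convention your function-calling setup uses.

// `calls` is assumed to be the [LeapFunctionCall] array returned by
// MessageResponse.functionCalls on the previous turn.
let assistantTurn = ChatMessage(
    role: .assistant,
    content: [],                     // assumption: no visible text when the model only requested calls
    functionCalls: calls
)

// Assumption: the tool's result is passed back to the model as plain text content.
let toolTurn = ChatMessage(
    role: .tool,
    content: [.text(#"{"temperature_c": 21}"#)]
)

history.append(contentsOf: [assistantTurn, toolTurn])   // `history` is your own [ChatMessage] buffer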

Serialization

Both platforms expose round-trip JSON helpers compatible with OpenAI's ChatCompletionRequestMessage.
ChatMessage(from: [String: Any]) constructs a message from an OpenAI-style payload. Throws LeapSerializationError on unrecognized shapes.
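
For instance, an OpenAI-style dictionary can be parsed and failures handled as below; a minimal sketch, assuming the plain string form of content (as in the OpenAI schema) is among the accepted shapes:

let payload: [String: Any] = [
    "role": "user",
    "content": "What's the capital of France?"
]

do {
    let message = try ChatMessage(from: payload)
    print(message.role, message.content)
} catch {
    // Unrecognized payload shapes surface here as LeapSerializationError.
    print("Could not parse message:", error)
}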

ChatMessageContent

public enum ChatMessageContent {
  case text(String)
  case image(Data)   // JPEG bytes
  case audio(Data)   // WAV bytes

  public init(from json: [String: Any]) throws
}

  • Text — plain text fragment.
  • Image — JPEG-encoded image bytes. Only vision-capable models can interpret image parts.
  • Audio — WAV-encoded audio bytes (see audio format requirements below).
  • AudioPcmF32 (Kotlin) / fromFloatSamples(...) (Swift) — raw float32 mono PCM in memory. Avoids re-encoding when you already have samples.

Helper initializers simplify interop with platform-native buffers:
  • ChatMessageContent.fromUIImage(image, compressionQuality:) — UIKit
  • ChatMessageContent.fromNSImage(image, compressionQuality:) — AppKit
  • ChatMessageContent.fromWAVData(data) — pass-through validator
  • ChatMessageContent.fromFloatSamples(samples, sampleRate:, channelCount:) — wrap raw float32 PCM into a WAV blob

On the wire, image parts are encoded as OpenAI-style image_url payloads and audio parts as input_audio arrays with Base64 data.
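
For example, on UIKit platforms an image part can be produced by hand with plain UIKit APIs; the fromUIImage helper presumably wraps the same JPEG conversion, so this sketch only makes the bytes explicit (photo is an assumed, already-loaded UIImage):

import UIKit

// Assumption: `photo` is a UIImage you already have (camera, photo library, asset catalog, ...).
guard let jpegBytes = photo.jpegData(compressionQuality: 0.8) else {
    fatalError("Could not JPEG-encode the image")
}

let message = ChatMessage(
    role: .user,
    content: [
        .text("What's in this photo?"),
        .image(jpegBytes)
    ]
)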

Audio format requirements

The LEAP inference engine expects WAV-encoded audio with these specifications:
  Property       Required value           Notes
  Container      WAV (RIFF)               Only WAV is supported
  Sample rate    16000 Hz recommended     Other rates are auto-resampled to 16 kHz
  Encoding       PCM                      Float32, Int16, Int24, or Int32
  Channels       Mono (1)                 Stereo is rejected
  Byte order     Little-endian            Standard WAV

Supported PCM encodings

  • Float32 — 32-bit floating point, normalized to [-1.0, 1.0]
  • Int16 — 16-bit signed integer (recommended)
  • Int24 — 24-bit signed integer
  • Int32 — 32-bit signed integer

The engine only accepts WAV. M4A, MP3, AAC, OGG, and other compressed formats are rejected. Convert to WAV before sending.

Mono required. Stereo or multi-channel WAVs are rejected with an error. Downmix to mono first.

Automatic resampling. The engine resamples to 16 kHz when needed, but providing 16 kHz audio directly avoids the resampling overhead. For best quality, record at 16 kHz mono.
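
If a capture pipeline hands you interleaved stereo floats, one simple way to satisfy the mono requirement is to average the two channels before wrapping the result; a minimal sketch in plain Swift (stereoSamples is assumed to come from your own capture code):

// Averages interleaved stereo samples ([L, R, L, R, ...]) down to mono.
func downmixToMono(_ interleavedStereo: [Float]) -> [Float] {
    var mono = [Float]()
    mono.reserveCapacity(interleavedStereo.count / 2)
    var i = 0
    while i + 1 < interleavedStereo.count {
        mono.append((interleavedStereo[i] + interleavedStereo[i + 1]) / 2)
        i += 2
    }
    return mono
}

let monoSamples = downmixToMono(stereoSamples)
let audioContent = ChatMessageContent.fromFloatSamples(
    monoSamples,
    sampleRate: 16000,
    channelCount: 1
)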

Creating audio content

From a WAV file

let wavURL = Bundle.main.url(forResource: "audio", withExtension: "wav")!
let wavData = try Data(contentsOf: wavURL)

let message = ChatMessage(
    role: .user,
    content: [
        .text("What is being said in this audio?"),
        .audio(wavData)
    ]
)

From raw PCM samples

// Float samples normalized to [-1.0, 1.0]
let samples: [Float] = [0.1, 0.2, 0.15, -0.3 /* ... */]

let audioContent = ChatMessageContent.fromFloatSamples(
    samples,
    sampleRate: 16000,
    channelCount: 1
)

let message = ChatMessage(
    role: .user,
    content: [.text("Transcribe this audio"), audioContent]
)

Recording from the microphone

Configure AVAudioRecorder with WAV-compatible settings:

import AVFoundation

let audioURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("recording.wav")

let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 16000.0,        // 16 kHz
    AVNumberOfChannelsKey: 1,        // Mono
    AVLinearPCMBitDepthKey: 16,      // 16-bit
    AVLinearPCMIsFloatKey: false,
    AVLinearPCMIsBigEndianKey: false
]

let recorder = try AVAudioRecorder(url: audioURL, settings: settings)
recorder.record()
// ...
recorder.stop()

let wavData = try Data(contentsOf: audioURL)
let audioContent: ChatMessageContent = .audio(wavData)
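
On iOS, recording also requires microphone permission and an active recording session before record() will capture anything; a minimal sketch (the app must declare NSMicrophoneUsageDescription in Info.plist):

import AVFoundation

func prepareForRecording(_ completion: @escaping (Bool) -> Void) {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
    } catch {
        completion(false)
        return
    }
    // Prompts the user for microphone access on first use.
    session.requestRecordPermission { granted in
        completion(granted)
    }
}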

Audio duration

  • Minimum — at least 1 second of audio for reliable speech recognition.
  • Maximum — bounded by the model's context window (typically several minutes).
  • Silence — trim excessive silence from the start and end for better results (see the sketch after this list).
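
One simple way to trim leading and trailing silence from float samples before wrapping them is an amplitude threshold; a minimal sketch (0.01 is an arbitrary threshold, tune it for your input):

// Drops samples whose absolute value stays below `threshold` at the start and end of the buffer.
func trimSilence(_ samples: [Float], threshold: Float = 0.01) -> [Float] {
    guard let first = samples.firstIndex(where: { abs($0) >= threshold }),
          let last = samples.lastIndex(where: { abs($0) >= threshold })
    else {
        return []   // the whole buffer is below the threshold
    }
    return Array(samples[first...last])
}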

Audio output from models

Audio-capable models like LFM2.5-Audio-1.5B emit float32 PCM frames via MessageResponse.AudioSample. Output sample rate is typically 24 kHz (vs. 16 kHz for input).

for try await response in conversation.generateResponse(message: userMessage) {
    if case .audioSample(let audio) = onEnum(of: response) {
        // audio.samples: [Float] in [-1.0, 1.0]
        // audio.sampleRate: Int (typically 24000 for audio-gen models)
        audioPlayer.enqueue(samples: audio.samples, sampleRate: Int(audio.sampleRate))
    }
}

Audio input should be 16 kHz; audio output from generation models is typically 24 kHz. Configure your playback pipeline accordingly.
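
The audioPlayer in the snippet above stands in for whatever playback abstraction your app provides. A minimal sketch of one way to build it with AVAudioEngine, assuming the 24 kHz mono float32 output described above:

import AVFoundation

final class PCMStreamPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let format: AVAudioFormat

    init(sampleRate: Double = 24_000) {
        // Non-interleaved float32 mono, matching the model's output frames.
        format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                               sampleRate: sampleRate,
                               channels: 1,
                               interleaved: false)!
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try? engine.start()
        player.play()
    }

    func enqueue(samples: [Float], sampleRate: Int) {
        guard sampleRate == Int(format.sampleRate) else { return }   // keep the sketch simple
        let frameCount = AVAudioFrameCount(samples.count)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }
        buffer.frameLength = frameCount
        // Copy the mono samples into the buffer's single channel.
        for (i, sample) in samples.enumerated() {
            buffer.floatChannelData![0][i] = sample
        }
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}

Scheduled buffers play back-to-back, so feeding each streamed frame into enqueue keeps playback continuous as long as frames arrive at least as fast as real time.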