ChatMessage and ChatMessageContent mirror the OpenAI chat-completions message schema. Both are declared once in commonMain (data class ChatMessage, sealed class ChatMessageContent), and Kotlin/Native + SKIE bridge the Kotlin types into Swift; there are no separate "native" Swift declarations.
ChatMessage
Swift (iOS / macOS)
The Swift class is generated from the Kotlin data class. Kotlin parameter defaults don't propagate, so the primary init requires all four arguments explicitly:

public class ChatMessage {
public var role: ChatMessage.Role
public var content: [ChatMessageContent]
public var reasoningContent: String?
public var functionCalls: [LeapFunctionCall]?
// Primary init – pass `reasoningContent: nil, functionCalls: nil` for ordinary messages.
public init(
role: ChatMessage.Role,
content: [ChatMessageContent],
reasoningContent: String?,
functionCalls: [LeapFunctionCall]?
)
// Secondary inits (from Kotlin secondary constructors):
public init(role: ChatMessage.Role, content: ChatMessageContent) // single content
public init(role: ChatMessage.Role, textContent: String) // plain text
public enum Role {
case system, user, assistant, tool
}
}
Kotlin (all platforms)
@Serializable(with = ChatMessageJsonSerializer::class)
data class ChatMessage(
val role: Role,
val content: List<ChatMessageContent>,
@SerialName("reasoning_content") val reasoningContent: String? = null,
@SerialName("tool_calls") val functionCalls: List<LeapFunctionCall>? = null,
) {
// Single-content secondary ctor (wraps the part in a list, drops defaults).
constructor(role: Role, content: ChatMessageContent)
// Plain-text secondary ctor (parameter name is `textContent`).
constructor(role: Role, textContent: String)
enum class Role(val type: String) {
SYSTEM("system"),
USER("user"),
ASSISTANT("assistant"),
TOOL("tool");
companion object {
fun fromTypeString(type: String): Role // throws LeapSerializationException on unknown values
}
}
}
ChatMessage is @Serializable via the dedicated ChatMessageJsonSerializer; encode/decode through kotlinx.serialization.json.Json rather than ad-hoc JSONObject helpers. See Utilities → Serialization.
role – the speaker (user, system, assistant, or tool). Use tool when appending function-call results back into the history.
content – ordered fragments. Supported part types: Text, Image (JPEG bytes wrapped in a data URL), Audio (WAV bytes or input_audio payload), and on Kotlin AudioPcmF32 for raw float samples.
reasoningContent – text emitted by reasoning models inside <think> / </think> tags. null for non-reasoning responses.
functionCalls – calls returned by MessageResponse.FunctionCalls on the previous turn, included when appending tool-call results to history (see the sketch below).
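For example, a minimal sketch of appending a tool round trip to the history, assuming a mutable history list, calls taken from a previous MessageResponse.FunctionCalls, and a hypothetical runTool helper that executes a call and returns its result as a JSON string:
// Assistant turn that issued the calls (content may be empty).
history += ChatMessage(
    role = ChatMessage.Role.ASSISTANT,
    content = emptyList(),
    functionCalls = calls,
)
// One TOOL turn per result, passed back as plain text via the textContent ctor.
for (call in calls) {
    history += ChatMessage(ChatMessage.Role.TOOL, textContent = runTool(call))
}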
Serialization
Round-trip the message through kotlinx.serialization; there is no separate "from [String: Any]" initializer on either platform.
Swift (iOS / macOS)
Encode with LeapJson.encodeToString (or your own JSONEncoder against the OpenAI shape) and decode with the matching Kotlin serializer. See Utilities → Serialization for examples that route through LeapJson.
Kotlin (all platforms)
ChatMessage is @Serializable. Encode with Json.encodeToString(message) and decode with Json.decodeFromString<ChatMessage>(jsonString); see Utilities → Serialization. On error, expect a LeapSerializationException (not LeapSerializationError).
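As a quick sanity check, here is a minimal Kotlin round trip through a plain Json instance (assuming the kotlinx.serialization runtime is on the classpath):
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

val message = ChatMessage(ChatMessage.Role.USER, textContent = "Hello!")
val json = Json.encodeToString(message)                 // OpenAI-shaped JSON
val restored = Json.decodeFromString<ChatMessage>(json)
check(restored == message)                              // data class equality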
ChatMessageContent
Swift (iOS / macOS)
ChatMessageContent is the Kotlin sealed class bridged to Swift; switch on its subclasses with SKIE's onEnum(of:) helper. There is no native Swift enum, no positional .image(_:) / .audio(_:) factory, and no init(from json:). Use the static factories on the Swift overlay:

// Text (cross-platform):
ChatMessageContent.text(_ text: String) -> ChatMessageContent
// Image:
ChatMessageContent.fromJPEGData(_ jpegData: Data) -> ChatMessageContent.Image
ChatMessageContent.image(url: String) -> ChatMessageContent.Image // data URL or remote URL
// Audio:
ChatMessageContent.fromWAVData(_ wavData: Data) -> ChatMessageContent.Audio
ChatMessageContent.audio(data: Data, format: String = "wav") -> ChatMessageContent.Audio
ChatMessageContent.fromFloatSamples(_ samples: [Float], sampleRate: Int, channelCount: Int = 1)
-> ChatMessageContent.AudioPcmF32
// iOS only β UIKit:
public static func fromUIImage(_ image: UIImage) throws -> ChatMessageContent
// (JPEG quality is fixed at 0.85; no compressionQuality parameter is exposed.)
fromUIImage is iOS-only and takes only the image; JPEG compression quality is hard-coded to 0.85 in the overlay (leap-sdk/src/iosMain/.../ChatMessageContentExtensionsIos.kt). There is no fromNSImage factory; on macOS, convert your NSImage to JPEG Data yourself and pass it through fromJPEGData(_:).
On the wire, image parts are encoded as OpenAI-style image_url payloads (with a data:image/jpeg;base64,... URL) and audio parts as input_audio payloads with Base64 data.
Kotlin (all platforms)

sealed class ChatMessageContent {
data class Text(val text: String) : ChatMessageContent()
data class Image(val imageUrl: ImageUrl) : ChatMessageContent() {
// Convenience secondary ctor β wraps the bytes in a data: URL.
constructor(jpegByteArray: ByteArray)
val jpegByteArray: ByteArray // derived property: decodes the data: URL
// Nested wrapper for the OpenAI `image_url` wire shape.
data class ImageUrl(val url: String)
}
data class Audio(val inputAudio: InputAudio) : ChatMessageContent() {
// Convenience secondary ctor β wraps the bytes in an InputAudio.
constructor(data: ByteArray)
val data: ByteArray // derived property: decodes the base64 InputAudio payload
data class InputAudio(val data: String, val format: String) // base64-encoded `data`
}
// Convenience helpers (declared on the sealed class) wrap raw PCM into Audio:
fun toWavBytes(): ByteArray // on AudioPcmF32 – encodes float samples as 16-bit PCM WAV
fun toAudio(): Audio // on AudioPcmF32 – same bytes wrapped as ChatMessageContent.Audio
data class AudioPcmF32(val samples: FloatArray, val sampleRate: Int) : ChatMessageContent()
}
Serialize via kotlinx.serialization (every variant is @Serializable).
Android-specific helper: ImageUtils.fromBitmap(bitmap, compressionQuality = 85) (in ai.liquid.leap.message) re-encodes an Android Bitmap to JPEG and returns a ChatMessageContent.Image. It's a suspend function; call it from a coroutine.

import ai.liquid.leap.message.ImageUtils
val image: ChatMessageContent.Image = ImageUtils.fromBitmap(bitmap, compressionQuality = 85)
Text – plain text fragment.
Image – JPEG-encoded image bytes. Only vision-capable models can interpret image parts.
Audio – WAV-encoded audio bytes (see audio format requirements below).
AudioPcmF32 (Kotlin) / fromFloatSamples(...) (Swift) – raw float32 mono PCM in memory. Avoids re-encoding when you already have samples.
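Putting the part types together, here is a sketch of a mixed text-and-image user message in Kotlin (jpegBytes is assumed to hold JPEG-encoded data):
val message = ChatMessage(
    role = ChatMessage.Role.USER,
    content = listOf(
        ChatMessageContent.Text("What is in this picture?"),
        ChatMessageContent.Image(jpegBytes), // secondary ctor wraps the bytes in a data: URL
    ),
)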
The LEAP inference engine expects WAV-encoded audio with these specifications:
| Property | Required value | Notes |
|---|---|---|
| Container | WAV (RIFF) | Only WAV is supported |
| Sample rate | 16000 Hz recommended | Other rates auto-resampled to 16 kHz |
| Encoding | PCM | Float32, Int16, Int24, or Int32 |
| Channels | Mono (1) | Stereo is rejected |
| Byte order | Little-endian | Standard WAV |
Supported PCM encodings
- Float32 β 32-bit floating point, normalized to [-1.0, 1.0]
- Int16 β 16-bit signed integer (recommended)
- Int24 β 24-bit signed integer
- Int32 β 32-bit signed integer
The engine only accepts WAV. M4A, MP3, AAC, OGG, and other compressed formats are rejected. Convert to WAV before sending.
Mono required. Stereo or multi-channel WAVs are rejected with an error. Downmix to mono first.
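If your capture pipeline hands you interleaved stereo floats, a minimal downmix sketch (the buffer layout is an assumption about your pipeline, not SDK API) averages each left/right frame:
// Interleaved stereo [L, R, L, R, ...] -> mono by averaging each frame.
fun downmixToMono(stereo: FloatArray): FloatArray =
    FloatArray(stereo.size / 2) { i -> (stereo[2 * i] + stereo[2 * i + 1]) / 2f }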
Automatic resampling. The engine resamples to 16 kHz when needed, but providing 16 kHz audio directly avoids the resampling overhead. For best quality, record at 16 kHz mono.
Creating audio content
From a WAV file
Swift (iOS / macOS)
let wavURL = Bundle.main.url(forResource: "audio", withExtension: "wav")!
let wavData = try Data(contentsOf: wavURL)
let message = ChatMessage(
role: .user,
content: [
.text("What is being said in this audio?"),
ChatMessageContent.fromWAVData(wavData)
],
reasoningContent: nil,
functionCalls: nil
)
Kotlin (all platforms)
val wavBytes = File("/path/to/audio.wav").readBytes()
val audio = ChatMessageContent.Audio(wavBytes)
val message = ChatMessage(
role = ChatMessage.Role.USER,
content = listOf(
ChatMessageContent.Text("What is being said in this audio?"),
audio
)
)
From raw PCM samples
Swift (iOS / macOS)
// Float samples normalized to [-1.0, 1.0]
let samples: [Float] = [0.1, 0.2, 0.15, -0.3 /* ... */]
let audioContent = ChatMessageContent.fromFloatSamples(
samples,
sampleRate: 16000,
channelCount: 1
)
let message = ChatMessage(
role: .user,
content: [.text("Transcribe this audio"), audioContent],
reasoningContent: nil,
functionCalls: nil
)
Kotlin (all platforms)
import ai.liquid.leap.audio.FloatAudioBuffer
val audioBuffer = FloatAudioBuffer(sampleRate = 16000)
audioBuffer.add(floatArrayOf(0.1f, 0.2f, 0.15f /* ... */))
audioBuffer.add(floatArrayOf(0.3f, 0.25f /* ... */))
val wavBytes = audioBuffer.createWavBytes()
val audio = ChatMessageContent.Audio(wavBytes)
Or skip the WAV encoding entirely with ChatMessageContent.AudioPcmF32(samples, sampleRate); the engine handles the framing internally and you avoid the WAV header overhead.
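For example, a sketch that feeds the raw samples straight into a message (assuming 16 kHz mono floats in samples):
val message = ChatMessage(
    role = ChatMessage.Role.USER,
    content = listOf(
        ChatMessageContent.Text("Transcribe this audio"),
        ChatMessageContent.AudioPcmF32(samples, sampleRate = 16000),
    ),
)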
Recording from the microphone
Swift (iOS / macOS)
Configure AVAudioRecorder with WAV-compatible settings:

import AVFoundation
let audioURL = FileManager.default.temporaryDirectory
.appendingPathComponent("recording.wav")
let settings: [String: Any] = [
AVFormatIDKey: kAudioFormatLinearPCM,
AVSampleRateKey: 16000.0, // 16 kHz
AVNumberOfChannelsKey: 1, // Mono
AVLinearPCMBitDepthKey: 16, // 16-bit
AVLinearPCMIsFloatKey: false,
AVLinearPCMIsBigEndianKey: false
]
let recorder = try AVAudioRecorder(url: audioURL, settings: settings)
recorder.record()
// ...
recorder.stop()
let wavData = try Data(contentsOf: audioURL)
let audioContent: ChatMessageContent = ChatMessageContent.fromWAVData(wavData)
Kotlin (Android)
Use android.media.AudioRecord or a library like WaveRecorder:

import com.github.squti.androidwaverecorder.WaveRecorder
val recorder = WaveRecorder(outputFilePath)
recorder.configureWaveSettings {
sampleRate = 16000
channels = android.media.AudioFormat.CHANNEL_IN_MONO
audioEncoding = android.media.AudioFormat.ENCODING_PCM_16BIT
}
recorder.startRecording()
// ...
recorder.stopRecording()
val wavBytes = File(outputFilePath).readBytes()
val audioContent = ChatMessageContent.Audio(wavBytes)
Kotlin (JVM)
Use javax.sound.sampled.TargetDataLine:

import javax.sound.sampled.AudioFormat
import javax.sound.sampled.AudioSystem
import javax.sound.sampled.TargetDataLine
val format = AudioFormat(16000f, 16, 1, true, false)
val line = AudioSystem.getTargetDataLine(format)
line.open(format)
line.start()
// ... read from `line` into a buffer, then wrap the raw PCM in a WAV header
// (for example via AudioSystem.write with an AudioInputStream) to produce `wavBytes` ...
line.stop(); line.close()
val audioContent = ChatMessageContent.Audio(wavBytes)
Alternatively, JVM apps that already hold float samples in memory (a FloatAudioBuffer equivalent) can skip WAV entirely and use ChatMessageContent.AudioPcmF32(samples, sampleRate) directly.
Audio duration
- Minimum – at least 1 second of audio for reliable speech recognition.
- Maximum – bounded by the model's context window (typically several minutes).
- Silence – trim excessive silence from the start and end for better results (a trim sketch follows below).
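To illustrate the silence point, a naive Kotlin trim sketch that drops leading and trailing samples below an amplitude threshold (the threshold value is an assumption, not an SDK constant):
import kotlin.math.abs

// Drop leading/trailing samples quieter than `threshold`; returns empty on all-silence.
fun trimSilence(samples: FloatArray, threshold: Float = 0.01f): FloatArray {
    val first = samples.indexOfFirst { abs(it) > threshold }
    if (first == -1) return FloatArray(0)
    val last = samples.indexOfLast { abs(it) > threshold }
    return samples.copyOfRange(first, last + 1)
}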
Audio output from models
Audio-capable models like LFM2.5-Audio-1.5B emit float32 PCM frames via MessageResponse.AudioSample. Output sample rate is typically 24 kHz (vs. 16 kHz for input).
Swift (iOS / macOS)
for try await response in conversation.generateResponse(message: userMessage) {
if case .audioSample(let audio) = onEnum(of: response) {
// audio.samples: [Float] in [-1.0, 1.0]
// audio.sampleRate: Int (typically 24000 for audio-gen models)
audioPlayer.enqueue(samples: audio.samples, sampleRate: Int(audio.sampleRate))
}
}
Kotlin (all platforms)
conversation.generateResponse(userMessage)
.onEach { response ->
if (response is MessageResponse.AudioSample) {
// response.samples: FloatArray in [-1.0, 1.0]
// response.sampleRate: Int (typically 24000)
audioBuffer.add(response.samples)
}
}
.collect()
Audio input should be 16 kHz; audio output from generation models is typically 24 kHz. Configure your playback pipeline accordingly.
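On Android, for instance, here is a minimal playback sketch using android.media.AudioTrack in float-PCM streaming mode at the output rate (one workable configuration, not the only one):
import android.media.AudioFormat
import android.media.AudioTrack

// Mono float-PCM streaming track at the model's typical 24 kHz output rate.
val track = AudioTrack.Builder()
    .setAudioFormat(
        AudioFormat.Builder()
            .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
            .setSampleRate(24000)
            .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
            .build()
    )
    .setTransferMode(AudioTrack.MODE_STREAM)
    .build()

track.play()
// Inside the collector above, write each chunk as it arrives:
fun onAudioSample(samples: FloatArray) {
    track.write(samples, 0, samples.size, AudioTrack.WRITE_BLOCKING)
}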