The leap-ui module (introduced in v0.10.0) ships a ready-to-use voice assistant widget (an animated orb, mic button, and status label) backed by a state machine that handles recording, generation, and audio playback. Wire it to a model and it handles the rest.
leap-ui is a Compose Multiplatform module, so the same widget runs on:
- iOS: bridged to UIKit via VoiceAssistantViewController and exposed to SwiftUI through UIViewControllerRepresentable.
- macOS: bridged to AppKit via VoiceAssistantNSViewController; SwiftUI hosts it via NSViewControllerRepresentable + NSHostingController.
- Android: direct Compose for Android.
- JVM Desktop: Compose for Desktop. Same Maven artifact; you provide the audio I/O implementations (the demo apps in leap-ui-demo/ ship patterns you can adapt).
- Web (Wasm, experimental): present in the source tree (leap-ui-demo/web) but not yet covered by the v0.10.6 stable release notes; treat it as a preview.
Add the dependency
iOS / macOS (SPM)
Android / JVM (Gradle)
Add the LeapUI product to your target alongside LeapModelDownloader (the SPM product whose Swift ModelDownloader class the snippets below use to load the audio model). See the Quick Start for the full SPM setup.

dependencies: [
    .package(url: "https://github.com/Liquid4All/leap-sdk.git", from: "0.10.6")
]
targets: [
    .target(
        name: "YourApp",
        dependencies: [
            .product(name: "LeapModelDownloader", package: "leap-sdk"),
            .product(name: "LeapUI", package: "leap-sdk"),
        ]
    )
]
In Swift sources, import LeapUi (lowercase i: that's the binary-target module name).

Dual-import opt-out required for this combination. LeapUI transitively bundles LeapSDK, and LeapModelDownloader re-exports the same Kotlin types under its own framework module, so the dual-import build-time guard fires #error at preprocessing time unless you opt out. Add LEAP_DUAL_IMPORT_ALLOW=1 to OTHER_CFLAGS for the affected target, and either qualify ambiguous Swift type references with the source module (LeapSDK.Conversation vs. LeapModelDownloader.Conversation) or stick to a single import per file.

If you'd rather avoid the opt-out, swap LeapModelDownloader for LeapSDK in the target dependencies and rewrite the snippets below to use LeapDownloader(config:).loadModel(modelName:, quantizationType:); the cross-platform loader has the same shape minus the URLSession background-session integration.

dependencies {
    implementation("ai.liquid.leap:leap-sdk:0.10.6")
    implementation("ai.liquid.leap:leap-ui:0.10.6")
}
leap-ui depends on Compose runtime, foundation, and material3 internally (with implementation scope), so the runtime artifacts are pulled in but their APIs are not re-exported to consumer source. If your project uses Compose directly, declare the same Compose dependencies in your own module.
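For an Android app module that calls Compose APIs directly, the declaration might look like the following sketch. The BOM coordinates and version here are illustrative, not prescribed by leap-ui; match them to your project.

```kotlin
// build.gradle.kts (your app module) - a sketch, not required coordinates.
// The Compose BOM version below is an example; pin the one your project uses.
dependencies {
    implementation(platform("androidx.compose:compose-bom:2024.09.00"))
    implementation("androidx.compose.runtime:runtime")
    implementation("androidx.compose.foundation:foundation")
    implementation("androidx.compose.material3:material3")
}
```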
Architecture
VoiceAssistantWidget (Compose UI)
    ↓ intents
VoiceAssistantStore (state machine: IDLE → LISTENING → RESPONDING → IDLE)
    ↓ uses
VoiceAudioRecorder + VoiceAudioPlayer + VoiceConversation
- VoiceAssistantStore owns the session lifecycle. Instantiate it once when the screen appears; close() it when it goes away.
- VoiceConversation is a thin interface you implement to bridge the store to your model. Wrap the SDK's Conversation.generateResponse and forward AudioSample chunks to onAudioChunk.
- Audio I/O goes through the VoiceAudioRecorder / VoiceAudioPlayer interfaces. iOS / macOS ship AppleAudioRecorder and AppleAudioPlayer defaults; Android / JVM reference implementations live in leap-ui-demo/.
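The IDLE → LISTENING → RESPONDING loop can be pinned down as a toy transition function. The Phase and Intent names below are illustrative, not the real VoiceAssistantStore API, and the real store layers model-loading, error, and playback-timeout states on top of this core loop.

```kotlin
// Toy model of the store's phase loop (illustrative names only).
enum class Phase { IDLE, LISTENING, RESPONDING }

// Hypothetical intents, loosely mirroring a press-to-talk interaction.
enum class Intent { PRESS_MIC, RELEASE_MIC, RESPONSE_DONE, CANCEL }

fun next(phase: Phase, intent: Intent): Phase = when (phase) {
    // Idle: only a mic press does anything.
    Phase.IDLE -> if (intent == Intent.PRESS_MIC) Phase.LISTENING else Phase.IDLE
    // Listening: releasing the mic hands the samples to the model.
    Phase.LISTENING -> when (intent) {
        Intent.RELEASE_MIC -> Phase.RESPONDING
        Intent.CANCEL -> Phase.IDLE
        else -> Phase.LISTENING
    }
    // Responding: completion or cancellation returns to idle.
    Phase.RESPONDING -> when (intent) {
        Intent.RESPONSE_DONE, Intent.CANCEL -> Phase.IDLE
        else -> Phase.RESPONDING
    }
}
```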
Wire the model
The VoiceConversation adapter looks similar on every platform: both implementations stream audio samples back through onAudioChunk.
Swift (iOS / macOS)
Kotlin (Android)
The factory VoiceAssistantStore.makeForApple() hides Kotlin coroutine plumbing from Swift callers. It creates the store with a MainScope(), the default Apple audio recorder and player, and an EMA-smoothed amplitude.

import LeapModelDownloader
import LeapUi
@MainActor
final class VoiceAssistantViewModel: ObservableObject {
let store: VoiceAssistantStore
private let downloader: ModelDownloader = {
let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first!.path
let modelsDir = (caches as NSString).appendingPathComponent("leap_models")
return ModelDownloader(config: LeapDownloaderConfig(saveDir: modelsDir))
}()
init() {
// Defaults: AppleAudioRecorder, AppleAudioPlayer, MainScope, interruptToSpeak = true
store = VoiceAssistantStore.makeForApple()
}
deinit { store.close() }
func loadModel() async {
do {
let runner = try await downloader.loadModel(
modelName: "LFM2.5-Audio-1.5B",
quantizationType: "Q4_0",
downloadProgress: { fraction, _ in
// `fraction` is `Double` from the Kotlin (Double, Long) -> Unit
// closure; `setModelProgress.fraction` is `Float`, so cast.
Task { @MainActor in
self.store.setModelProgress(
fraction: Float(fraction),
message: "Downloading (\(Int(fraction * 100))%)"
)
}
}
)
let conversation = runner.createConversation(
systemPrompt: "Respond with interleaved text and audio."
)
store.setConversation(conv: AppleVoiceConversation(conversation: conversation))
} catch {
store.setModelError(message: "\(error.localizedDescription)")
}
}
}
Override defaults via the same makeForApple factory parameters:

let store = VoiceAssistantStore.makeForApple(
recorder: myCustomRecorder,
player: myCustomPlayer,
smoothingAlpha: 0.3,
playbackTimeoutMs: 10_000,
interruptToSpeak: false // Press during a response only cancels; doesn't re-record immediately
)
import ai.liquid.leap.downloader.LeapModelDownloader
import ai.liquid.leap.ui.VoiceAssistantIntent
import ai.liquid.leap.ui.VoiceAssistantStore
import ai.liquid.leap.ui.VoiceAssistantStoreState
import android.app.Application
import androidx.lifecycle.AndroidViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.launch
class VoiceAssistantViewModel(application: Application) : AndroidViewModel(application) {
private val recorder = AndroidAudioRecorder() // see "Audio I/O implementations" below
private val player = AndroidAudioPlayer()
val store = VoiceAssistantStore(recorder = recorder, player = player, scope = viewModelScope)
val state: StateFlow<VoiceAssistantStoreState> = store.state
private val downloader = LeapModelDownloader(application)
init { viewModelScope.launch { loadModel() } }
fun processIntent(intent: VoiceAssistantIntent) = store.processIntent(intent)
private suspend fun loadModel() = runCatching {
store.setModelProgress(0f, "Resolving manifest…")
val runner = downloader.loadModel(
modelName = "LFM2.5-Audio-1.5B",
quantizationType = "Q4_0",
progress = { pd ->
val pct = if (pd.total > 0) " (${(pd.bytes * 100 / pd.total).toInt()}%)" else ""
store.setModelProgress(
fraction = if (pd.total > 0) pd.bytes.toFloat() / pd.total else 0f,
message = "Downloading$pct",
)
},
)
store.setConversation(
LeapVoiceConversation(
conv = runner.createConversation(systemPrompt = "Respond with interleaved text and audio.")
)
)
}.onFailure { e -> store.setModelError("${e.message}") }
override fun onCleared() {
super.onCleared()
store.close()
}
}
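The progress arithmetic in the callback above (guarding against an unknown total) can be factored into a pure helper. The Progress shape below is a hypothetical stand-in for the downloader's progress payload, used only so the arithmetic is testable in isolation.

```kotlin
// Hypothetical mirror of the (bytes, total) progress payload; the real
// downloader's type may differ.
data class Progress(val bytes: Long, val total: Long)

// Fraction in 0..1 with a guard for unknown total (total <= 0), plus the
// "Downloading (NN%)" label format used in the snippet above.
fun progressLabel(p: Progress): Pair<Float, String> {
    val fraction = if (p.total > 0) p.bytes.toFloat() / p.total else 0f
    val pct = if (p.total > 0) " (${p.bytes * 100 / p.total}%)" else ""
    return fraction to "Downloading$pct"
}
```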
import LeapUi
import SwiftUI
struct VoiceAssistantScreen: View {
@StateObject private var viewModel = VoiceAssistantViewModel()
var body: some View {
VoiceWidgetRepresentable(store: viewModel.store)
.background(Color.black)
.ignoresSafeArea()
.task { await viewModel.loadModel() }
}
}
private struct VoiceWidgetRepresentable: UIViewControllerRepresentable {
let store: VoiceAssistantStore
func makeUIViewController(context: Context) -> UIViewController {
VoiceAssistantViewControllerKt.VoiceAssistantViewController(
state: store.widgetStateHolder,
onIntent: { intent in store.processIntent(intent: intent) },
labels: VoiceWidgetLabels(
idle: "Tap and hold to speak",
listening: "Listening",
responding: "Generating",
micStartDescription: "Start recording",
micStopDescription: "Stop recording",
micCancelDescription: "Cancel recording"
),
colors: VoiceWidgetColors.companion.Default,
showPoweredBy: true
)
}
func updateUIViewController(_ uiViewController: UIViewController, context: Context) {}
}
Swap UIViewControllerRepresentable → NSViewControllerRepresentable, UIViewController → NSViewController, and VoiceAssistantViewController → VoiceAssistantNSViewController. Everything else (the view model, store, conversation) is unchanged.

import LeapUi
import SwiftUI
private struct VoiceWidgetRepresentable: NSViewControllerRepresentable {
let store: VoiceAssistantStore
func makeNSViewController(context: Context) -> NSViewController {
VoiceAssistantNSViewControllerKt.VoiceAssistantNSViewController(
state: store.widgetStateHolder,
onIntent: { intent in store.processIntent(intent: intent) },
labels: VoiceWidgetLabels(/* same labels */),
colors: VoiceWidgetColors.companion.Default,
showPoweredBy: true
)
}
func updateNSViewController(_ nsViewController: NSViewController, context: Context) {}
}
import ai.liquid.leap.ui.VoiceAssistantWidget
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.darkColorScheme
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.lifecycle.viewmodel.compose.viewModel
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
MaterialTheme(colorScheme = darkColorScheme(background = Color.Black)) {
val vm = viewModel<VoiceAssistantViewModel>()
val state by vm.state.collectAsState()
VoiceAssistantWidget(
state = state.widgetState,
onIntent = vm::processIntent,
modifier = Modifier.fillMaxSize().background(Color.Black),
)
}
}
}
}
Compose for Desktop on JVM uses the same VoiceAssistantWidget composable inside a Window { ... } block.
Implement VoiceConversation
The store calls into a VoiceConversation you provide. A minimal adapter that wraps a normal Conversation:
Swift (iOS / macOS)
Kotlin (all platforms)
The VoiceConversation protocol comes from LeapUI, so its audioSamples and onAudioChunk parameters use LeapUi.KotlinFloatArray / LeapUi.KotlinInt, not native Swift [Float] / Int32. The on-device runner lives in LeapSDK, which has its own LeapSDK.KotlinFloatArray. Bridge between the two via the floatArrayToNSData / nsDataToFloatArray helpers exposed in both frameworks (see leap-ui-demo/shared/AppleVoiceConversation.swift for the canonical pattern).

import LeapModelDownloader
import LeapSDK
import LeapUi
final class AppleVoiceConversation: VoiceConversation {
private let conversation: Conversation
init(conversation: Conversation) {
self.conversation = conversation
}
// Note: this method is `__generateResponse` in the SKIE-generated overlay
// because `LeapUI` and `LeapSDK` are separate frameworks with separate Kotlin
// runtimes. The runtime-types-as-parameters force the underscore prefix.
func generateResponse(
audioSamples: LeapUi.KotlinFloatArray,
sampleRate: Int32,
onAudioChunk: @escaping (LeapUi.KotlinFloatArray, LeapUi.KotlinInt) -> Void
) async throws -> Leap_sdkGenerationStats? {
// LeapUi.KotlinFloatArray -> Swift [Float] (for use inside this method body):
let nsData = LeapUi.ArrayConversionsKt.floatArrayToNSData(array: audioSamples)
let samples: [Float] = nsData.withUnsafeBytes { Array($0.bindMemory(to: Float.self)) }
let audioContent = ChatMessageContent.fromFloatSamples(samples, sampleRate: Int(sampleRate))
let userMessage = ChatMessage(
role: .user,
content: [audioContent as ChatMessageContent],
reasoningContent: nil,
functionCalls: nil
)
var stats: Leap_sdkGenerationStats?
for try await response in conversation.generateResponse(message: userMessage) {
switch onEnum(of: response) {
case .audioSample(let chunk):
// Bridge LeapSDK.KotlinFloatArray -> LeapUi.KotlinFloatArray via NSData.
let data = LeapSDK.ArrayConversionsKt.floatArrayToNSData(array: chunk.samples)
let uiSamples = LeapUi.ArrayConversionsKt.nsDataToFloatArray(data: data)
onAudioChunk(uiSamples, LeapUi.KotlinInt(value: chunk.sampleRate))
case .complete(let c):
stats = c.stats
case .chunk, .reasoningChunk, .functionCalls:
break
}
}
return stats
}
func reset() -> VoiceConversation {
AppleVoiceConversation(
conversation: conversation.modelRunner.createConversation(systemPrompt: nil)
)
}
}
import ai.liquid.leap.Conversation
import ai.liquid.leap.audio.FloatAudioBuffer
import ai.liquid.leap.message.ChatMessage
import ai.liquid.leap.message.ChatMessageContent
import ai.liquid.leap.message.GenerationStats
import ai.liquid.leap.message.MessageResponse
import ai.liquid.leap.ui.VoiceConversation
class LeapVoiceConversation(private val conv: Conversation) : VoiceConversation {
override suspend fun generateResponse(
audioSamples: FloatArray,
sampleRate: Int,
onAudioChunk: (samples: FloatArray, sampleRate: Int) -> Unit,
): GenerationStats? {
// Send raw float32 PCM directly โ no WAV re-encode needed.
val userMessage = ChatMessage(
role = ChatMessage.Role.USER,
content = listOf(ChatMessageContent.AudioPcmF32(audioSamples, sampleRate)),
)
var stats: GenerationStats? = null
conv.generateResponse(userMessage).collect { response ->
when (response) {
is MessageResponse.AudioSample -> onAudioChunk(response.samples, response.sampleRate)
is MessageResponse.Complete -> stats = response.stats
else -> Unit
}
}
return stats
}
override fun reset(): VoiceConversation =
LeapVoiceConversation(conv.modelRunner.createConversation())
}
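For unit-testing UI wiring without loading a model, the adapter is small enough to fake. The sketch below simplifies the interface to a non-suspending signature with the stats return dropped (the real VoiceConversation is suspending and returns the SDK's GenerationStats), so it compiles without the SDK; the streaming shape is the same.

```kotlin
// Simplified, non-suspending stand-in for leap-ui's VoiceConversation.
// Illustrative only: the real interface is suspending and returns
// GenerationStats from the SDK.
interface FakeVoiceConversation {
    fun generateResponse(
        audioSamples: FloatArray,
        sampleRate: Int,
        onAudioChunk: (samples: FloatArray, sampleRate: Int) -> Unit,
    )
}

// Streams a fixed list of chunks, the way a model-backed adapter forwards
// MessageResponse.AudioSample events to onAudioChunk.
class CannedConversation(
    private val chunks: List<FloatArray>,
) : FakeVoiceConversation {
    override fun generateResponse(
        audioSamples: FloatArray,
        sampleRate: Int,
        onAudioChunk: (FloatArray, Int) -> Unit,
    ) {
        chunks.forEach { onAudioChunk(it, sampleRate) }
    }
}
```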
Audio I/O implementations
The VoiceAudioRecorder and VoiceAudioPlayer contracts are short. Substitute your own implementations when the defaults don't fit.
interface VoiceAudioRecorder {
val amplitude: Float // 0..1 RMS, drives orb animation
val nativeSampleRate: Int // Available after start()
fun start(): Boolean
suspend fun stop(): FloatArray
suspend fun cancel()
}
interface VoiceAudioPlayer {
val amplitude: Float
fun enqueue(samples: FloatArray, sampleRate: Int)
suspend fun waitForPlayback()
fun stop()
}
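The amplitude property is documented as 0..1 RMS. If you implement the recorder yourself, a plain RMS over the latest capture buffer, clamped to 1, is one way to drive the orb animation. This is a sketch under that reading of the contract, not the library's exact smoothing (makeForApple additionally applies EMA smoothing).

```kotlin
import kotlin.math.min
import kotlin.math.sqrt

// Root-mean-square of a float PCM buffer, clamped into the 0..1 range the
// VoiceAudioRecorder.amplitude contract expects. Full scale is assumed to
// be +/-1.0, as is conventional for float PCM.
fun rmsAmplitude(samples: FloatArray): Float {
    if (samples.isEmpty()) return 0f
    var sumSquares = 0.0
    for (s in samples) sumSquares += s.toDouble() * s
    return min(1.0, sqrt(sumSquares / samples.size)).toFloat()
}
```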
iOS / macOS
Android
JVM Desktop
AppleAudioRecorder and AppleAudioPlayer are the shipped defaults; makeForApple() wires them up automatically. Implement the protocols directly if you need to integrate with custom AVAudioEngine pipelines.

iOS apps must configure AVAudioSession for record + playback before the model starts streaming audio:

import AVFoundation
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
try session.setActive(true)
session.requestRecordPermission { _ in }
Add NSMicrophoneUsageDescription to your Info.plist.

AndroidAudioRecorder and AndroidAudioPlayer aren't part of leap-ui; they're reference implementations shipped with the demo app at leap-ui-demo/android/src/main/kotlin/ai/liquid/leap/uidemo/AudioPipeline.kt. Copy the file into your project, or implement the contracts against your own audio stack.

Required permissions:

<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />

Request RECORD_AUDIO at runtime via the standard ActivityResultContracts.RequestPermission() pattern (see Quick Start).

No bundled implementations ship for JVM Desktop: use javax.sound.sampled.TargetDataLine for capture and SourceDataLine for playback, wrapped to match the VoiceAudioRecorder / VoiceAudioPlayer contracts. The demo at leap-ui-demo/jvm (if present in your release) ships a working reference.
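A TargetDataLine typically delivers signed 16-bit PCM bytes, while the store's contract wants FloatArray samples. A sketch of the conversion a JVM VoiceAudioRecorder.stop() would need, assuming a little-endian mono format such as AudioFormat(16000f, 16, 1, true, false); this is illustrative, not code from the demo.

```kotlin
// Convert little-endian signed 16-bit PCM bytes (as read from a
// TargetDataLine) into the -1.0..1.0 float samples that leap-ui's
// VoiceAudioRecorder.stop() is expected to return.
fun pcm16leToFloats(bytes: ByteArray): FloatArray {
    val out = FloatArray(bytes.size / 2)
    for (i in out.indices) {
        val lo = bytes[2 * i].toInt() and 0xFF // low byte, unsigned
        val hi = bytes[2 * i + 1].toInt()      // high byte, keeps the sign
        out[i] = ((hi shl 8) or lo) / 32768f
    }
    return out
}
```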
interruptToSpeak
VoiceAssistantStore (v0.10.0+) exposes an interruptToSpeak: Boolean = true parameter controlling what happens when the user presses the orb during a response:
- true (default): cancels the in-flight generation and immediately starts a new recording.
- false: only cancels; the user must press again to start a new recording.
Swift (iOS / macOS)
Kotlin (all platforms)
let store = VoiceAssistantStore.makeForApple(interruptToSpeak: false)
val store = VoiceAssistantStore(
recorder = recorder,
player = player,
scope = viewModelScope,
interruptToSpeak = false,
)
What's in the module
| Symbol | Purpose |
|---|---|
| VoiceAssistantStore | State machine + orchestrator. Apple platforms: makeForApple(). |
| VoiceAssistantStateHolder | Compose-friendly state container, exposed to Swift. |
| VoiceAssistantWidget (Compose) | The widget itself. Drop into any Compose tree (Android, JVM, iOS via host controller, macOS via host controller). |
| VoiceAssistantViewController (UIKit) / VoiceAssistantNSViewController (AppKit) | Pre-built hosts for Apple. |
| AppleAudioRecorder / AppleAudioPlayer | Default audio I/O on iOS / macOS. |
| VoiceConversation | Adapter interface you implement to bridge the store to a Conversation. |
| VoiceWidgetLabels, VoiceWidgetColors | Theming (use .companion.Default to access the canonical palette). |
Compatible models
Voice mode requires a model that emits audio output. The shipped demo uses LFM2.5-Audio-1.5B at Q4_0 quantization, with a system prompt of "Respond with interleaved text and audio." See the LEAP Model Library for other audio-capable models.