This page covers error types, serialization helpers, and a few platform-specific entry points that don’t fit in the main reference pages.

Errors

Errors are subclasses of LeapException (LeapError is a type alias for LeapException provided for backward compatibility). The most common subclasses:
  • LeapModelLoadingException — problems reading or validating the model bundle.
  • LeapGenerationException — unexpected native inference errors.
  • LeapGenerationPromptExceedContextLengthException — prompt length exceeded the configured context size.
  • LeapSerializationException — JSON encoding/decoding problems on chat history or function calls.
Handle thrown errors with do / catch around the async streams returned by Conversation.generateResponse(...), or downcast with if let err = error as? LeapModelLoadingException { ... } to inspect a specific subclass.
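On the Kotlin side, the same hierarchy can be handled with an ordinary try / catch, matching the most specific subclass first. A minimal sketch — the Flow-returning shape of generateResponse, the exception import paths, and the trimAndRetry(...) helper are assumptions for illustration:

```kotlin
// Sketch: catch order matters — match the most specific subclass first,
// since every SDK error is also a LeapException.
suspend fun generateSafely(conversation: Conversation, message: ChatMessage) {
    try {
        // Assumption: the Kotlin API exposes generation as a collectable Flow.
        conversation.generateResponse(message).collect { chunk ->
            render(chunk) // stream chunks to the UI as they arrive (placeholder)
        }
    } catch (e: LeapGenerationPromptExceedContextLengthException) {
        // Prompt exceeded the configured context size — shorten history and retry.
        trimAndRetry(conversation, message) // hypothetical helper
    } catch (e: LeapGenerationException) {
        // Unexpected native inference error — log and surface to the user.
    } catch (e: LeapException) {
        // Any other SDK error (model loading, serialization, ...).
    }
}
```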

Serialization

ChatMessage, ChatMessageContent, LeapFunctionCall, and Manifest are serializable on every platform and round-trip cleanly into JSON compatible with OpenAI’s chat-completions schema.
Use Conversation.exportToJSON() to get an OpenAI-shaped JSON string, then restore by routing the JSON back through Kotlin’s serializer (there is no ChatMessage(from: [String: Any]) initializer):
// Serialize the conversation history (compact JSON string, OpenAI chat-completions shape)
let jsonString: String = conversation.exportToJSON()
let data: Data = Data(jsonString.utf8)

// Restore — `LeapJson.decodeFromString(...)` is the Kotlin-side decoder.
// For Swift-only round trips, persist the `jsonString` and pass it back to
// `modelRunner.createConversationFromHistory(history:)` after rebuilding the
// `[ChatMessage]` list via your shared Kotlin code, or use a server-side
// round-trip that talks to your sync backend.
Persist data to disk, UserDefaults, or your sync backend. On restore, decode the JSON via Kotlin’s LeapJson (re-exported through SKIE) into a [ChatMessage] and rebuild via modelRunner.createConversationFromHistory(history:). There is no Swift-native dictionary-based ChatMessage initializer — the Kotlin serializer is the source of truth on both platforms.
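The restore direction can be sketched in Kotlin. This is a hedged sketch, not the canonical API: the generic shape of LeapJson.decodeFromString(...) and the loadPersistedHistory() storage helper are assumptions based on the description above.

```kotlin
// Read back the string previously produced by conversation.exportToJSON().
val jsonString: String = loadPersistedHistory() // your own storage helper (placeholder)

// LeapJson is the Kotlin-side decoder (re-exported through SKIE); the exact
// generic signature here is an assumption.
val history: List<ChatMessage> = LeapJson.decodeFromString(jsonString)

// The Kotlin serializer is the source of truth — rebuild the conversation
// from the decoded history.
val conversation = modelRunner.createConversationFromHistory(history)
```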

Android LeapModelDownloader internals

This section is Android-only. iOS / macOS callers use the Swift ModelDownloader (shipped in the LeapModelDownloader SPM product), which routes transfers through URLSession — see Model Loading → Constructing the downloader for background-session configuration. The cross-platform LeapDownloader (used directly on JVM, Linux native, Windows native) is a plain async fetcher with no platform background-service hooks.
Beyond the high-level loadModel / loadSimpleModel methods covered in Model Loading, the Android LeapModelDownloader exposes a few lower-level methods for WorkManager background staging and status polling.

Permission setup

requestDownloadModel(...) enqueues a WorkManager download worker. During transfer, the worker runs in the foreground and displays notifications, so declare these in your AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.POST_NOTIFICATIONS" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_DATA_SYNC" />
On Android 13+ (API 33), request POST_NOTIFICATIONS at runtime:
private val requestPermissionLauncher = registerForActivityResult(
    ActivityResultContracts.RequestPermission()
) { isGranted ->
    if (isGranted) Log.d(TAG, "Notification permission granted")
    else Log.w(TAG, "Notification permission denied — downloads will still run, no UI")
}

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
    if (ContextCompat.checkSelfPermission(
            this, android.Manifest.permission.POST_NOTIFICATIONS
        ) != PackageManager.PERMISSION_GRANTED
    ) {
        requestPermissionLauncher.launch(android.Manifest.permission.POST_NOTIFICATIONS)
    }
}

Status polling API

class LeapModelDownloader(
    private val context: Context,
    modelFileDir: File? = null,
    private val notificationConfig: LeapModelDownloaderNotificationConfig = LeapModelDownloaderNotificationConfig(),
    private val downloaderConfig: LeapDownloaderConfig = LeapDownloaderConfig(),
    private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO,
) {
    suspend fun requestDownloadModel(modelName: String, quantizationType: String, forceDownload: Boolean = false)
    suspend fun requestStopDownload(modelName: String, quantizationType: String)
    suspend fun queryStatus(modelName: String, quantizationType: String): ModelDownloadStatus
    fun observeDownloadProgress(modelName: String, quantizationType: String): StateFlow<ModelDownloadProgress?>
    fun getModelResourceFolder(modelName: String, quantizationType: String): File

    @Deprecated("No longer needed with WorkManager - downloads are managed automatically")
    suspend fun requestStopService()

    // `ModelDownloadStatus` is nested under `LeapModelDownloader`.
    sealed interface ModelDownloadStatus {
        data object NotOnLocal : ModelDownloadStatus
        data class DownloadInProgress(
            val totalSizeInBytes: Long,
            val downloadedSizeInBytes: Long,
        ) : ModelDownloadStatus
        data class Downloaded(val totalSizeInBytes: Long) : ModelDownloadStatus
    }

    class ModelDownloadProgress {
        var totalSizeInBytes: Long
        var downloadedSizeInBytes: Long
        val progress: Double
    }
}
On Android, refer to the nested status type as LeapModelDownloader.ModelDownloadStatus.NotOnLocal / .DownloadInProgress / .Downloaded — the Android downloader does not expose a top-level ai.liquid.leap.downloader.ModelDownloadStatus. The Apple targets do ship a top-level ai.liquid.leap.downloader.ModelDownloadStatus sealed interface, but with a different payload — DownloadInProgress(progress: Double) and a data object Downloaded with no size field — so don’t share status-decoding code unmodified across platforms.
  • requestDownloadModel — suspend fire-and-forget prefetch. It enqueues a unique WorkManager download worker; the download itself survives app restarts, and the call returns after staging the work request.
  • requestStopDownload — suspend; cancels an in-flight background download.
  • queryStatus — suspend one-shot status check.
  • observeDownloadProgress — StateFlow<ModelDownloadProgress?> for UI updates during a background download. It emits null when no download is active.
  • getModelResourceFolder — the directory the SDK will use for this model+quantization on disk.
  • requestStopService — @Deprecated no-op since v0.10.6 (WorkManager handles the worker lifecycle automatically). Kept for source compatibility; new code shouldn’t call it.
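The polling and observation calls above can be tied together in a short usage sketch. Everything here follows the signatures listed in the class outline; the scope, TAG, model/quantization names, and updateUi(...) helper are placeholders:

```kotlin
// One-shot check before deciding whether to enqueue a download.
// queryStatus and requestDownloadModel are suspend, so run them in a coroutine.
scope.launch {
    when (val status = downloader.queryStatus("LFM2-1.2B", "Q5_K_M")) {
        is LeapModelDownloader.ModelDownloadStatus.NotOnLocal ->
            downloader.requestDownloadModel("LFM2-1.2B", "Q5_K_M")
        is LeapModelDownloader.ModelDownloadStatus.DownloadInProgress -> {
            val pct = 100.0 * status.downloadedSizeInBytes / status.totalSizeInBytes
            Log.d(TAG, "Already downloading: $pct%")
        }
        is LeapModelDownloader.ModelDownloadStatus.Downloaded ->
            Log.d(TAG, "Ready: ${status.totalSizeInBytes} bytes on disk")
    }
}

// Continuous progress for the UI; the StateFlow emits null when no
// download is active.
scope.launch {
    downloader.observeDownloadProgress("LFM2-1.2B", "Q5_K_M").collect { progress ->
        updateUi(progress?.progress ?: 0.0) // updateUi is a placeholder
    }
}
```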

Removing a downloaded model

Use the Android downloader’s resource folder to remove a model from disk, or construct a cross-platform LeapDownloader with the same saveDir and call its instance method deleteModelResources(...):
val resourceFolder = downloader.getModelResourceFolder(
    modelName = "LFM2-1.2B",
    quantizationType = "Q5_K_M",
)
resourceFolder.deleteRecursively()

// Equivalent when you know the saveDir:
LeapDownloader(LeapDownloaderConfig(saveDir = resourceFolder.parentFile!!.absolutePath)).deleteModelResources(
    modelName = "LFM2-1.2B",
    quantizationType = "Q5_K_M",
)

Putting it together

A minimal end-to-end snippet exercising load → conversation → tool registration → constrained generation → streaming.
let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first!.path
let modelsDir = (caches as NSString).appendingPathComponent("leap_models")
let downloader = ModelDownloader(config: LeapDownloaderConfig(saveDir: modelsDir))

let runner = try await downloader.loadModel(
  modelName: "LFM2.5-1.2B-Instruct",
  quantizationType: "Q4_K_M"
)
let conversation = runner.createConversation(systemPrompt: "You are a travel assistant.")

conversation.registerFunction(function: weatherFunction)

let options = GenerationOptions()
  .with(temperature: 0.3)
  .with(minP: 0.15)
  .with(repetitionPenalty: 1.05)
  .with(jsonSchema: TripRecommendation.jsonSchema())

let userMessage = ChatMessage(
  role: .user,
  textContent: "Plan a 3-day trip to Kyoto with food highlights"
)

for try await response in conversation.generateResponse(
  message: userMessage,
  generationOptions: options
) {
  process(response)
}
See also: Quick Start, Function Calling, Constrained Generation.