# LFM2.5-VL-450M

LFM2.5-VL-450M is Liquid AI's smallest vision-language model. It builds on LFM2-VL-450M with extended reinforcement learning for improved performance while keeping the same compact deployment footprint.

## Documentation Index
Fetch the complete documentation index at: https://liquidai-fix-android-sdk-qa-issues.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## Specifications
| Property | Value |
|---|---|
| Parameters | 450M |
| Context Length | 32K tokens |
| Architecture | LFM2.5-VL (Dense) |
- **Ultra-Light**: minimal memory footprint
- **Low Latency**: fastest vision-model inference
- **Edge-Ready**: runs on mobile and embedded devices
## Quick Start
- Transformers
- vLLM
- SGLang
- llama.cpp
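As a starting point for the Transformers path, here is a minimal sketch of how a vision-language chat request is typically assembled. The checkpoint id `LiquidAI/LFM2.5-VL-450M`, the `AutoModelForImageTextToText` class, and the image URL are assumptions based on common Hugging Face conventions for this model family; consult the official model card for the exact quick-start code.

```python
def build_messages(image_url: str, prompt: str) -> list:
    """Build a chat-template message list pairing one image with one text turn."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]


# Actual inference (requires downloading the weights; ids and classes are
# assumptions, not confirmed by this page):
# from transformers import AutoProcessor, AutoModelForImageTextToText
# model_id = "LiquidAI/LFM2.5-VL-450M"  # assumed checkpoint id
# processor = AutoProcessor.from_pretrained(model_id)
# model = AutoModelForImageTextToText.from_pretrained(model_id)
# inputs = processor.apply_chat_template(
#     build_messages("https://example.com/cat.jpg", "Describe this image."),
#     add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# )
# output = model.generate(**inputs, max_new_tokens=64)
# print(processor.batch_decode(output, skip_special_tokens=True)[0])
```

The same message structure carries over to vLLM and SGLang, which accept OpenAI-style chat payloads with interleaved image and text content.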