Liquid AI Released LFM2.5-350M: A Compact 350M Parameter Model Trained on 28T Tokens with Scaled Reinforcement Learning
In the current landscape of generative AI, scaling laws have generally dictated that more parameters equal more intelligence. Liquid AI is challenging this convention with the release of LFM2.5-350M, a technical case study in intelligence density built on extended pre-training (scaled from 10T to 28T tokens) and large-scale reinforcement learning.
The significance of LFM2.5-350M lies in its architecture and training efficiency. While most AI companies have focused on frontier models, Liquid AI is targeting the 'edge' (devices with limited memory and compute) by proving that a 350-million-parameter model can outperform models more than twice its size on several evaluated benchmarks.
https://www.liquid.ai/blog/lfm2-5-350m-no-size-left-behind
Architecture: The Hybrid LIV Backbone
The core technical differentiator of the LFM2.5-350M is its departure from the pure Transformer architecture. It utilizes a hybrid structure built on Linear Input-Varying Systems (LIVs).
Traditional Transformers rely entirely on self-attention mechanisms, which scale poorly with context length: attention computation grows quadratically with the context window, and the memory required for the Key-Value (KV) cache grows with every token processed. Liquid AI addresses this by using a hybrid backbone consisting of:
- 10 Double-Gated LIV Convolution Blocks: These handle the majority of the sequence processing. LIVs function similarly to advanced Recurrent Neural Networks (RNNs) but are designed to be more parallelizable and stable during training. They maintain a constant-state memory, reducing the I/O overhead.
- 6 Grouped Query Attention (GQA) Blocks: By integrating a small number of attention blocks, the model retains high-precision retrieval and long-range context handling without the full memory overhead of a standard Transformer.
This hybrid approach allows the LFM2.5-350M to support a 32k context window (32,768 tokens) while maintaining an extremely lean memory footprint.
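To make the layer mix concrete, below is a minimal PyTorch sketch of how a stack of gated short-convolution blocks and GQA blocks could be wired together. The block internals, hidden size, head counts, and interleaving are illustrative assumptions, not Liquid AI's published implementation.

```python
# Illustrative sketch of a hybrid gated-conv / GQA stack (assumptions, not
# Liquid AI's actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    """Stand-in for a double-gated short-convolution (LIV-style) block."""
    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.in_proj = nn.Linear(dim, 2 * dim)                 # value + gate
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)  # causal depthwise conv
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):                                      # x: (batch, seq, dim)
        v, g = self.in_proj(x).chunk(2, dim=-1)
        v = self.conv(v.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + self.out_proj(v * torch.sigmoid(g))         # gated residual

class GQABlock(nn.Module):
    """Grouped-query attention: many query heads share fewer K/V heads."""
    def __init__(self, dim: int, n_q_heads: int = 8, n_kv_heads: int = 2):
        super().__init__()
        self.hd, self.n_q, self.n_kv = dim // n_q_heads, n_q_heads, n_kv_heads
        self.q = nn.Linear(dim, n_q_heads * self.hd)
        self.kv = nn.Linear(dim, 2 * n_kv_heads * self.hd)     # much smaller KV cache
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q(x).view(b, t, self.n_q, self.hd).transpose(1, 2)
        k, v = self.kv(x).view(b, t, 2, self.n_kv, self.hd).unbind(2)
        k = k.transpose(1, 2).repeat_interleave(self.n_q // self.n_kv, dim=1)
        v = v.transpose(1, 2).repeat_interleave(self.n_q // self.n_kv, dim=1)
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return x + self.out(y.transpose(1, 2).reshape(b, t, -1))

# 16-block hybrid stack: 10 conv blocks + 6 attention blocks (ordering assumed).
dim = 512
model = nn.Sequential(*[GatedConvBlock(dim) for _ in range(10)],
                      *[GQABlock(dim) for _ in range(6)])
print(model(torch.randn(1, 32, dim)).shape)   # torch.Size([1, 32, 512])
```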
Performance and Intelligence Density
The LFM2.5-350M was pre-trained on 28 trillion tokens with an extremely high training-to-parameter ratio. This ensures that the model’s limited parameter count is utilized to its maximum potential, resulting in high ‘intelligence density.’
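For context on what "extremely high" means here, a quick back-of-the-envelope calculation gives the token-to-parameter ratio:

```python
# Back-of-the-envelope: training tokens per parameter for LFM2.5-350M.
tokens = 28e12   # 28 trillion pre-training tokens
params = 350e6   # 350 million parameters
print(f"{tokens / params:,.0f} tokens per parameter")  # -> 80,000
# For comparison, the Chinchilla "compute-optimal" heuristic is roughly
# 20 tokens per parameter, so this sits far into the over-training regime.
```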
Benchmarks and Use Cases
The LFM2.5-350M is a specialist model designed for high-speed, agentic tasks rather than general-purpose reasoning.
Benchmark | Score
IFEval (Instruction Following) | 76.96
GPQA Diamond | 30.64
MMLU-Pro | 20.01
The high IFEval score indicates the model is efficient at following complex, structured instructions, making it suitable for tool use, function calling, and structured data extraction (e.g., JSON). However, the documentation explicitly states that LFM2.5-350M is not recommended for mathematics, complex coding, or creative writing. For those tasks, larger models with stronger reasoning capabilities remain necessary.
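As an illustration of the structured-extraction use case, here is a minimal sketch using the Hugging Face transformers library. The repository id below is an assumption (check Liquid AI's model card for the exact name and the required transformers version); the prompt and field names are purely hypothetical.

```python
# Minimal sketch: structured JSON extraction with a small instruction-tuned model.
# The model id is an assumed placeholder -- verify it against the official model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-350M"   # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Extract fields as JSON with keys: name, date, amount."},
    {"role": "user", "content": "Invoice from Acme Corp dated 2025-03-14 for $1,240.50."},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```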
https://www.liquid.ai/blog/lfm2-5-350m-no-size-left-behind
Hardware Optimization and Inference Efficiency
A major hurdle for AI developers is the 'memory wall', the bottleneck created by moving data between the processor and memory. Because the LFM2.5-350M utilizes LIVs and GQA, it drastically reduces KV cache size, boosting throughput. On a single NVIDIA H100 GPU, the model can reach a throughput of 40.4K output tokens per second at high concurrency.
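To see why fewer attention layers plus GQA translate into a smaller KV cache, here is a rough sizing sketch. The head counts, head dimension, and precision are illustrative assumptions, not published specifications for this model.

```python
# Rough per-sequence KV-cache sizing: few GQA layers vs. a full-attention stack.
# Head counts and dimensions below are illustrative assumptions only.
def kv_cache_mb(layers, kv_heads, head_dim, seq_len, bytes_per=2):
    # 2x for keys and values; bytes_per=2 assumes fp16 storage
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per / 1e6

seq = 32_768  # the 32k context window
print("16 full-attention layers:", kv_cache_mb(16, kv_heads=8, head_dim=64, seq_len=seq), "MB")
print(" 6 GQA layers           :", kv_cache_mb(6,  kv_heads=2, head_dim=64, seq_len=seq), "MB")
```

Under these assumptions, going from 16 full-attention layers to 6 GQA layers cuts the per-sequence cache from roughly 1 GB to about 100 MB at the full 32k context length, about an order of magnitude.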
The Liquid AI team reports device-specific low-memory inference results that make local deployment viable:
- Snapdragon 8 Elite NPU: 169 MB peak memory using RunAnywhere Q4.
- Snapdragon GPU: 81 MB peak memory using RunAnywhere Q4.
- Raspberry Pi 5: 300 MB using Cactus Engine int8.
Key Takeaways
- Extreme Intelligence Density: By training a 350M-parameter model on 28 trillion tokens, the Liquid AI team achieved an 80,000:1 token-to-parameter ratio, allowing it to outperform models more than twice its size on several benchmarks.
- Hybrid LIV Architecture: The model departs from pure Transformers by using Linear Input-Varying Systems (LIVs) combined with a small number of Grouped Query Attention (GQA) blocks, significantly reducing the memory overhead of the KV cache.
- Edge-First Efficiency: It is designed for local deployment with a 32k context window and a remarkably low memory footprint, reaching as low as 81 MB on mobile GPUs and 169 MB on NPUs via specialized inference engines.
- Specialized Agentic Capability: The model is highly optimized for instruction following (IFEval: 76.96) and tool use, though it is explicitly not recommended for complex coding, mathematics, or creative writing.
- Massive Throughput: The architectural efficiency enables high-speed utility, processing up to 40.4K output tokens per second on a single H100, making it ideal for high-volume data extraction and real-time classification.
Check out the technical details and model weights.