
Ramin Hasani

@ramin_m_h

building @LiquidAI_

Pinned

Building a foundation model is an art! It involves many complex dimensions and stages, from architecture, data, pre-training, and post-training to inference. Getting it all right requires masterful and tasteful execution. There are very few teams around the world that can make…


Ramin Hasani reposted

Day 1 of the @LiquidAI_ fine-tuning hackathon in Tokyo this weekend. Jointly organized with @weights_biases and @LambdaAPI


Ramin Hasani reposted

Liquid AI Releases LFM2-8B-A1B: An On-Device Mixture-of-Experts with 8.3B Params and 1.5B Active Params per Token. How much capability can a sparse 8.3B-parameter MoE with a ~1.5B active path deliver on your phone without blowing latency or memory? Liquid AI has released…


Ramin Hasani reposted

Btw, it should get faster on the next version of MLX Swift. We made some improvements to 1D grouped convs that will speed up this model nicely.
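For context, a 1D grouped convolution splits the channels into independent groups that are convolved separately, which is what the speedup targets. A minimal illustration in Python MLX (the post concerns the Swift API; shapes here follow MLX's channels-last convention, and the sizes are arbitrary):

```python
# Quick sketch of a 1D grouped conv in Python MLX (not the Swift code
# the post refers to). Input is (batch, length, channels); the weight
# is (out_channels, kernel_size, in_channels // groups).
import mlx.core as mx

x = mx.random.normal((1, 100, 64))   # (N, L, C_in), channels-last
w = mx.random.normal((64, 3, 8))     # 8 groups -> 64 / 8 = 8 input channels per group
y = mx.conv1d(x, w, stride=1, padding=1, groups=8)
print(y.shape)  # (1, 100, 64)
```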


Ramin Hasani reposted

Apple wasn’t kidding, the iPhone 17 Pro is really built for running LLMs. Here’s LFM2-8B-A1B by @LiquidAI_ running on-device with MLX in @LocallyAIApp; the iPhone runs the 8B model with zero struggle. Thanks @Prince_Canuma for the port to MLX, it made the MLX Swift port possible.


Ramin Hasani reposted

Hello everyone! Let me (re)introduce myself!


Ramin Hasani reposted

People should give @LiquidAI_ models a try with Spectrum. You can SFT/RLFT your models with a VERY LOW memory footprint, without having to do LoRA or qLoRA... This beautiful thing prevents a lot of catastrophic forgetting. LFM models work out of the box.

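For anyone wanting to try it: a minimal sketch of the Spectrum idea, which is to train only the layers a signal-to-noise scan selects and freeze everything else. The repo id and layer-name patterns below are illustrative assumptions, not Spectrum's actual output (in practice they come from the YAML Spectrum generates):

```python
# Minimal sketch of Spectrum-style targeted fine-tuning: freeze all
# parameters, then unfreeze only the layers a Spectrum SNR scan picked.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "LiquidAI/LFM2-1.2B",  # assumed repo id; any LFM2 checkpoint works the same way
    torch_dtype=torch.bfloat16,
)

# Hypothetical high-SNR layer fragments; real ones come from Spectrum's YAML.
trainable_patterns = ["layers.3.", "layers.7.", "layers.11."]

for name, param in model.named_parameters():
    param.requires_grad = any(p in name for p in trainable_patterns)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable / total:.1%} of {total / 1e9:.2f}B params")
```

Because the frozen layers need no optimizer state, memory stays low without low-rank adapters, which is the point the post is making.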

Ramin Hasani reposted

I added LFM2-8B-A1B in @LocallyAIApp for iPhone 17 Pro and iPhone Air. The first mixture-of-experts model by @LiquidAI_: 8B total parameters (1B active), performance similar to 3-4B models but the speed of a 1B model. Runs great on the 17 Pro with Apple MLX.

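The Python analogue of that on-device setup, as a rough sketch with mlx-lm (the repo id is an assumption based on the model name; on a phone or for tighter memory you would more likely load a quantized MLX conversion):

```python
# Rough sketch of running the MLX port on a Mac with mlx-lm.
# The iPhone path in the post uses MLX Swift; this is the Python side.
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2-8B-A1B")  # assumed repo id

prompt = "Explain mixture-of-experts in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(text)
```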

Ramin Hasani reposted

We just released LFM2-8B-A1B, a small MoE optimized for latency-sensitive applications on-device. Larger-model quality with the speed of a 1.5B-class model. Huggingface: huggingface.co/LiquidAI/LFM2-… Blog: liquid.ai/blog/lfm2-8b-a…

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘
> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…
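A quick way to try the release, as a hedged sketch: loading the checkpoint with Hugging Face transformers. The repo id is inferred from the model name (the link above is truncated), and LFM2 support may require a recent transformers release:

```python
# Hedged sketch: chat with the released checkpoint via transformers.
# Repo id assumed from the model name; check the HF page for the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LiquidAI/LFM2-8B-A1B"  # assumption, not copied from the truncated link
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "What is an on-device MoE?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```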


Ramin Hasani reposted

LFM2-8B-A1B just dropped on @huggingface! 8.3B params with only 1.5B active/token 🚀
> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF
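For the llama.cpp route mentioned above, a hedged sketch using llama-cpp-python (the GGUF filename is hypothetical; download an actual quantized file from the HF repo):

```python
# Sketch of the laptop/CPU path via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="LFM2-8B-A1B-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,
    n_threads=8,  # CPU threads; the sparse active path keeps per-token work small
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize MoE routing briefly."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```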


Ramin Hasani reposted

Small MoEs are on the rise. @LiquidAI_ drops LFM2-8B-A1B.

LFM2-8B-A1B just dropped on @huggingface! 8.3B params with only 1.5B active/token 🚀
> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF



Enjoy our even better on-device model! 🐘 Running on @amd AI PCs with the fastest inference profile!

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘
> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…


Ramin Hasani reposted

Meet LFM2-8B-A1B by @LiquidAI_
- 8B total and 1B active params 🐘
- 5x faster on CPUs and GPUs ⚡️
- Perfect for fast, private, edge 📱/💻/🚗/🤖

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘
> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…


Ramin Hasani reposted

LFM2-8B-A1B
Liquid AI’s first on-device MoE, with 8.3B total parameters and 1.5B active per token. It matches 3–4B dense model quality while running faster than Qwen3-1.7B.

Architecture
- 18 gated short-conv blocks, 6 GQA blocks (LFM2 backbone)
- Sparse MoE feed-forward layers…
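To make the "sparse MoE feed-forward" bullet concrete, a toy PyTorch sketch of top-k expert routing, the mechanism behind "8.3B total / 1.5B active": only the k selected experts run per token. Sizes and router details here are illustrative, not LFM2's actual configuration:

```python
# Toy sparse-MoE feed-forward layer with top-k routing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim=512, n_experts=32, k=4):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # k experts per token
        weights = F.softmax(weights, dim=-1)        # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                  # naive loops; real kernels batch this
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

y = SparseMoE()(torch.randn(10, 512))  # quick shape check: (10, 512)
```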

LFM2-Audio-1.5B
Liquid AI’s first end-to-end audio foundation model, built for real-time conversation at only 1.5B parameters. Competitive with much larger models, it unifies speech and text without separate ASR or TTS.

Architecture
- LFM2 multimodal backbone
- FastConformer…


Ramin Hasani reposted

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘
> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…

Ramin Hasani reposted

It's a good model sir. Very proud of the team; we worked very hard to be on the Pareto frontier of quality and efficiency. We even had the chance to write a CPU-optimized kernel for MoE to squeeze everything from the hardware, and that gave us those sweet throughput results.
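Liquid's kernel isn't public here, but the standard trick such CPU MoE kernels rely on can be sketched: sort tokens by assigned expert so each expert's weight matrix streams through cache once and is applied as a single dense GEMM, instead of per-token gathers. A generic NumPy illustration, not Liquid's implementation:

```python
# Generic sketch of expert-grouped MoE feed-forward on CPU.
import numpy as np

def moe_ffn_grouped(x, expert_ids, W):
    # x: (tokens, dim), expert_ids: (tokens,), W: (n_experts, dim, dim_out)
    order = np.argsort(expert_ids, kind="stable")   # group tokens by expert
    xs, ids = x[order], expert_ids[order]
    out = np.empty((x.shape[0], W.shape[-1]), dtype=x.dtype)
    start = 0
    for e, count in zip(*np.unique(ids, return_counts=True)):
        # one dense GEMM per active expert; its weights are read exactly once
        out[start:start + count] = xs[start:start + count] @ W[e]
        start += count
    inv = np.empty_like(order)
    inv[order] = np.arange(order.size)              # restore original token order
    return out[inv]

x = np.random.randn(64, 128).astype(np.float32)
ids = np.random.randint(0, 8, size=64)
W = np.random.randn(8, 128, 256).astype(np.float32)
print(moe_ffn_grouped(x, ids, W).shape)  # (64, 256)
```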

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘
> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…


Ramin Hasani reposted

Awesome TTS model built on LFM2-350M

Just dropped on HF: kani-tts-370m
A lightweight open-source text-to-speech model that sounds great and runs fast!
> 370M parameters — efficient and deployable on consumer GPUs
> NanoCodec + LFM2-350M
> Natural & expressive voice trained with modern neural TTS techniques
> Fast…



Ramin Hasani reposted

Just dropped on HF: kani-tts-370m
A lightweight open-source text-to-speech model that sounds great and runs fast!
> 370M parameters — efficient and deployable on consumer GPUs
> NanoCodec + LFM2-350M
> Natural & expressive voice trained with modern neural TTS techniques
> Fast…

