
Ramin Hasani
@ramin_m_h
building @LiquidAI_
Building a foundation model is an art! It involves many complex dimensions and stages, from architecture and data to pre-training, post-training, and inference. Getting it all right requires masterful and tasteful execution. There are very few teams around the world that can make…
Day 1 of the @LiquidAI_ fine-tuning hackathon in Tokyo this weekend. Jointly organized with @weights_biases and @LambdaAPI
Liquid AI Releases LFM2-8B-A1B: An On-Device Mixture-of-Experts with 8.3B Params and a 1.5B Active Params per Token How much capability can a sparse 8.3B-parameter MoE with a ~1.5B active path deliver on your phone without blowing latency or memory? Liquid AI has released…
Btw, it should get faster on the next version of MLX Swift. We made some improvements to 1D grouped convs that will speed up this model nicely.
Apple wasn’t kidding, the iPhone 17 Pro is really built for running LLMs. Here’s LFM2 8B A1B by @LiquidAI_ running on-device with MLX in @LocallyAIApp; the iPhone runs the 8B model with zero struggle. Thanks @Prince_Canuma for the port to MLX, it made the MLX Swift port possible.
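For readers unfamiliar with the "1D grouped convs" mentioned in the reply above: a minimal PyTorch sketch (a stand-in for the MLX Swift op, not LiquidAI code) of a fully grouped, i.e. depthwise, short 1D convolution of the kind used in LFM2-style conv blocks.

```python
# Minimal sketch of a grouped 1D convolution (PyTorch stand-in for the MLX op).
# groups == channels makes it depthwise: each channel gets its own short filter.
import torch
import torch.nn as nn

channels, kernel_size, seq_len = 512, 3, 128
conv = nn.Conv1d(channels, channels, kernel_size,
                 groups=channels, padding=kernel_size - 1)

x = torch.randn(1, channels, seq_len)   # (batch, channels, time)
y = conv(x)[..., :seq_len]              # crop so each step only sees past inputs
print(y.shape)                          # torch.Size([1, 512, 128])
```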
Hello everyone! Let me (re)introduce myself!
People should give @LiquidAI_ models a try with Spectrum. You can SFT/RLFT your models with a VERY LOW memory footprint, without having to do LoRA or QLoRA... This beautiful thing prevents a lot of catastrophic forgetting. LFM models work out of the box.
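A minimal sketch of the Spectrum idea referenced above: freeze everything, then fully fine-tune only a selected subset of layers. Spectrum chooses that subset by signal-to-noise analysis; the keyword list below is only a placeholder for its output, and the Hugging Face repo id is an assumption.

```python
# Hedged sketch of Spectrum-style selective fine-tuning (not the Spectrum tool itself).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "LiquidAI/LFM2-1.2B",          # assumed repo id; swap in the LFM checkpoint you use
    torch_dtype=torch.bfloat16,
)

# Placeholder for the layer list Spectrum would emit (e.g. its top-SNR layers).
trainable_keywords = ["layers.10.", "layers.11.", "lm_head"]

for name, param in model.named_parameters():
    param.requires_grad = any(k in name for k in trainable_keywords)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable / total:.1%} of parameters in full precision")
# The model can then go through a standard SFT/RLFT loop; only the unfrozen
# layers hold optimizer state, which is where the memory savings come from.
```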

I added LFM2-8B-A1B in @LocallyAIApp for iPhone 17 Pro and iPhone Air. The first mixture-of-experts model by @LiquidAI_: 8B total parameters (1B active), performance similar to 3-4B models but the speed of a 1B model. Runs great on the 17 Pro with Apple MLX.
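The apps above use MLX Swift; a minimal Python equivalent with mlx_lm is sketched below. The quantized repo id is an assumption, so substitute whichever MLX-converted LFM2 checkpoint you actually have.

```python
# Hedged sketch of running an MLX-converted LFM2 model from Python with mlx_lm.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-8B-A1B-4bit")  # assumed repo id

prompt = "Explain what a mixture-of-experts model is in two sentences."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(text)
```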

We just released LFM2-8B-A1B, a small MoE optimized for latency-sensitive applications on-device. Larger model quality with the speed of a 1.5B class model. Huggingface: huggingface.co/LiquidAI/LFM2-… Blog: liquid.ai/blog/lfm2-8b-a…
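A minimal Hugging Face transformers sketch for trying the release above. The repo id follows the naming in the announcement (the link is truncated), and a transformers version recent enough to include LFM2 support is assumed.

```python
# Hedged sketch: load LFM2-8B-A1B with transformers and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"   # assumed repo id, per the announcement
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "One sentence on why sparse MoE helps on-device."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```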
Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘 > LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed. > Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs. > Quantized variants fit comfortably on…

LFM2-8B-A1B just dropped on @huggingface! 8.3B params with only 1.5B active/token 🚀 > Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B > MoE designed to run on phones/laptops (llama.cpp / vLLM) > Pre-trained on 12T tokens → strong math/code/IF
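Since the tweet above mentions vLLM, here is a minimal serving sketch. It assumes a vLLM build recent enough to support the LFM2 MoE architecture and the same assumed repo id as above.

```python
# Hedged sketch of offline inference with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2-8B-A1B")              # assumed repo id
params = SamplingParams(temperature=0.3, max_tokens=128)

outputs = llm.generate(
    ["Summarize why a 1.5B active path makes an 8B MoE fast on a laptop."],
    params,
)
print(outputs[0].outputs[0].text)
```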
Small MoEs are on the rise. @LiquidAI_ drops LFM2-8B-A1B.
Enjoy our even better on-device model! 🐘 Running on @amd AI PCs with the fastest inference profile!

Meet LFM2-8B-A1B by @LiquidAI_ - 8B total and 1B active params 🐘 - 5x faster on CPUs and GPUs ⚡️ - Perfect for fast, private, edge 📱/💻/🚗/🤖



LFM2-8B-A1B: Liquid AI’s first on-device MoE, with 8.3B total parameters and 1.5B active per token. It matches 3–4B dense-model quality while running faster than Qwen3-1.7B. Architecture: 18 gated short-conv blocks, 6 GQA blocks (LFM2 backbone), sparse MoE feed-forward layers…
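To make the "1.5B active out of 8.3B total" point concrete, here is a toy sparse-MoE feed-forward layer with top-k routing. The dimensions and expert count are illustrative only, not the real LFM2-8B-A1B configuration.

```python
# Toy top-k MoE feed-forward: each token runs through only its top-k experts,
# so only a small "active" slice of the total parameters is used per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySparseFFN(nn.Module):
    def __init__(self, d_model=256, d_ff=512, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                               # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                     # only the selected experts run
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(16, 256)
print(ToySparseFFN()(x).shape)                          # torch.Size([16, 256])
```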

LFM2-Audio-1.5B: Liquid AI’s first end-to-end audio foundation model, built for real-time conversation at only 1.5B parameters. Competitive with much larger models, it unifies speech and text without separate ASR or TTS. Architecture: LFM2 multimodal backbone, FastConformer…



It's a good model sir. Very proud of the team, we worked very hard to be on the Pareto frontier of quality and efficiency. Even had the chance to write a CPU-optimized kernel for MoE to squeeze everything from the hardware, and that gave us those sweet throughput results.
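The actual CPU kernel mentioned above is not public here, but one standard idea behind fast MoE on CPUs is sketched below: gather the tokens assigned to each expert into a contiguous batch so each expert does a single dense matmul instead of scattered per-token work. Top-1 routing and the shapes are simplifications for illustration.

```python
# Hedged sketch of expert-grouped MoE execution (not the actual Liquid kernel).
import numpy as np

rng = np.random.default_rng(0)
tokens, d_model, n_experts = 32, 64, 4
x = rng.standard_normal((tokens, d_model), dtype=np.float32)
expert_w = rng.standard_normal((n_experts, d_model, d_model), dtype=np.float32)
assignment = rng.integers(0, n_experts, size=tokens)   # top-1 routing for simplicity

out = np.empty_like(x)
order = np.argsort(assignment, kind="stable")          # group token indices by expert
counts = np.bincount(assignment, minlength=n_experts)
start = 0
for e, c in enumerate(counts):
    idx = order[start:start + c]                       # contiguous slice for expert e
    out[idx] = x[idx] @ expert_w[e]                    # one dense GEMM per expert
    start += c
```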

Awesome TTS model built on LFM2-350M
Just dropped on HF: kani-tts-370m A lightweight open-source text-to-speech model that sounds great and runs fast! > 370M parameters — efficient and deployable on consumer GPUs > NanoCodec + LFM2-350M > Natural & expressive voice trained with modern neural TTS techniques > Fast…