Introducing Liquid Nanos ⚛️ — a new family of extremely tiny task-specific models that deliver GPT-4o-class performance while running directly on phones, laptops, cars, embedded devices, and GPUs with the lowest latency and fastest generation speed. > model size: 350M to 2.6B >…
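For readers who want to poke at one of these task-specific checkpoints, here is a minimal sketch using the standard Hugging Face transformers API. The repo id, prompt, and generation settings are illustrative assumptions, not details from the post.

```python
# Hypothetical sketch: loading a small LFM2-family checkpoint with
# Hugging Face transformers (a recent transformers release may be required).
# The repo id below is an assumption; substitute the Liquid Nanos
# checkpoint you actually want to try.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LiquidAI/LFM2-350M"  # assumed repo id, not confirmed by the post

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Extract the invoice total from: 'Total due: $412.50'"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```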

I added LFM2-8B-A1B to @LocallyAIApp for iPhone 17 Pro and iPhone Air. It's the first mixture-of-experts model by @LiquidAI_: 8B total parameters (1B active), with performance similar to 3-4B models but the speed of a 1B model. Runs great on the 17 Pro with Apple MLX.
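For a rough idea of what running this model through Apple MLX looks like outside the app, here is a sketch using the mlx-lm Python package. The repo id and prompt are assumptions (an MLX-converted checkpoint is needed), and LocallyAI's actual integration is not shown in the post.

```python
# A minimal sketch, assuming an MLX-compatible LFM2-8B-A1B checkpoint is
# available; the repo id below is an assumption, not taken from the post.
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2-8B-A1B")  # assumed repo id

response = generate(
    model,
    tokenizer,
    prompt="Summarize why sparse MoE models can be fast on-device.",
    max_tokens=128,
)
print(response)
```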

Meet LFM2-8B-A1B by @LiquidAI_ - 8B total and 1B active params 🐘 - up to 5x faster on CPUs and GPUs ⚡️ - perfect for fast, private edge AI 📱/💻/🚗/🤖


Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘 > LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed. > Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs. > Quantized variants fit comfortably on…
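The post mentions quantized variants for on-device use; here is a hedged sketch of what that could look like with llama-cpp-python, assuming a local GGUF file. The file name, context size, and thread count are placeholders, not values from the announcement.

```python
# A hedged sketch of running a quantized LFM2-8B-A1B GGUF on CPU with
# llama-cpp-python; all specific values below are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="LFM2-8B-A1B-Q4_K_M.gguf",  # assumed local file name
    n_ctx=4096,    # context window
    n_threads=8,   # CPU threads; tune per device
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three on-device AI use cases."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```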

Enjoy our even better on-device model! 🐘 Running on @amd AI PCs with the fastest inference profile!

Over the last 90 days we shipped hard at @LiquidAI_. 🚢 🐘 LFM2 tiny instances: our fastest on-device models at 350M, 700M, and 1.2B, with a flagship new architecture. 🐸 LEAP: our on-device AI platform, from use case to model deployment on phones and laptops in 5 minutes. 👁️ LFM2 vision-language…

Today we are broadening access to local AI with the launch of Apollo on Android. The @apolloaiapp is our low-latency cloud-free “playground in your pocket” that allows users to instantly access fast, effective AI - without sacrificing privacy or security. Together, Apollo and…
🔉🤖 The announcement you’ve been waiting for is here: Apollo is available on Android! Now you can easily access all the local, secure AI technology you’ve loved on iOS from whichever phone is in your pocket. Apollo’s low-latency, cloud-free platform and library of small models…

Pushing Chinchilla scaling laws for multimodal models with our new line of omni Liquid foundation models! Tech > S2S + TTS + ASR, all in one model! > below 100ms latency > over 10x faster inference > based on our efficient LFM2 > 56.8 VoiceBench score > fast, private,…

Coming to MLX-Audio 🚀🔥
LFM2-Audio just dropped! It's a 1.5B model that understands and generates both text and audio. Inference is 10x faster, with quality on par with models 10x larger. Available today on @huggingface and our playground 🥳
A very busy week for LLMs: Sonnet 4.5, DeepSeek 3.2, GLM 4.6, and Gemini 3 could drop at any moment. But personally I find the progress on SLMs more relevant: @LiquidAI_ just released LFM2. Input: audio + text; output: audio + text. With only 1.5B params, it can…

Today, we expand our LFM2 family to audio. 👂👄 LFM2-Audio is an end-to-end audio-text omni foundation model that delivers responsive, real-time conversation on-device at just 1.5B parameters. One model. Seamless multimodal support. No chains. > Speech-to-speech >…
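The posts only state that LFM2-Audio is available on Hugging Face, so a minimal sketch under that assumption simply fetches the checkpoint with huggingface_hub; the speech-to-speech inference API itself is not described here and is left out rather than guessed at.

```python
# A minimal sketch, assuming only that LFM2-Audio weights are hosted on
# Hugging Face as the posts state; the repo id is an assumption.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("LiquidAI/LFM2-Audio-1.5B")  # assumed repo id
print(f"Model files downloaded to: {local_dir}")
```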