
Liquid AI

@LiquidAI_

Build efficient general-purpose AI at every scale.

Pinned

Introducing Liquid Nanos ⚛️ — a new family of extremely tiny task-specific models that deliver GPT-4o-class performance while running directly on phones, laptops, cars, embedded devices, and GPUs with the lowest latency and fastest generation speed.

> model size: 350M to 2.6B
>…

Liquid AI reposted

I added LFM 2 8B A1B in @LocallyAIApp for iPhone 17 Pro and iPhone Air

The first mixture of experts model by @LiquidAI_, 8B total parameters (1B active), performance similar to 3-4B models but speed of a 1B model

Runs great on the 17 Pro with Apple MLX

Liquid AI reposted

Meet LFM2-8B-A1B by @LiquidAI_
- 8B total and 1B active params 🐘
- 5x faster on CPUs and GPUs ⚡️
- Perfect for fast, private, edge 📱/💻/🚗/🤖

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘

> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…
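A rough sketch of why a sparse MoE like this can claim the quality of a larger dense model at the speed of a small one: quality tracks total parameters, while per-token compute tracks only the parameters actually routed per token. The 8B-total/1B-active split comes from the announcement; the 4B dense-equivalent figure below is an illustrative assumption, not Liquid's published configuration.

```python
# Illustrative MoE arithmetic (assumptions noted in comments).
total_params = 8e9    # all experts combined (from the tweet)
active_params = 1e9   # parameters routed per token (from the tweet)

# Only a fraction of weights participate in each forward pass.
active_fraction = active_params / total_params

# Per-token compute is roughly proportional to active params, so the
# speedup over a dense model of comparable quality (~4B, assuming the
# top of the "3B-4B class" claim) is approximately:
dense_equivalent = 4e9  # hypothetical dense comparison point
approx_speedup = dense_equivalent / active_params

print(f"active fraction: {active_fraction:.2%}")          # 12.50%
print(f"approx speedup vs 4B dense: {approx_speedup:.0f}x")  # 4x
```

This back-of-envelope ratio lands in the same ballpark as the "up to 5x faster" claim; real speedups also depend on memory bandwidth, routing overhead, and quantization.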


Liquid AI reposted

Enjoy our even better on-device model! 🐘 Running on @amd AI PCs with the fastest inference profile!

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘

> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…


Liquid AI reposted

Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘

> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…

Liquid AI reposted

The last 90 days we shipped hard at @LiquidAI_. 🚢

🐘 LFM2 tiny instances. fastest on-device models 350M, 700M, 1.2B with a flagship new architecture.

🐸 LEAP. our device ai platform, from use-case to model deployment on phones and laptops in 5min.

👁️ LFM2 Vision language…

Today we are broadening access to local AI with the launch of Apollo on Android. The @apolloaiapp is our low-latency cloud-free “playground in your pocket” that allows users to instantly access fast, effective AI - without sacrificing privacy or security. Together, Apollo and…

🔉🤖 The announcement you’ve been waiting for is here: Apollo is available on Android!

Now you can easily access all the local, secure AI technology you’ve loved on iOS from whichever phone is in your pocket.

Apollo’s low-latency, cloud-free platform and library of small models…


Liquid AI reposted

It seems @LiquidAI_ is the new western AI pride? Trending ^^


Liquid AI reposted

Pushing chinchilla scaling laws for multimodal models with our new line of omni Liquid foundation models!

Tech
> s2s + TTS + ASR, all in one model!
> below 100ms latency
> over 10x faster inference
> based on our efficient LFM v2
> 56.8 VoiceBench score
> fast, private,…

Liquid AI reposted

Coming to MLX-Audio 🚀🔥

LFM2-Audio just dropped! It's a 1.5B model that understands and generates both text and audio

Inference 10x faster + quality on par with models 10x larger

Available today on @huggingface and our playground 🥳



Liquid AI reposted

Very busy week for LLMs: Sonnet 4.5, DeepSeek 3.2, GLM 4.6, and Gemini 3 could drop any moment

But personally I find the progress on SLMs more relevant. @LiquidAI_ just released LFM2

Audio + text input, audio + text output
With only 1.5B params, it can…

Liquid AI reposted

LFM2-Audio just dropped! It's a 1.5B model that understands and generates both text and audio

Inference 10x faster + quality on par with models 10x larger

Available today on @huggingface and our playground 🥳


Liquid AI reposted

Today, we expand our LFM2 family to audio. 👂👄

LFM2-Audio is an end-to-end audio-text omni foundation model, and delivers responsive, real-time conversation on-device at just 1.5B parameters.

One model. Seamless multimodal support. No chains.

> Speech-to-speech
>…

