
vLLM

@vllm_project

A high-throughput and memory-efficient inference and serving engine for LLMs. Join http://slack.vllm.ai to discuss together with the community!

Low-bit LLM quantization doesn’t have to mean painful accuracy trade-offs or massive tuning runs. Intel's AutoRound PTQ algorithm is now integrated into LLM Compressor, producing W4A16 compressed-tensor checkpoints you can serve directly with vLLM across Intel Xeon, Gaudi, Arc…

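A minimal sketch of what serving such a W4A16 compressed-tensors checkpoint could look like; the model ID below is a placeholder rather than an official release, and vLLM reads the quantization scheme from the checkpoint config, so no extra quantization flag is needed:

# placeholder checkpoint name; point this at your own AutoRound W4A16 export
vllm serve your-org/Llama-3.1-8B-Instruct-W4A16-AutoRound --dtype auto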

Congrats to the @MistralAI team on the launch of Devstral 2! 🚀 vLLM now delivers Day-0 support for the Devstral 2 Instruct models — optimized for agentic coding, deep codebase exploration, and multi-file editing at scale. Feel free to reach out 👇


Introducing the Devstral 2 coding model family. Two sizes, both open source. Also, meet Mistral Vibe, a native CLI, enabling end-to-end automation. 🧵
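A rough sketch of spinning up a Devstral 2 Instruct checkpoint with vLLM. The model ID is illustrative, and whether Devstral 2 needs the Mistral-format loading flags should be confirmed against the model card:

# model ID and flags are assumptions; check the official model card and vLLM docs
vllm serve mistralai/Devstral-2-Instruct \
  --tokenizer-mode mistral --config-format mistral --load-format mistral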



🎉Congrats to the @Zai_org team on the launch of GLM-4.6V and GLM-4.6V-Flash — with day-0 serving support in vLLM Recipes for teams who want to run them on their own GPUs. GLM-4.6V focuses on high-quality multimodal reasoning with long context and native tool/function calling,…


GLM-4.6V Series is here🚀
- GLM-4.6V (106B): flagship vision-language model with 128K context
- GLM-4.6V-Flash (9B): ultra-fast, lightweight version for local and low-latency workloads
First-ever native Function Calling in the GLM vision model family
Weights:…

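A hedged sketch of querying a GLM-4.6V deployment through vLLM's OpenAI-compatible server; the model name, port, and image URL are placeholders, and tool-calling flags from the vLLM Recipes entry are omitted here:

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "zai-org/GLM-4.6V",
    "messages": [{"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
      {"type": "text", "text": "Summarize this chart."}
    ]}]
  }'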


vLLM reposted

Big news for AI builders. Ministral 3, DeepSeek-V3.2, and vLLM v0.12.0 are now available on Docker Model Runner!
Run frontier-class, open-weights models with one command.
Read the announcement blog here: bit.ly/4a0vNvp
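For reference, Docker Model Runner is driven by the docker model CLI; the model tag below is an assumption, so check the announcement blog for the published catalog names:

# model tag is illustrative; see the linked blog post for the official entries
docker model pull ai/ministral-3
docker model run ai/ministral-3 "Write a haiku about inference engines."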


🚀 vLLM now offers an optimized inference recipe for DeepSeek-V3.2.
⚙️ Startup details
Run vLLM with DeepSeek-specific components:
--tokenizer-mode deepseek_v32 \
--tool-call-parser deepseek_v32
🧰 Usage tips
Enable thinking mode in vLLM:
–…


🚀 Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale — Reasoning-first models built for agents!
🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App, Web & API.
🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now.
📄 Tech…

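Folding the recipe above into a full launch command might look like this; the model path, tensor-parallel size, and the auto-tool-choice flag are illustrative additions, not part of the quoted recipe:

# parallelism and tool-choice settings are assumptions; follow the official recipe
vllm serve deepseek-ai/DeepSeek-V3.2 \
  --tokenizer-mode deepseek_v32 \
  --tool-call-parser deepseek_v32 \
  --enable-auto-tool-choice \
  --tensor-parallel-size 8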


We’re taking CUDA debugging to the next level. 🚀 Building on our previous work with CUDA Core Dumps, we are releasing a new guide on tracing hanging and complicated kernels down to the source code. As kernels get more complex (deep inlining, async memory access), standard…


Have you ever been developing CUDA kernels, had your tests run into illegal memory access (IMA for short), and had no idea how to debug it? We collaborated with the @nvidia team to investigate how CUDA core dumps can help; check out the blog post to learn more!…
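As a rough starting point (these are standard CUDA driver debugging knobs rather than anything specific to the blog post, and the script name is a placeholder):

# write a GPU core dump when a kernel hits an exception such as an IMA
export CUDA_ENABLE_COREDUMP_ON_EXCEPTION=1
export CUDA_COREDUMP_FILE=/tmp/gpu_core.%p
python run_my_workload.py

# then load the dump in cuda-gdb to inspect the faulting kernel
cuda-gdb -ex "target cudacore /tmp/gpu_core.<pid>"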



🤝 Proud to share the first production-ready vLLM plugin for Gaudi, developed in close collaboration with the Intel team and fully aligned with upstream vLLM. 🔧 This release is validated and ready for deployment, with support for the latest vLLM version coming soon. 📘 The…

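A tentative install sketch, assuming the plugin is published as a pip package named vllm-gaudi and is auto-discovered by vLLM's platform plugin mechanism; check the release notes for the exact package name and the vLLM version it pins:

# package name and model choice are assumptions; follow the official install guide
pip install vllm vllm-gaudi
vllm serve meta-llama/Llama-3.1-8B-Instruct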

LLM agents are powerful but can be slow at scale. @Snowflake's model-free SuffixDecoding from Arctic Inference now runs natively in vLLM, beating tuned N-gram speculation across concurrency levels while keeping CPU and memory overhead in check. Quick Start in vLLM:…

Suffix Decoding is at #NeurIPS2025 as a 🏅spotlight! It accelerates LLM inference for coding, agents, and RL. We also optimized its speculation speed by 7.4x and merged it into vLLM (incoming to SGLang). Talk to @GabrieleOliaro or me at poster #816 Friday 11am! Links in🧵

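A hedged sketch of enabling Suffix Decoding through vLLM's speculative-decoding config; the "suffix" method name is an assumption based on the feature's naming, so verify the exact key against the vLLM docs:

# the "method" value and model choice are assumptions, not confirmed settings
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --speculative-config '{"method": "suffix"}'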


🎉 Congratulations to the Mistral team on launching the Mistral 3 family! We’re proud to share that @MistralAI, @NVIDIAAIDev, @RedHat_AI, and vLLM worked closely together to deliver full Day-0 support for the entire Mistral 3 lineup.
This collaboration enabled:
• NVFP4…


Introducing the Mistral 3 family of models: Frontier intelligence at all sizes. Apache 2.0. Details in 🧵



More inference workloads now mix autoregressive and diffusion models in a single pipeline to process and generate multiple modalities - text, image, audio, and video. Today we’re releasing vLLM-Omni: an open-source framework that extends vLLM’s easy, fast, and cost-efficient…


vLLM reposted

Transformers v5's first release candidate is out 🔥 The biggest release of my life. It's been five years since the last major (v4). From 20 architectures to 400, 20k daily downloads to 3 million. The release is huge, w/ tokenization (no slow tokenizers!), modeling & processing.

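To try the release candidate, a pre-release pip install is the usual route; the version you get is whatever RC is current on PyPI:

# --pre lets pip resolve the v5 release candidate instead of the latest stable v4
pip install --pre --upgrade transformers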

Love this: a community contributor built vLLM Playground to make inferencing visible, interactive, and experiment-friendly. From visual config toggles to automatic command generation, from GPU/M-chip support to GuideLLM benchmarking + LLMCompressor integration — it brings the…


vLLM is proud to support @PrimeIntellect 's post-training of the INTELLECT-3 model🥰

Introducing INTELLECT-3: Scaling RL to a 100B+ MoE model on our end-to-end stack Achieving state-of-the-art performance for its size across math, code and reasoning Built using the same tools we put in your hands, from environments & evals, RL frameworks, sandboxes & more



vLLM reposted

Interested in how NVIDIA Nemotron-H is being optimized for high performance inference in @vllm_project? Join @RedHat and @NVIDIAAI next week as we cover the Nemotron-H architecture, vLLM support, optimized MoE kernels, async scheduling, and new nsys profiles. Join links below 👇

