Zihao Ye
@ye_combinator
惑 业 苦 (delusion, karma, suffering)
🚀 tilelang now fully embraces tvm-ffi! 💡 Not only is the compiler deeply powered by tvm_ffi; we've also replaced the old pybind bindings with tvm_ffi. ⚙️ With host codegen moving attribute checks from Python → C++, CPU overhead dropped 2.1×–3.8× and compile speed improved 2.1×–3.3×!
Wrote a blog post on why collective communication feels awkward for newer LLM workloads (disaggregated inference, RL weight updates, MoE), why people don't just use raw RDMA, how we approached it, and some behind-the-scenes stories. le.qun.ch/en/blog/2025/1…
The deadline for each kernel:
- Kernel #1 - NVFP4 Batched GEMV: Nov 10th - Nov 28th
- Kernel #2 - NVFP4 GEMM: Nov 29th - Dec 19th
- Kernel #3 - NVFP4 Gated Dual GEMM: Dec 20th - Jan 16th
- Kernel #4 - NVFP4 Grouped GEMM: Jan 17th - Feb 13th
Excited to see TVM-FFI finally become a standalone module that can benefit many projects across the deep learning ecosystem! Seven years ago, I was working on @DGLGraph, a graph learning framework with multi-backend support (PyTorch, MXNet, TensorFlow), alongside talented…
📢 Excited to introduce Apache TVM FFI, an open ABI and FFI for ML systems, enabling compilers, libraries, DSLs, and frameworks to naturally interoperate with each other. Ship one library across PyTorch, JAX, CuPy, etc., runnable from Python, C++, and Rust. tvm.apache.org/2025/10/21/tvm…
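To make the interop concrete, here is a minimal sketch of the zero-copy tensor exchange this kind of open FFI standardizes. It uses the DLPack protocol directly (the protocol TVM FFI builds its tensor passing on), not TVM FFI's own API:

```python
# Minimal DLPack interop sketch (requires torch>=1.11, numpy>=1.22).
# This illustrates the zero-copy exchange an open FFI standardizes;
# it is NOT TVM FFI's own API.
import numpy as np
import torch

def scale_inplace(dlpack_tensor) -> None:
    """A 'library' op that accepts tensors from any DLPack framework."""
    t = torch.from_dlpack(dlpack_tensor)  # shares memory, no copy
    t.mul_(2.0)

a = np.ones(4, dtype=np.float32)  # a NumPy array goes in ...
scale_inplace(a)
print(a)  # [2. 2. 2. 2.] -- mutated in place, proving no copy was made
```

The same `scale_inplace` would accept a CuPy or JAX array exposing `__dlpack__`, which is the point of shipping one library across frameworks.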
On-policy + Reverse KLD = MiniLLM (arxiv.org/abs/2306.08543). Really nice blog by @thinkymachines. Exciting to see it being offered as a service!
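For reference, the reverse KLD objective is the standard one, with the expectation taken under the student, which is exactly what makes it on-policy:

```latex
% Reverse KL: samples y are drawn from the student q_\theta itself,
% so the objective is evaluated on the student's own trajectories.
\min_\theta \; \mathrm{KL}\!\left(q_\theta \,\|\, p\right)
  = \min_\theta \; \mathbb{E}_{y \sim q_\theta(\cdot \mid x)}
    \left[ \log \frac{q_\theta(y \mid x)}{p(y \mid x)} \right]
```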
Our latest post explores on-policy distillation, a training approach that unites the error-correcting relevance of RL with the reward density of SFT. When training it for math reasoning and as an internal chat assistant, we find that on-policy distillation can outperform other…
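A minimal runnable sketch of one such step, assuming the generic recipe described above (toy models and the usual stop-gradient-through-sampling approximation; not Thinking Machines' actual code):

```python
# One on-policy distillation step on toy LMs: the student samples its own
# rollout, the teacher is scored at every visited position, and the loss
# is the dense per-token reverse KL at those positions.
import torch
import torch.nn as nn

torch.manual_seed(0)
V, D = 64, 32  # toy vocab size / width

class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb, self.head = nn.Embedding(V, D), nn.Linear(D, V)
    def forward(self, ids):  # [B, T] token ids -> [B, T, V] logits
        return self.head(self.emb(ids))

student, teacher = ToyLM(), ToyLM()

ids = torch.randint(0, V, (2, 4))  # prompts
with torch.no_grad():  # 1) on-policy: the student generates its own data
    for _ in range(8):
        nxt = torch.multinomial(student(ids)[:, -1].softmax(-1), 1)
        ids = torch.cat([ids, nxt], dim=1)

s_logp = student(ids).log_softmax(-1)  # 2) score the visited states
with torch.no_grad():
    t_logp = teacher(ids).log_softmax(-1)

# 3) Per-token reverse KL at student-visited states: dense like SFT,
# but computed where the student actually goes, like RL. (No gradient
# through which states were sampled -- the usual approximation.)
rkl = (s_logp.exp() * (s_logp - t_logp)).sum(-1)  # [B, T]
rkl.mean().backward()
```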
A small thread about how you should be drawing the contents of higher-dimensional tensors
🤔 Can AI optimize the systems it runs on? 🚀 Introducing FlashInfer-Bench, a workflow that makes AI systems self-improving with agents: - Standardized signature for LLM serving kernels - Implement kernels with your preferred language - Benchmark them against real-world serving…
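Purely as an illustration of what a standardized kernel signature buys you (these names are hypothetical, not FlashInfer-Bench's actual schema): once the contract is fixed, any human- or agent-written implementation can be gated on correctness and then benchmarked against the same reference.

```python
# Hypothetical standardized signature + correctness gate. Illustrative
# only; not FlashInfer-Bench's real interface.
from typing import Protocol
import torch

class DecodeAttention(Protocol):
    def __call__(self, q: torch.Tensor, k: torch.Tensor,
                 v: torch.Tensor) -> torch.Tensor: ...

def reference(q, k, v):
    # Slow but obviously-correct baseline every submission is checked against.
    s = (q @ k.transpose(-1, -2)) / q.shape[-1] ** 0.5
    return s.softmax(-1) @ v

def check(impl: DecodeAttention) -> None:
    q = torch.randn(8, 1, 64)  # decode: one query token per sequence
    k, v = torch.randn(8, 128, 64), torch.randn(8, 128, 64)
    torch.testing.assert_close(impl(q, k, v), reference(q, k, v))

check(reference)  # the reference trivially passes its own gate
```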
How to beat all compression using LLMs? ⚙️ Introducing LLMc — a lossless compressor built with LLMs. LLMc leverages the predictive power of LLMs to beat traditional compressors like Gzip and LZMA on natural language text. (1/4) 🔗 Blog Post: syfi.cs.washington.edu/blog/2025-10-0… 💻 Code:…
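The core trick (as in LLMzip or ts_zip; a hedged sketch, not necessarily LLMc's exact pipeline) is to encode each token by its rank under the model's next-token prediction: a good model puts the true token at rank 0 almost every time, so the rank stream is extremely skewed and entropy-codes far better than the raw text. A toy character-level version with a static stand-in "model":

```python
# Rank-coding sketch of LLM-based lossless compression. A real system
# uses a context-aware LLM and an arithmetic coder; here a static
# frequency order stands in for the model and zlib for the coder.
import zlib

ORDER = " etaoinshrdlucmfwypvbgkqjxz"      # stand-in "prediction"
predict = lambda ctx: ORDER                # an LLM would depend on ctx

def ranks(text):
    return bytes(predict(text[:i]).index(ch) for i, ch in enumerate(text))

def unranks(rks):
    text = ""
    for r in rks:
        text += predict(text)[r]           # invert: rank -> character
    return text

msg = "the rain in spain stays mainly in the plain"
rks = ranks(msg)
assert unranks(rks) == msg                 # lossless round-trip
print(len(msg), "->", len(zlib.compress(rks)), "bytes")
```

Note the decoder re-runs the model to invert the ranks, which is why the determinism work later in this feed matters so much for this use case.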
🚀 Follow-up to our last breakthrough on DeepSeek V3/R1 inference! On NVIDIA GB200 NVL72, SGLang now achieves 26k input tokens/s and 13k output tokens/s per GPU with FP8 attention + NVFP4 MoE - that’s a 3.8× / 4.8× speedup vs H100 settings. See the details in the 🧵 (1/4)
🚀 Introducing Sparse VideoGen2 (SVG2) — Pareto-frontier video generation acceleration with semantic-aware sparse attention! 🏆Spotlight paper accepted by #NeurIPS2025 ✅ Training-free & plug-and-play ✅ Up to 2.5× faster on HunyuanVideo, 1.9× faster on Wan 2.1 ✅ SOTA quality…
The main branch of sglang now supports deterministic inference with user-specified per-request seeds! It utilizes kernels from @thinkymachines and introduces new optimizations & coverage. Runs out of the box on most hardware backends and PyTorch versions.
SGLang now supports deterministic LLM inference! Building on @thinkymachines batch-invariant kernels, we integrated deterministic attention & sampling ops into a high-throughput engine - fully compatible with chunked prefill, CUDA graphs, radix cache, and non-greedy sampling. ✅…
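A toy probe of the batch-invariance property these kernels enforce (an illustration of the failure mode and the fix, not SGLang's or Thinking Machines' implementation): floating-point addition is non-associative, so if a kernel's reduction order depends on batch shape, the same logical row can produce different bits in prefill vs decode.

```python
# Batch-invariance probe: same row, different batch -> possibly
# different bits, because the matmul may pick a different reduction
# strategy per shape. Illustrative only.
import torch

torch.manual_seed(0)
a = torch.randn(1, 4096)
b = torch.randn(4096, 4096)

row_alone = a @ b                                      # "decode-like" shape
row_batched = (torch.cat([a, torch.randn(63, 4096)]) @ b)[:1]
print((row_alone - row_batched).abs().max().item())    # nonzero on many backends

# The batch-invariant fix: pin the reduction split size regardless of
# batch shape, so every row accumulates in the same order every time.
def invariant_dot(u, v, split=256):
    partials = (u * v).view(-1, split).sum(dim=1)      # fixed-order partials
    return partials.sum()

u, v = torch.randn(4096), torch.randn(4096)
print(invariant_dot(u, v).item())
```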
Glad that you enjoyed it! To be precise, it's EP64 on the inference side, around 30GB per inference GPU. So it's around 30GB / 1.3s = 23 GB/s.
Excited to share what friends and I have been working on at @Standard_Kernel We've raised from General Catalyst (@generalcatalyst), Felicis (@felicis), and a group of exceptional angels. We have some great H100 BF16 kernels in pure CUDA+PTX, featuring: - Matmul 102%-105% perf…
At Thinking Machines, our work includes collaborating with the broader research community. Today we are excited to share that we are building a vLLM team at @thinkymachines to advance open-source vLLM and serve frontier models. If you are interested, please DM me or @barret_zoph!…
Awesome work from @thinkymachines and @cHHillee! The importance of determinism might be underestimated. Like with LLM-based compression (bellard.org/ts_zip/) - the decoder re-runs the model to invert the code, so you really need bit-identical probabilities whether you're doing prefill/decode or different batching setups, or the decoded stream diverges. Here's…
Today Thinking Machines Lab is launching our research blog, Connectionism. Our first blog post is “Defeating Nondeterminism in LLM Inference” We believe that science is better when shared. Connectionism will cover topics as varied as our research is: from kernel numerics to…
This is the advantage of large NVLink domains or TPU topologies - the main reason to do PP is that you are bottlenecked on your DP comms and cannot scale TP further. But if you have high enough bandwidth across a large enough domain (like TPUs or NVL72), you don't need to do PP…
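A back-of-envelope sketch of why bandwidth is the deciding factor (all numbers below are illustrative assumptions, not measurements):

```python
# TP costs ~2 all-reduces of activations per layer (attention + MLP).
# Whether TP keeps scaling, instead of falling back to PP, comes down
# to how fast the domain can move those bytes. Assumed numbers only.
hidden = 8192        # assumed model width
tokens = 4096        # tokens in flight per forward
bytes_per = 2        # bf16 activations
layers = 80
ring_factor = 2      # ring all-reduce moves ~2x the payload

ar_bytes = ring_factor * 2 * layers * tokens * hidden * bytes_per

for name, bw in [("PCIe-class 64 GB/s", 64e9),
                 ("NVL72-class 900 GB/s", 900e9)]:
    print(f"{name}: {ar_bytes / bw * 1e3:6.1f} ms of comms per forward")
```

With the fast domain, the all-reduce time shrinks by the bandwidth ratio, which is exactly the regime where skipping PP stops costing you.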
🚀 Presenting LiteASR: a method that halves the compute cost of speech encoders by leveraging low-rank approximation of activations. LiteASR is accepted to #EMNLP2025 (main) @emnlpmeeting
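The general idea behind low-rank activation approximation (a sketch of the technique family; not necessarily LiteASR's exact procedure): if a layer's inputs empirically lie near a rank-r subspace, you can fold a calibration-derived projection into the weights and run the layer in the small basis.

```python
# Low-rank-activation sketch: PCA the calibration activations, fold the
# top-r projection into the weights, and compute in the r-dim basis.
# Cost per token drops from d_in*d_out to (d_in + d_out)*r multiplies.
import torch

torch.manual_seed(0)
d_in, d_out, r, n = 512, 512, 64, 10_000
W = torch.randn(d_out, d_in) / d_in ** 0.5

basis = torch.randn(d_in, r)            # simulate low-rank activations
acts = torch.randn(n, r) @ basis.T      # calibration set, rank <= r

_, _, Vh = torch.linalg.svd(acts, full_matrices=False)
P = Vh[:r].T                            # [d_in, r] top-r activation basis
W_small = W @ P                         # fold the projection into W

x = torch.randn(1, r) @ basis.T         # a fresh activation, same subspace
full = x @ W.T
approx = (x @ P) @ W_small.T            # cheap path through the r-dim basis
print((full - approx).abs().max().item())  # tiny: near-lossless here
```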
Sub-10-microsecond Haskell Sudoku solver implemented in hardware. unsafeperform.io/papers/2025-hs…