
Sparsity in LLMs Workshop at ICLR 2025
@sparseLLMs
Workshop on Sparsity in LLMs: Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference @iclr_conf 2025.
We are happy to announce that the Workshop on Sparsity in LLMs will take place @iclr_conf in Singapore! For details: sparsellm.org Organizers: @TianlongChen4, @utkuevci, @yanii, @BerivanISIK, @Shiwei_Liu66, @adnan_ahmad1306, @alexinowak

@AggieInCA giving the first Oral talk at #ICLR2025 @sparseLLMs workshop

Presenting shortly! 👉🏼 Mol-MoE: leveraging model merging and RLHF for test-time steering of molecular properties. 📆 today, 11:15am to 12:15pm 📍 Poster session #1, GEM Bio Workshop @gembioworkshop @sparseLLMs #ICLR #ICLR2025
In Mol-MoE, we propose a framework to train router networks to reuse property-specific molecule generators. This makes it possible to personalize drug generation at test time by following property preferences! We discuss some challenges. @proceduralia @pierrelux arxiv.org/abs/2502.05633
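To make the idea of routing over property-specific generators concrete, here is a minimal sketch of a preference-conditioned router that mixes the outputs of frozen expert models. It is only an illustration of the general mechanism, not the Mol-MoE implementation; all names and dimensions below are hypothetical.

```python
# Minimal sketch (not the Mol-MoE code): a learned router that mixes hidden
# states from frozen, property-specific expert generators according to
# user-supplied property preferences.
import torch
import torch.nn as nn

class PreferenceRouter(nn.Module):
    def __init__(self, num_experts: int, pref_dim: int):
        super().__init__()
        # Maps a property-preference vector to per-expert mixing weights.
        self.gate = nn.Linear(pref_dim, num_experts)

    def forward(self, expert_states: torch.Tensor, prefs: torch.Tensor) -> torch.Tensor:
        # expert_states: (batch, num_experts, hidden_dim), outputs of frozen experts
        # prefs:         (batch, pref_dim), desired property trade-offs
        weights = torch.softmax(self.gate(prefs), dim=-1)          # (batch, num_experts)
        return torch.einsum("be,beh->bh", weights, expert_states)  # mixed hidden state

# Usage: combine three property-specific generators for a batch of two prompts.
router = PreferenceRouter(num_experts=3, pref_dim=3)
states = torch.randn(2, 3, 8)   # stand-in for expert hidden states
prefs = torch.tensor([[1.0, 0.0, 0.0], [0.3, 0.3, 0.4]])
mixed = router(states, prefs)   # (2, 8)
```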
@black_samorez giving his oral talk at the #ICLR2025 @sparseLLMs workshop


We are presenting “Prefix and output length-aware scheduling for efficient online LLM inference” at the ICLR 2025 (@iclr_conf) Sparsity in LLMs workshop (@sparseLLMs). 🪫 Challenge: LLM inference in data centers benefits from data parallelism. How can we exploit patterns in…
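As a rough illustration of what length-aware scheduling can look like, here is a minimal sketch that batches requests by predicted output length so short requests are not held back by long ones. This is not the paper's scheduler; the length predictor and batching rule are hypothetical stand-ins.

```python
# Minimal sketch (illustrative only): group online inference requests by a
# predicted output length so batches finish together and free capacity sooner.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    predicted_output_len: int   # assumed to come from some length predictor

def schedule(requests: list[Request], batch_size: int) -> list[list[Request]]:
    # Sort by predicted output length so a batch's requests have similar lengths.
    ordered = sorted(requests, key=lambda r: r.predicted_output_len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

reqs = [Request("a", 120), Request("b", 8), Request("c", 512), Request("d", 16)]
for batch in schedule(reqs, batch_size=2):
    print([r.predicted_output_len for r in batch])
```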

@tydsh giving his invited talk at the #ICLR2025 @sparseLLMs workshop!



our workshop on sparsity in LLMs is starting soon in Hall 4.7! we’re starting strong with an invited talk from @DAlistarh and an exciting oral on scaling laws for MoEs!

@DAlistarh giving his invited talk at the #ICLR2025 @sparseLLMs workshop

Our ICLR 2025 Workshop on Sparsity in LLMs (@sparseLLMs) kicks off with a talk by @DAlistarh on near-lossless (~1% perf drop) LLM compression using quantization across various benchmarks.
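For readers unfamiliar with quantization-based compression, here is a minimal sketch of group-wise symmetric 4-bit weight quantization. It only illustrates the general idea behind this line of work; it is not the speaker's method, and the group size and bit-width are illustrative choices.

```python
# Minimal sketch: group-wise symmetric int4 weight quantization and dequantization.
import numpy as np

def quantize_int4(w: np.ndarray, group_size: int = 128):
    w = w.reshape(-1, group_size)                       # split weights into groups
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # int4 symmetric range [-7, 7]
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize(q, s)
print("mean abs error:", np.abs(w - w_hat).mean())
```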

First poster session at the #ICLR2025 @sparseLLMs workshop




a PACKED hall for @tydsh's talk at our sparsity in LLMs workshop - not surprising! we have another oral right after this, and then we'll have the first of 2 poster sessions before lunch! @iclr_conf

@DAlistarh giving his invited talk at the #ICLR2025 @sparseLLMs workshop now!

#ICLR2025 @sparseLLMs Workshop Panel with @PavloMolchanov @ayazdanb @DAlistarh @realDanFu @YangYou1991 and Olivia Hsu, moderated by @PandaAshwinee



If you’re at #ICLR2025, go watch @AggieInCA give an oral presentation at the @SparseLLMs workshop on scaling laws for pretraining MoE LMs! Had a great time co-leading this project with @samira_abnar & @AggieInCA at Apple MLR last summer. When: Sun Apr 27, 9:30a Where: Hall 4-07
🚨 One question that has always intrigued me is the role of different ways to increase a model's capacity: parameters, parallelizable compute, or sequential compute? We explored this through the lens of MoEs:

Sparse LLM workshop will run on Sunday with two poster sessions, a mentoring session, 4 spotlight talks, 4 invited talks and a panel session. We'll host an amazing lineup of researchers: @DAlistarh @vithursant19 @tydsh @ayazdanb @gkdziugaite Olivia Hsu @PavloMolchanov Yang You

I will be travelling to Singapore 🇸🇬 this week for the ICLR 2025 Workshop on Sparsity in LLMs (SLLM) that I'm co-organizing! We have an exciting lineup of invited speakers and panelists including @DAlistarh, @gkdziugaite, @PavloMolchanov, @vithursant19, @tydsh and @ayazdanb.
Check out this post that has information about research from Apple that will be presented at ICLR 2025 in 🇸🇬 this week. I will be at ICLR and will be presenting some of our work (led by @samira_abnar) at SLLM @sparseLLMs workshop. Happy to chat about JEPAs as well!
New post: "Apple Machine Learning Research at @iclr_conf 2025" - highlighting a selection of the many Apple #ML research papers to be presented at the conference this week: machinelearning.apple.com/research/iclr-…
Our QuEST paper was selected for Oral Presentation at ICLR @sparseLLMs workshop! QuEST is the first algorithm with Pareto-optimal LLM training for 4bit weights/activations, and can even train accurate 1-bit LLMs. Paper: arxiv.org/abs/2502.05003 Code: github.com/IST-DASLab/QuE…
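To give a flavor of the setting QuEST addresses, here is a minimal sketch of quantization-aware training with a straight-through estimator for low-bit weights and activations. This only illustrates the general setup, not QuEST's actual algorithm; see the paper and repo above for the real method.

```python
# Minimal sketch: fake-quantized forward pass with a straight-through gradient.
import torch
import torch.nn as nn

def fake_quant(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    qmax = 2 ** (bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
    return x + (q - x).detach()   # forward: quantized; backward: identity

class QuantLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quantize both activations and weights before the matmul.
        return fake_quant(x) @ fake_quant(self.weight).t()

layer = QuantLinear(16, 16)
loss = layer(torch.randn(4, 16)).pow(2).mean()
loss.backward()   # gradients flow through the straight-through estimator
```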