#quantization search results
Happy to share our new study on the interaction between #optimizers and #quantization! We show how optimizer choice affects quantized model quality and why outlier-based metrics (like Kurtosis and MMR) often fail to predict performance. Paper: arxiv.org/pdf/2509.23500 [1/5]
#COLM2025 #LLM #Quantization #ReasoningModels #EfficientAI 🚀 Thrilled to introduce our recent work at COLM 2025: “Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models”, presented today at Poster Session 3, #74.

🚀 The 4-bit era has arrived! Meet #SVDQuant, our new W4A4 quantization paradigm for diffusion models. Now, 12B FLUX can run on a 16GB 4090 laptop without offloading—with 3x speedups over W4A16 models (like NF4) while maintaining top-tier image quality. #AI #Quantization. 1/7
Hey #techart, we all love using the step function to get two distinct bands, but what if you want more than two? Let's talk about #Quantization. Quantization/posterization is all about mapping a continuous range of values onto a set of discrete values.
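A minimal sketch of that mapping, assuming input values normalized to [0, 1] (NumPy here for readability, but the same expression drops straight into a shader):

```python
import numpy as np

def posterize(x, bands):
    """Map values in [0, 1] onto `bands` evenly spaced discrete levels."""
    idx = np.clip(np.floor(x * bands), 0, bands - 1)  # band index 0..bands-1
    return idx / (bands - 1)                           # back into [0, 1]

gradient = np.linspace(0.0, 1.0, 9)
print(posterize(gradient, 4))  # only four distinct output values
```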

'BitNet: 1-bit Pre-training for Large Language Models', by Hongyu Wang et al. jmlr.org/papers/v26/24-… #bitnet #bitlinear #quantization
Some #color #quantization is too much. Switching the #material model helps: fewer bits and better results. Instead of a stochastic pick between #diffuse and #specular in #reflections, just store either specular or diffuse based on whether coating > 50%. #clearcoat #metalness…
I completed my second short course: Quantization Fundamentals with Hugging Face! Believe me, it is beneficial to complete a few short courses before diving deep into a specialization; each course takes only 1-2 hours. #ai #LLMs #quantization #huggingface #DeepLearning

Completed: Multimodal RAG: Chat with Videos! I feel there is still a long way to go for video AI agents. learn.deeplearning.ai/accomplishment… #llm #rag #multimodal #lvlm #lanceDB
The whole weekend was full of pixels and dots 🟪🔵 What did you get up to over the weekend? I wish you a nice Sunday evening and, in advance, a good night. #pixelart #piskelapp #quantization

On the Role of Discrete Representation in Sparse Mixture of Experts Giang Do, Kha Pham, Hung Le, Truyen Tran tmlr.infinite-conf.org/paper_pages/GT… #quantization #sparse #vqmoe

Mind-blown by the elegance of FP4 bit-packing. 🤯 Halve your AI model's size by packing two 4-bit floats into a single uint8. No native FP4 type needed. byte = low_fp4 | (high_fp4 << 4) Simple and powerful. #AI #LLM #Quantization #DeepLearning #Optimization
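A minimal NumPy sketch of the packing trick described above; it assumes the tensor has already been quantized to 4-bit codes in the range 0-15, and the helper names are illustrative:

```python
import numpy as np

def pack_fp4(codes: np.ndarray) -> np.ndarray:
    """Pack pairs of 4-bit codes (values 0..15) into single uint8 bytes."""
    codes = codes.astype(np.uint8)
    assert codes.size % 2 == 0, "need an even number of 4-bit values"
    low, high = codes[0::2], codes[1::2]
    return (low | (high << 4)).astype(np.uint8)   # byte = low_fp4 | (high_fp4 << 4)

def unpack_fp4(packed: np.ndarray) -> np.ndarray:
    """Recover the interleaved 4-bit codes from the packed bytes."""
    low, high = packed & 0x0F, (packed >> 4) & 0x0F
    out = np.empty(packed.size * 2, dtype=np.uint8)
    out[0::2], out[1::2] = low, high
    return out

codes = np.array([1, 15, 7, 2], dtype=np.uint8)   # 4-bit quantization codes
packed = pack_fp4(codes)                          # 2 bytes instead of 4
assert np.array_equal(unpack_fp4(packed), codes)
```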
Quantization is widely used in data compression, digital image processing, and signal processing. Learn more: i.mtr.cool/arecnwqhxx #Quantization

Revolutionizing AI with QLoRA, a new finetuning approach enabling a 65B model on a 48GB GPU, surpassing open-source models, and reaching 99.3% of ChatGPT's performance in just 24 hours of finetuning! 🚀💻📈 #AI #quantization andlukyane.com/blog/paper-rev… arxiv.org/abs/2305.14314

Read #NewPaper: "Soft Quantization Using Entropic Regularization" by Rajmadan Lakshmanan and Alois Pichler. See more details at: mdpi.com/1099-4300/25/1… #quantization #approximation of measures #entropicregularization

On the Role of Discrete Representation in Sparse Mixture of Experts Giang Do, Kha Pham, Hung Le, Truyen Tran. Action editor: Naigang Wang. openreview.net/forum?id=GTWKm… #quantization #sparse #vqmoe
How vendors cut computing requirements in half. #Quantization “reduces precision of the models” linkedin.com/posts/peter-go…

South Korean AI chip startup DeepX’s secret sauce is in its #quantization technology. eetimes.com/deepx-hints-at…

🚀 This weekend, I’m putting #Microsoft’s #Phi-4 model to the test—locally and for free! Opting for the FP16 #quantization for max precision. 🧮 Known for excelling in #math, it might shake things up. No tool calling yet, but let’s see! Watch for my findings. #AI #Phi4Model 🤓

Bitsandbytes unlocks accessible LLMs with INT8 and 4-bit quantization in PyTorch. Huge for memory-efficient training and inference. A must-know for scaling your LLM projects. #LLMs #Python #Quantization
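A hedged sketch of what 4-bit loading with bitsandbytes looks like through the transformers integration; it assumes a recent transformers release with bitsandbytes installed, and the checkpoint name is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit (NF4)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```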
Training Dynamics Impact Post-Training Quantization Robustness 👥 Albert Catalan-Tatjer, Niccolò Ajroldi & Jonas Geiping #AIResearch #MachineLearning #Quantization #NLP #DeepLearning 🔗 trendtoknow.ai

#Quantization is based on #chord structure and is a chord phenomenon. #physics,#quantum,#Theoreticalphysics,#stringtheory,#Quantumfield,#QuantumPhysics,#CERN,#painting,#music zenodo.org/records/172180…
Ever wondered how to make Large Language Models (LLMs) run faster and cheaper — without hurting performance? Let’s talk about #Quantization — the secret sauce behind efficient #LLM deployment 👇

.@Huawei unveils #SINQ, an open-source #quantization tech that slashes #LLM memory by 60-70%, enabling deployment on affordable hardware like consumer #GPUs. Efficiency unlocked. 💡 #AI #OpenSource #ML @VentureBeat @carlfranzen venturebeat.com/ai/huaweis-new…
pytorch-playground - Predefined PyTorch models on popular datasets for learning and benchmarking. #PyTorch #DeepLearning #Quantization

SSTQ edges out OpenAI's MXFP4 with semantic-aware precision—10-20% better accuracy on key tasks! Complements xAI's distillation for 3x efficiency. Challenges GPT-OSS-120b, Grok-4; boosts LLaMA-2 13B, Mixtral 8x7B, Falcon 40B. OSS soon—DM for beta! #AI #LLM #Quantization 🔗
🚀 Unveiling SSTQ: Semantic-aware quantization slashing LLM inference costs by 80%! Unified sparsity, precision, & caching via novel math. OSS coming, enterprise beta open! DM for access. #AI #LLM #Quantization 🔗 zetareticula.com
Just implemented Post-Training Quantization on a model and reduced its size by 44% (32-bit -> 8-bit). Learned tons: PTQ vs QAT, symmetric vs asymmetric quantization, and how a full PTQ pipeline (CLE, AdaRound, bias correction, activation calibration) fits together. #Pytorch #Quantization
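For the symmetric vs asymmetric distinction mentioned above, here is a minimal per-tensor int8 sketch in PyTorch (illustrative only; it is not the full CLE/AdaRound pipeline from the post):

```python
import torch

def quantize_symmetric(x, bits=8):
    """Symmetric: zero-point fixed at 0, scale taken from the max absolute value."""
    qmax = 2 ** (bits - 1) - 1                       # 127 for int8
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q, scale

def quantize_asymmetric(x, bits=8):
    """Asymmetric: scale and zero-point derived from the observed min/max range."""
    qmin, qmax = 0, 2 ** bits - 1                    # 0..255 for uint8
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = torch.round(-x.min() / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return q, scale, zero_point

w = torch.randn(256)
q_sym, s = quantize_symmetric(w)
print((w - q_sym * s).abs().max())                   # worst-case rounding error
```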
Tchebichef Transforms for #Image #Compression Using Variable #Quantization #computerscience scirp.org/journal/PaperI…

#Riemann #Complex-Surface Unified-#Quantization 'Replacing' #FeynmanPI ow.ly/Z2Pk3013vs8 #Mathematics #Physics


#Quantization of SpaceTime Based on a '#SpaceTime Interval Operator'! ow.ly/4n27IV #Mathematics #Physics

58% of companies are not optimizing their machine learning models, despite the performance gains techniques like #quantization and #pruning can offer. Why? @mjohnk11 has a theory (hint: it's hard!) and is excited to demo easy model optimization solutions at @odsc next week.

Book review: Nanomaterials, Vol. 2: Quantization and Entropy #DeGruyter #Nanomaterials #Quantization #Entropy Read more here: ow.ly/H5se50CzZgj

HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes openreview.net/forum?id=1rowo… #autoencoder #quantization #autoencoding

arxiv.org/abs/2007.06919 Quantization for FCOS/RetinaNet. As always(?), they wrestle a bit with batch norm and improve the fine-tuning process for the quantized model. The publicly released repo is interesting. (github.com/blueardour/mod…) #quantization #detection

Want to boost your #AI model’s performance? The top techniques, like pruning, #quantization, and hyperparameter tuning, can make a big difference: helping you run models faster and tackle issues like model drift. Know more: bit.ly/3UV4YzW #AImodel #DeepLearning #ARTiBA

Home sweet home. Back to my cozy stuff for this winter vacation. #Quantization #QuantumMechanics #Polarization #Oscillator

Thank God, our paper has been published: Enabling Efficient Training of Convolutional Neural Networks for Histopathology Images #DeepLearning #Quantization #ComputationalPathology link.springer.com/chapter/10.100… And here is a summary of the paper: youtu.be/vao1KQaktWo

Quantized LLMs have pros and cons. The advantages are smaller models, increased scalability, and faster inference. The main disadvantage is a potential loss of accuracy. #NLP #Quantization

📣Have you heard? Criteo is open-sourcing its automatic KNN indexing library. Get ready to build state-of-the-art indices with no effort! To know more, check our latest @Medium article 👉 tinyurl.com/3vceh9rd #Quantization #Faiss #knnindex #Python

Exploiting Latent Properties to Optimize Neural Codecs openreview.net/forum?id=Sv0FW… #codecs #decoding #quantization

Variation-aware Vision Transformer Quantization openreview.net/forum?id=yVyta… #cnns #cnn #quantization
