#quantization search results
Happy to share our new study on the interaction between #optimizers and #quantization! We show how optimizer choice affects quantized model quality and why outlier-based metrics (like Kurtosis and MMR) often fail to predict performance. Paper: arxiv.org/pdf/2509.23500 [1/5]
![Paper overview figure](https://pbs.twimg.com/media/G2FauvIWkAA_kTB.jpg)
#COLM2025 #LLM #Quantization #ReasoningModels #EfficientAI 🚀 Thrilled to introduce our recent work at COLM 2025: “Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models”, presented today at Poster Session 3, #74.

Hey #techart, we all love using the step function to get two distinct bands, but what if you want more than two? Let's talk about #Quantization. Quantization/posterization is all about mapping a continuous range of values to discrete values.
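A minimal sketch of that mapping in Python (NumPy assumed; the function name, band count, and [0, 1] input range are illustrative choices, not from the tweet):

```python
import numpy as np

def posterize(x: np.ndarray, bands: int) -> np.ndarray:
    """Map values in [0, 1] onto `bands` discrete levels.

    floor(x * bands) picks a band index; dividing by (bands - 1)
    rescales that index back into [0, 1].
    """
    idx = np.clip(np.floor(x * bands), 0, bands - 1)
    return idx / (bands - 1)

# A smooth gradient collapses into 4 distinct bands.
gradient = np.linspace(0.0, 1.0, 9)
print(posterize(gradient, 4))  # only 4 unique output values
```

With `bands=2` this reduces to the familiar step function; raising `bands` gives as many distinct levels as you like.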

🚀 The 4-bit era has arrived! Meet #SVDQuant, our new W4A4 quantization paradigm for diffusion models. Now, 12B FLUX can run on a 16GB 4090 laptop without offloading—with 3x speedups over W4A16 models (like NF4) while maintaining top-tier image quality. #AI #Quantization. 1/7
Some #color #quantization is too much. Switching the #material model helps: fewer bits and better results. Instead of a stochastic pick between #diffuse and #specular in #reflections, just store either specular or diffuse based on whether coating > 50%. #clearcoat #metalness…
I completed my second short course: Quantization Fundamentals with Hugging Face! Believe me, it is beneficial to complete a few short courses before diving deep into a specialization; each course takes only 1-2 hours. #ai #LLMs #quantization #huggingface #DeepLearning

Completed: Multimodal RAG: Chat with Videos! I feel there is still a long way to go for video AI agents. learn.deeplearning.ai/accomplishment… #llm #rag #multimodal #lvlm #lanceDB
The whole weekend was full of pixels and dots 🟪🔵 What happened to you over the weekend? I wish you a nice Sunday evening and good night in advance. . #pixelart #piskelapp #quantization

'BitNet: 1-bit Pre-training for Large Language Models', by Hongyu Wang et al. jmlr.org/papers/v26/24-… #bitnet #bitlinear #quantization
Read #NewPaper: "Soft Quantization Using Entropic Regularization" by Rajmadan Lakshmanan and Alois Pichler. See more details at: mdpi.com/1099-4300/25/1… #quantization #approximation of measures #entropicregularization

Mind-blown by the elegance of FP4 bit-packing. 🤯 Halve your AI model's size by packing two 4-bit floats into a single uint8. No native FP4 type needed. byte = low_fp4 | (high_fp4 << 4) Simple and powerful. #AI #LLM #Quantization #DeepLearning #Optimization
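A quick sketch of that trick in Python (NumPy assumed; the 4-bit values are treated here as raw code indices, and decoding them into actual FP4 floats via a 16-entry lookup table is out of scope):

```python
import numpy as np

def pack_fp4(low: np.ndarray, high: np.ndarray) -> np.ndarray:
    """Pack two 4-bit codes per byte: byte = low | (high << 4)."""
    return ((low & 0x0F) | ((high & 0x0F) << 4)).astype(np.uint8)

def unpack_fp4(packed: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Recover both 4-bit codes from each packed byte."""
    return packed & 0x0F, packed >> 4

low = np.array([0b0011], dtype=np.uint8)
high = np.array([0b1010], dtype=np.uint8)
byte = pack_fp4(low, high)          # array([0b10100011])
lo, hi = unpack_fp4(byte)
assert lo[0] == 0b0011 and hi[0] == 0b1010
```

Because the packed byte is unsigned, `>> 4` recovers the high code with no sign-extension headaches.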
On the Role of Discrete Representation in Sparse Mixture of Experts Giang Do, Kha Pham, Hung Le, Truyen Tran tmlr.infinite-conf.org/paper_pages/GT… #quantization #sparse #vqmoe

Quantization is widely used in data compression, digital image processing, and signal processing. Learn more: i.mtr.cool/arecnwqhxx #Quantization

Revolutionizing AI with QLoRA, a new finetuning approach that enables finetuning a 65B model on a 48GB GPU, surpasses open-source models, and reaches 99.3% of ChatGPT's performance after just 24 hours of finetuning! 🚀💻📈 #AI #quantization andlukyane.com/blog/paper-rev… arxiv.org/abs/2305.14314

On the Role of Discrete Representation in Sparse Mixture of Experts Giang Do, Kha Pham, Hung Le, Truyen Tran. Action editor: Naigang Wang. openreview.net/forum?id=GTWKm… #quantization #sparse #vqmoe
South Korean AI chip startup DeepX’s secret sauce is in its #quantization technology. eetimes.com/deepx-hints-at…

How vendors cut computing requirements in half. #Quantization “reduces precision of the models” linkedin.com/posts/peter-go…

Sharing my review on QA-LoRA: a game-changing algorithm optimizing LLMs for efficient deployment on edge devices without sacrificing accuracy! 🚀 #NLP #quantization andlukyane.com/blog/paper-rev… arxiv.org/abs/2309.14717

Training Dynamics Impact Post-Training Quantization Robustness 👥 Albert Catalan-Tatjer, Niccolò Ajroldi & Jonas Geiping #AIResearch #MachineLearning #Quantization #NLP #DeepLearning 🔗 trendtoknow.ai

#Quantization is based on #chord structure and is a chord phenomenon. #physics,#quantum,#Theoreticalphysics,#stringtheory,#Quantumfield,#QuantumPhysics,#CERN,#painting,#music zenodo.org/records/172180…

Ever wondered how to make Large Language Models (LLMs) run faster and cheaper — without hurting performance? Let’s talk about #Quantization — the secret sauce behind efficient #LLM deployment 👇
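The arithmetic behind "cheaper" is easy to see with a back-of-envelope sketch (the 7B parameter count is illustrative, and real deployments also hold activations and KV cache in memory):

```python
# Weight memory for a 7-billion-parameter model at different precisions:
# memory scales linearly with bit-width.
params = 7e9
for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {params * bits / 8 / 1e9:5.1f} GB")
# fp32:  28.0 GB, fp16: 14.0 GB, int8: 7.0 GB, int4: 3.5 GB
```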

.@Huawei unveils #SINQ, an open-source #quantization tech that slashes #LLM memory by 60-70%, enabling deployment on affordable hardware like consumer #GPUs. Efficiency unlocked. 💡 #AI #OpenSource #ML @VentureBeat @carlfranzen venturebeat.com/ai/huaweis-new…
pytorch-playground - Predefined PyTorch models on popular datasets for learning and benchmarking. #PyTorch #DeepLearning #Quantization

SSTQ edges out OpenAI's MXFP4 with semantic-aware precision—10-20% better accuracy on key tasks! Complements xAI's distillation for 3x efficiency. Challenges GPT-OSS-120b, Grok-4; boosts LLaMA-2 13B, Mixtral 8x7B, Falcon 40B. OSS soon—DM for beta! #AI #LLM #Quantization 🔗
🚀 Unveiling SSTQ: Semantic-aware quantization slashing LLM inference costs by 80%! Unified sparsity, precision, & caching via novel math. OSS coming, enterprise beta open! DM for access. #AI #LLM #Quantization 🔗 zetareticula.com
Just implemented post-training quantization on a model and reduced its size by 44% (32-bit -> 8-bit). Learned tons: PTQ vs QAT, symmetric vs asymmetric quantization, and how a full PTQ pipeline (CLE, AdaRound, bias correction, activation calibration) fits together. #Pytorch #Quantization
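For readers new to the symmetric/asymmetric distinction mentioned above, a minimal int8 affine quantizer in Python (NumPy assumed; this sketches only the basic scheme, not the full CLE/AdaRound/bias-correction pipeline the tweet describes):

```python
import numpy as np

def quantize(x: np.ndarray, symmetric: bool = True):
    """Affine int8 quantization: q = round(x / scale) + zero_point."""
    if symmetric:
        scale = np.abs(x).max() / 127.0      # zero point pinned at 0
        zero_point = 0.0
    else:
        scale = (x.max() - x.min()) / 255.0  # cover the full [min, max] range
        zero_point = np.round(-128.0 - x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(1024).astype(np.float32)
for sym in (True, False):
    q, s, z = quantize(w, symmetric=sym)
    print(f"symmetric={sym}: max abs error {np.abs(w - dequantize(q, s, z)).max():.5f}")
```

Symmetric quantization keeps the zero point at 0 (simpler, cheaper matmuls, a common default for weights); the asymmetric variant spends an offset to cover skewed ranges such as post-ReLU activations.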
AQUA-LLM finds that quantization alone boosts efficiency but reduces accuracy and robustness; combining quantization with fine-tuning recovers performance and adversarial resistance. #AQUA-LLM #LLM #quantization arxiv.org/html/2509.1351…

Tchebichef Transforms for #Image #Compression Using Variable #Quantization #computerscience scirp.org/journal/PaperI…

#Riemann #Complex-Surface Unified-#Quantization 'Replacing' #FeynmanPI ow.ly/Z2Pk3013vs8 #Mathematics #Physics

#Quantization of SpaceTime Based on a '#SpaceTime Interval Operator'! ow.ly/4n27IV #Mathematics #Physics

58% of companies are not optimizing their machine learning models, despite the performance gains techniques like #quantization and #pruning can offer. Why? @mjohnk11 has a theory (hint: it's hard!) and is excited to demo easy model optimization solutions at @odsc next week.

Book review: Nanomaterials, Vol. 2: Quantization and Entropy #DeGruyter #Nanomaterials #Quantization #Entropy Read more here: ow.ly/H5se50CzZgj

arxiv.org/abs/2007.06919 Quantization for FCOS/RetinaNet. As usual(?), they wrestle with batch norm a bit and improve the fine-tuning process for the quantized model. The publicly available repo is interesting. (github.com/blueardour/mod…) #quantization #detection

HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes openreview.net/forum?id=1rowo… #autoencoder #quantization #autoencoding

Home sweet home. Back to my cozy stuff for this winter vacation. #Quantization #QuantumMechanics #Polarization #Oscilliator

📣Have you heard? Criteo is open-sourcing its automatic KNN indexing library. Get ready to build state-of-the-art indices with no effort! To know more, check our latest @Medium article 👉 tinyurl.com/3vceh9rd #Quantization #Faiss #knnindex #Python

Variation-aware Vision Transformer Quantization openreview.net/forum?id=yVyta… #cnns #cnn #quantization

Want to boost your #AI model’s performance? The top techniques, like pruning, #quantization, and hyperparameter tuning, can make a big difference: helping you run models faster and tackle issues like model drift. Know more: bit.ly/3UV4YzW #AImodel #DeepLearning #ARTiBA

Alhamdulillah, our paper has been published: Enabling Efficient Training of Convolutional Neural Networks for Histopathology Images #DeepLearning #Quantization #ComputationalPathology link.springer.com/chapter/10.100… And here is a summary of the paper: youtu.be/vao1KQaktWo

#Landau #quantization of nearly #degenerate bands and full symmetry classification of Landau level crossings #physics #EdSugg #science #condmat @APSPhysics go.aps.org/2YzuQWu

#mdpientropy Variations à la Fourier-Weyl-Wigner on #Quantizations of the Plane and the Half-Plane mdpi.com/1099-4300/20/1… @Entropy_MDPI #quantization #Wignerfunction #Fouriertransform
