#quantization search results

Happy to share our new study on the interaction between #optimizers and #quantization! We show how optimizer choice affects quantized model quality and why outlier-based metrics (like Kurtosis and MMR) often fail to predict performance. Paper: arxiv.org/pdf/2509.23500 [1/5]


#COLM2025 #LLM #Quantization #ReasoningModels #EfficientAI 🚀 Thrilled to introduce our recent work at COLM 2025: “Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models”, presented today at Poster Session 3, #74.


Hey #techart, We all love to use the step function to get two distinct bands but what if you want more than 2 distinct bands? let's talk about #Quantization. Quantization/Posterization is all about mapping a continuous range of values to discrete values.

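The banding idea above can be sketched in a few lines of Python (the `posterize` helper name is illustrative; in a shader the same idea is typically written as `floor(v * bands) / (bands - 1)`):

```python
import math

def posterize(v: float, bands: int) -> float:
    """Map a continuous value v in [0, 1] to one of `bands` discrete levels."""
    # floor picks the band index; dividing by (bands - 1) spreads the
    # discrete levels back over the full [0, 1] output range.
    level = min(math.floor(v * bands), bands - 1)  # clamp v == 1.0 into the top band
    return level / (bands - 1)
```

With `bands=2` this reduces to the familiar step function; any larger count gives the extra bands the tweet is after.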

🚀 The 4-bit era has arrived! Meet #SVDQuant, our new W4A4 quantization paradigm for diffusion models. Now, 12B FLUX can run on a 16GB 4090 laptop without offloading—with 3x speedups over W4A16 models (like NF4) while maintaining top-tier image quality.  #AI #Quantization. 1/7


Some #color #quantization is too much. Switching the #material model helps: fewer bits and better results. Instead of a stochastic pick between #diffuse and #specular in #reflections, just store either specular or diffuse based on whether coating > 50%. #clearcoat #metalness
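The deterministic pick described above can be sketched as follows (a minimal sketch with a hypothetical `pick_lobe` helper, assuming the 50% clearcoat threshold from the tweet):

```python
def pick_lobe(coating: float, specular, diffuse):
    """Deterministic lobe selection for reflection storage: instead of a
    stochastic pick between diffuse and specular, store whichever lobe
    the clearcoat weight favours. Avoids noise and needs fewer bits."""
    return specular if coating > 0.5 else diffuse
```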


I completed my second short course: Quantization Fundamentals with Hugging Face! Believe me, it is beneficial to complete a few short courses before diving deep into a specialization; it takes only 1-2 hours in a course. #ai #LLMs #quantization #huggingface #DeepLearning


Completed: Multimodal RAG: Chat with Videos! I feel there is still a long way to go for video AI agents. learn.deeplearning.ai/accomplishment… #llm #rag #multimodal #lvlm #lanceDB



The whole weekend was full of pixels and dots 🟪🔵 What happened to you over the weekend? I wish you a nice Sunday evening and good night in advance. . #pixelart #piskelapp #quantization


'BitNet: 1-bit Pre-training for Large Language Models', by Hongyu Wang et al. jmlr.org/papers/v26/24-… #bitnet #bitlinear #quantization


Read #NewPaper: "Soft Quantization Using Entropic Regularization" by Rajmadan Lakshmanan and Alois Pichler. See more details at: mdpi.com/1099-4300/25/1… #quantization #approximation of measures #entropicregularization


Mind-blown by the elegance of FP4 bit-packing. 🤯 Halve your AI model's size by packing two 4-bit floats into a single uint8. No native FP4 type needed. byte = low_fp4 | (high_fp4 << 4) Simple and powerful. #AI #LLM #Quantization #DeepLearning #Optimization
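The packing line above, plus the matching unpack, can be sketched in Python (helper names are illustrative; the 4-bit FP4 payloads are treated as opaque nibbles):

```python
def pack_fp4_pair(low_fp4: int, high_fp4: int) -> int:
    """Pack two 4-bit values into one byte: low nibble first."""
    assert 0 <= low_fp4 < 16 and 0 <= high_fp4 < 16
    return low_fp4 | (high_fp4 << 4)

def unpack_fp4_pair(byte: int) -> tuple[int, int]:
    """Recover the two 4-bit values from a packed byte."""
    return byte & 0x0F, byte >> 4
```

For example, `pack_fp4_pair(0x3, 0xA)` yields `0xA3`, and unpacking `0xA3` returns `(0x3, 0xA)`.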


On the Role of Discrete Representation in Sparse Mixture of Experts Giang Do, Kha Pham, Hung Le, Truyen Tran tmlr.infinite-conf.org/paper_pages/GT… #quantization #sparse #vqmoe


Quantization is widely used in data compression, digital image processing, and signal processing. Learn more: i.mtr.cool/arecnwqhxx #Quantization
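As a minimal illustration of the signal-processing sense of quantization, a uniform mid-tread quantizer over [-1, 1] might look like this (a generic sketch, not any particular library's API):

```python
import numpy as np

def uniform_quantize(signal: np.ndarray, n_bits: int) -> np.ndarray:
    """Uniformly quantize a signal in [-1, 1] to 2**n_bits levels and
    return the reconstructed (dequantized) signal."""
    levels = 2 ** n_bits
    step = 2.0 / levels                                        # quantization step size
    idx = np.clip(np.round(signal / step),                     # nearest level index
                  -levels // 2, levels // 2 - 1)               # keep within the code range
    return idx * step
```

Rounding to the nearest level bounds the error per sample at half a step (one full step at the clipped upper edge).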


Revolutionizing AI with QLoRA, a new finetuning approach enabling a 65B model on a 48GB GPU, surpassing open-source models, and reaching 99.3% of ChatGPT's performance in just 24 hours of finetuning! 🚀💻📈 #AI #quantization andlukyane.com/blog/paper-rev… arxiv.org/abs/2305.14314


On the Role of Discrete Representation in Sparse Mixture of Experts Giang Do, Kha Pham, Hung Le, Truyen Tran. Action editor: Naigang Wang. openreview.net/forum?id=GTWKm… #quantization #sparse #vqmoe


South Korean AI chip startup DeepX’s secret sauce is in its #quantization technology. eetimes.com/deepx-hints-at…


How vendors cut computing requirements in half. #Quantization “reduces precision of the models” linkedin.com/posts/peter-go…


Sharing my review on QA-LoRA: a game-changing algorithm optimizing LLMs for efficient deployment on edge devices without sacrificing accuracy! 🚀 #NLP #quantization andlukyane.com/blog/paper-rev… arxiv.org/abs/2309.14717


Training Dynamics Impact Post-Training Quantization Robustness 👥 Albert Catalan-Tatjer, Niccolò Ajroldi & Jonas Geiping #AIResearch #MachineLearning #Quantization #NLP #DeepLearning 🔗 trendtoknow.ai






Ever wondered how to make Large Language Models (LLMs) run faster and cheaper — without hurting performance? Let’s talk about #Quantization — the secret sauce behind efficient #LLM deployment 👇





.@Huawei unveils #SINQ, an open-source #quantization tech that slashes #LLM memory by 60-70%, enabling deployment on affordable hardware like consumer #GPUs. Efficiency unlocked. 💡 #AI #OpenSource #ML @VentureBeat @carlfranzen venturebeat.com/ai/huaweis-new…


pytorch-playground - Predefined PyTorch models on popular datasets for learning and benchmarking. #PyTorch #DeepLearning #Quantization



SSTQ edges out OpenAI's MXFP4 with semantic-aware precision—10-20% better accuracy on key tasks! Complements xAI's distillation for 3x efficiency. Challenges GPT-OSS-120b, Grok-4; boosts LLaMA-2 13B, Mixtral 8x7B, Falcon 40B. OSS soon—DM for beta! #AI #LLM #Quantization 🔗


🚀 Unveiling SSTQ: Semantic-aware quantization slashing LLM inference costs by 80%! Unified sparsity, precision, & caching via novel math. OSS coming, enterprise beta open! DM for access. #AI #LLM #Quantization 🔗 zetareticula.com


Just implemented Post-Training Quantization on a model and reduced its size by 44% (32-bit -> 8-bit). Learned tons: PTQ vs QAT, symmetric vs asymmetric quantization, and how a full PTQ pipeline (CLE, AdaRound, bias correction, activation calibration) fits together. #Pytorch #Quantization
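The symmetric vs. asymmetric distinction mentioned above can be sketched as follows (a simplified per-tensor sketch under min/max calibration, not the PyTorch quantization API):

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, n_bits: int = 8):
    """Symmetric int quantization: no zero-point, so 0.0 maps exactly to 0."""
    qmax = 2 ** (n_bits - 1) - 1                    # 127 for int8
    scale = np.abs(x).max() / qmax                  # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def quantize_asymmetric(x: np.ndarray, n_bits: int = 8):
    """Asymmetric int quantization: a zero-point shifts the grid so the
    full uint8 range covers [x.min(), x.max()]."""
    qmin, qmax = 0, 2 ** n_bits - 1                 # uint8 range
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(np.round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point
```

Symmetric quantization wastes range on skewed tensors (e.g. post-ReLU activations) but keeps the kernel math simple; asymmetric spends the extra zero-point to use all levels.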


AQUA-LLM finds that quantization alone boosts efficiency but reduces accuracy and robustness; combining quantization with fine-tuning recovers performance and adversarial resistance. #AQUA-LLM #LLM #quantization arxiv.org/html/2509.1351…





Let's do a simple #quantization ... Practice ... #tensorflow meetup #Zurich


58% of companies are not optimizing their machine learning models, despite the performance gains techniques like #quantization and #pruning can offer. Why? @mjohnk11 has a theory (hint: it's hard!) and is excited to demo easy model optimization solutions at @odsc next week.


What is LLM Quantization and How to Use It? sabrepc.com/blog/deep-lear… #LLM #Quantization


Book review: Nanomaterials, Vol. 2: Quantization and Entropy #DeGruyter #Nanomaterials #Quantization #Entropy Read more here: ow.ly/H5se50CzZgj


arxiv.org/abs/2007.06919 Quantization for FCOS/RetinaNet. As usual(?), they wrestle a bit with batch norm and improve the fine-tuning procedure for the quantized model. The publicly released repo is interesting. (github.com/blueardour/mod…) #quantization #detection


HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes openreview.net/forum?id=1rowo… #autoencoder #quantization #autoencoding


Home sweet home. Back to my cozy stuff for this winter vacation. #Quantization #QuantumMechanics #Polarization #Oscillator


📣Have you heard? Criteo is open-sourcing its automatic KNN indexing library. Get ready to build state-of-the-art indices with no effort! To know more, check our latest @Medium article 👉 tinyurl.com/3vceh9rd #Quantization #Faiss #knnindex #Python


Want to boost your #AI model’s performance? The top techniques, like pruning, #quantization, and hyperparameter tuning, can make a big difference: helping you run models faster and tackle issues like model drift. Know more: bit.ly/3UV4YzW #AImodel #DeepLearning #ARTiBA


Alhamdulillah, our paper has been published: Enabling Efficient Training of Convolutional Neural Networks for Histopathology Images #DeepLearning #Quantization #ComputationalPathology link.springer.com/chapter/10.100… And here is a summary of the paper: youtu.be/vao1KQaktWo


#Landau #quantization of nearly #degenerate bands and full symmetry classification of Landau level crossings #physics #EdSugg #science #condmat @APSPhysics go.aps.org/2YzuQWu

