#MLPerf search results

💡 In its MLPerf debut, the NVIDIA GB300 NVL72 rack-scale system set AI inference performance records, accelerated by the #NVIDIABlackwell Ultra architecture with NVFP4. 🔗 Dive into our #MLPerf Inference v5.1 results and learn more about the full-stack technologies that…

🚀 MLCommons MLPerf Training v5.1 introduces Flux.1, an 11.9B-parameter transformer model for text-to-image generation. It sets a new standard, replacing SDv2 and reflecting the latest in generative AI. Read all about it! mlcommons.org/2025/10/traini… #MLPerf #AI #GenerativeAI #Flux1

📣 In the latest #MLPerf Inference v5.1 round, the #NVIDIABlackwell Ultra platform delivered outstanding performance with the first submission of the NVIDIA GB300 NVL72 rack-scale system, achieving the highest throughput on the new DeepSeek-R1 reasoning #inference benchmark. 🔗…

💡 Check out the performance results from our latest #MLPerf Inference v5.0 round. #NVIDIABlackwell achieved outstanding results across the board, including the first submission of the NVIDIA GB200 NVL72 system, which delivered up to 30x more throughput on the Llama 3.1 405B…

Intel Xeon 6 with P-cores (the only server CPU in MLPerf 🎉) showcased exceptional performance across key #MLPerf Inference v5.0 benchmarks – ResNet50, RetinaNet, 3D-UNet and the new GNN-RGAT – achieving a 1.9x performance improvement over 5th Gen Xeon. More: intel.ly/3FRmhhf

In the latest #MLPerf benchmarks, NVIDIA led with GH200, H100, and L4 GPUs, plus Jetson Orin modules, excelling in #AI from cloud to edge. The Jetson Orin achieved an 84% boost in object detection, vital for #edgeAI and #robotics. Learn more > nvda.ws/44QrOeF

TinyML benchmarks finally address real-world deployment with @MLCommons' new streaming benchmark in @MLPerf Tiny v1.3. It tests 20 minutes of continuous wake-word detection while measuring power and duty cycle. Technical deep dive: mlcommons.org/2025/09/mlperf… #MLPerf #TinyML #EdgeAI

👀 In #MLPerf Training v4.0, we set new #generativeAI training performance records and continued to deliver the highest performance on every workload✨🏆 Technical Deep Dive ➡️nvda.ws/3z7h2X6 Performance delivered using the full stack of NVIDIA software and hardware.

First #MLPerf scores for AMD MI300X and Nvidia Blackwell #GPUs, plus startup Untether, show comparable results to market leader Nvidia. eetimes.com/amd-and-unteth…

A perspective on public #MLPerf Storage benchmark results presented today by #Volumez #MultiCloud #BlockStorage #SaaS #SDS #FastIO #Automation #ITPT

I grabbed some data from the #MLPerf Inference 4.1 results @MLCommons released today and distilled a few interesting diagrams from them: Instinct MI300X vs. H200; B200 vs. H200 vs. GH200 vs. Instinct MI300X; EMR vs. GNR; TPUv5e vs. TPUv6e vs. GH200. hardwareluxx.de/index.php/news…

🗞️NVIDIA Hopper takes lead in Generative AI on MLPerf! 🚀 See 🔗nvda.ws/3J1knIW ⬅️ 🥇In the latest (4th) round of #MLPerf performance benchmarking - the 'gold standard' for #AI workload #testing - the formidable Llama 2 70B and Stable Diffusion XL are center stage…

The NVIDIA Blackwell platform set records in the latest #MLPerf Training v5.0 round and debuted the first training submission using the NVIDIA GB200 NVL72 system, which achieved up to 2.6x more training performance per GPU compared to NVIDIA Hopper. nvda.ws/3T5Sq7X


In the latest round of #MLPerf Training, the #NVIDIABlackwell platform delivered impressive results across all tests, with up to 2.2X more performance per GPU for #LLM training. nvda.ws/40SgvnN

The NVIDIA accelerated computing platform, powered by NVIDIA Hopper GPUs and NVIDIA Quantum-2 InfiniBand networking, delivered exceptional #AI training performance in the latest #MLPerf benchmarks. nvda.ws/3RqOcaq

#MLPerf v4.0 inference results are in, showcasing the rise of #generativeAI. NVIDIA Jetson Orin is at the forefront of the edge category as the only embedded edge platform capable of running any kind of model, including GPT-J and Stable Diffusion XL. nvda.ws/3U5BTCj

In the latest #MLPerf benchmarks, NVIDIA H200 Tensor Core GPUs running TensorRT-LLM software delivered the fastest Llama 2 70B inference performance in MLPerf's biggest test of #generativeAI to date. nvda.ws/3TWzWrW

MLPerf Training v5.1 now features Llama 3.1 8B, a new pretraining benchmark! This brings modern LLM evaluation to single-node systems, lowering the barrier to entry while maintaining relevance to current AI development. mlcommons.org/2025/10/traini… #MLPerf #LLaMA3_1 #AI


rolv.ai sets the bar: ROLV Unit quantifies 99%+ energy savings in sparse matrices. Promote sustainability—use the HF calculator: huggingface.co/spaces/rolvai/… #GreenComputing #MLPerf @satyanadella @Microsoft


MLPerf Inference v5.1: Key Insights for AI Researchers and Decision-Makers #MLPerf #AIBenchmarking #MachineLearning #AIResearch #PowerEfficiency itinai.com/mlperf-inferen… Understanding MLPerf Inference v5.1 MLPerf Inference v5.1 is a crucial benchmark for evaluating the perform…

JUST IN: MLPerf Inference v5.1 results (2025) show new workloads & expanded interactive serving. 27 submitters incl. AMD Instinct MI355X, Intel Arc Pro B60, NVIDIA GB300, RTX 4000, RTX Pro 6000. Latency metrics emphasized in tests. #MLPerf #Inference


AI is putting new pressures on compute infrastructure. In the latest MLPerf benchmarks, HPE ProLiant servers earned 8 #1 results across AI-driven recommendations, LLMs, & speech recognition. hpe.to/6018Ae6yi #AI #MLPerf #HPEProLiant #LLM #EnterpriseIT #ScalableAI


🚀 New #MLPerf Storage v2.0 results are in! #JuiceFS delivers top-tier performance for #AITraining: ✅ Supports up to 500 H100 GPUs ✅ 72% #BandwidthUtilization on Ethernet (far exceeding other vendors) See the analysis: juicefs.com/en/blog/engine… #DistributedFileSystem #AIStorage


MLPerf v5.1 cleans up the conversation. Faster is welcome, consistent is better. Floor rises for the apps people touch daily. #MLPerf #AI #MLOps #Inference #Performance #Reliability #Apps #Tech


The next step is not whether your GPU wins a chart, but how you assemble compute, memory, and networking so that time-to-model translates into business value. How much “architecture” lies behind each benchmark data point? 🌍✨ Follow @Luziatech for more. Source: mlcommons.org/2025/06/mlperf… #Luziatech #MLPerf

MLPerf v5.1 is a reminder that benchmarks shape buying conversations. Public scores are not the whole story, yet they force clearer claims on speed, cost, and reliability. #MLPerf #AI #MLOps #Inference #Benchmarks #Procurement #Reliability #Tech #Apple #TechUpdate


📢 Join us for JuiceFS Office Hours #2! Topic: JuiceFS @ MLPerf Storage v2.0 🚀 🗓 Sept 25, 17:00–17:45 (UTC-7) 🎤 Feihu Mo, Storage System Engineer ✅ Performance in MLPerf Storage v2.0 ✅ Live Q&A 📝 Register: luma.com/6giiy6z7 #AI #MLPerf #Storage #JuiceFS


AMD's MI355X GPU delivers 2.7x more tokens/sec vs MI325X in MLPerf v5.1, thanks to FP4 precision. The Llama 2 70B results highlight AMD's push for scalable, cost-efficient AI, positioning Instinct GPUs as enterprise-ready alternatives to Nvidia. #AMD #MI355X #MLPerf #FP4 #AI #tech

The latest #MLPerf inference results are in and they show #DeepSparse providing ~50x improvements over baseline BERT-Large reference implementation on both AWS ARM and GCP x86 instances. See how and replicate our results today: neuralmagic.com/blog/latest-ml…

#MLPerf Inference v3.0 results are out! We delivered a 6X improvement over our previous submission 6 months ago, elevating our overall CPU performance to an astounding 1,000X while reducing power consumption by 92%. This is the power of software. Details 👇

The NVIDIA accelerated computing platform, powered by NVIDIA Hopper GPUs and NVIDIA Quantum-2 InfiniBand networking, delivered exceptional AI training performance in the latest #MLPerf benchmarks. nvda.ws/3z0rSOQ

NVIDIA's submission to the new #MLPerf Network division #datacenter benchmark highlights NVIDIA InfiniBand and GPUDirect RDMA capabilities for end-to-end inference. Learn more: nvda.ws/3O5wfgG

In the latest round of #MLPerf Training, the #NVIDIABlackwell platform delivered impressive results across all tests, with up to 2.2X more performance per GPU for #LLM training. nvda.ws/4hE6tgh

Learn how NVIDIA set new #generativeAI training performance records and continued to deliver the highest performance on every workload in the latest #MLPerf benchmarks. nvda.ws/4cdU6Uv

Learn how the NVIDIA GH200 Grace Hopper Superchip, which combines the NVIDIA Grace CPU and NVIDIA Hopper GPU, delivered leading performance on every #datacenter workload on industry-standard #MLPerf Inference v3.1 benchmarks. nvda.ws/3PJOOHY

Intel Showcases “AI Everywhere” Strategy in MLPerf Inferencing v3.1 #MLPerf @MLCommons @intel #HPC ow.ly/7Pak50PN4Rj

The latest #MLPerf benchmarks are out. Unsurprisingly, Nvidia was the leader across many categories. ow.ly/qifV50PRIlx #HPC #HPCperformance #GPUs

Gaudi2, 4th Gen Xeon Show Strength in MLPerf Training 3.1, but still Trail Nvidia #MLPerf hpcwire.com/2023/11/10/gau…

Compound sparsity FTW! 💯 Neural Magic's recent #MLPerf benchmarks show 92% more energy-efficient NLP execution compared to other providers. ♻️ Compound sparsity techniques and smart inferencing in DeepSparse are pushing the boundaries of #AI efficiency! #Sustainability
