#MLPerf search results
💡 In its MLPerf debut, the NVIDIA GB300 NVL72 rack-scale system set AI inference performance records, accelerated by the #NVIDIABlackwell Ultra architecture with NVFP4. 🔗 Dive into our #MLPerf Inference v5.1 results and learn more about the full-stack technologies that…
📣 In the latest #MLPerf Inference v5.1 round, the #NVIDIABlackwell Ultra platform delivered outstanding performance with the first submission of the NVIDIA GB300 NVL72 rack-scale system, achieving the highest throughput on the new DeepSeek-R1 reasoning #inference benchmark. 🔗…
💡 Check out the performance results from our latest #MLPerf Inference v5.0 round. #NVIDIABlackwell achieved outstanding results across the board, including the first submission of the NVIDIA GB200 NVL72 system, which delivered up to 30x more throughput on the Llama 3.1 405B…
Intel Xeon 6 with P-cores (the only server CPU in MLPerf 🎉) showcased exceptional performance across key #MLPerf Inference v5.0 benchmarks – ResNet50, RetinaNet, 3D-UNet and the new GNN-RGAT – achieving a 1.9x performance improvement over 5th Gen Xeon. More: intel.ly/3FRmhhf
A perspective on public #MLPerf Storage benchmark results presented today by #Volumez #MultiCloud #BlockStorage #SaaS #SDS #FastIO #Automation #ITPT
🗞️NVIDIA Hopper takes lead in Generative AI on MLPerf! 🚀 See 🔗nvda.ws/3J1knIW ⬅️ 🥇In the latest (4th) round of #MLPerf performance benchmarking - the 'gold standard' for #AI workload #testing - the formidable Llama 2 70B and Stable Diffusion XL are center stage…
In the latest #MLPerf benchmarks, NVIDIA led with GH200, H100, and L4 GPUs, plus Jetson Orin modules, excelling in #AI from cloud to edge. The Jetson Orin achieved an 84% boost in object detection, vital for #edgeAI and #robotics. Learn more > nvda.ws/44QrOeF
🚀MLCommons MLPerf Training v5.1 introduces Flux.1, an 11.9B-parameter transformer model for text-to-image generation. It replaces SDv2, setting a new standard that reflects the latest in generative AI. Read all about it! mlcommons.org/2025/10/traini… #MLPerf #AI #GenerativeAI #Flux1
I grabbed some data from the #MLPerf Inference 4.1 @MLCommons released today and distilled a few interesting diagrams from them: Instinct MI300X vs. H200 B200 vs. H200 vs. GH200 vs. Instinct MI300X EMR vs. GNR TPUv5e vs TPUv6e vs. GH200 hardwareluxx.de/index.php/news…
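For readers who want to reproduce that kind of chart, here is a minimal sketch of the normalization step, assuming the results have been exported to a CSV; the file name and column names ("Benchmark", "Accelerator", "# of Accelerators", "Result") are illustrative placeholders, not the exact schema MLCommons publishes.

```python
# Minimal sketch: normalize MLPerf Inference submissions to per-accelerator
# throughput so systems with different accelerator counts can be compared.
# File name and column names are assumptions for illustration only.
import pandas as pd

df = pd.read_csv("mlperf_inference_v4_1_datacenter.csv")  # hypothetical export

# Throughput per accelerator, so an 8-GPU node and a 72-GPU rack share one axis.
df["per_accel"] = df["Result"] / df["# of Accelerators"]

# Best per-accelerator result for each benchmark/accelerator pairing.
summary = (
    df.groupby(["Benchmark", "Accelerator"])["per_accel"]
      .max()
      .unstack("Accelerator")
)
print(summary.round(1))
```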
Read this DatacenterDynamics article to see the latest #MLPerf benchmark performance record that NVIDIA broke with the NVIDIA H200 Tensor Core GPU and TensorRT-LLM software. bit.ly/4cCpNrD
First #MLPerf scores for AMD MI300X and Nvidia Blackwell #GPUs, plus startup Untether, show results comparable to those of market leader Nvidia. eetimes.com/amd-and-unteth…
👀 In #MLPerf Training v4.0, we set new #generativeAI training performance records and continued to deliver the highest performance on every workload✨🏆 Technical Deep Dive ➡️nvda.ws/3z7h2X6 Performance delivered using the full stack of NVIDIA software and hardware.
TinyML benchmarks finally address real-world deployment with @MLCommons ' new streaming benchmark in @MLPerf Tiny v1.3. Tests 20-minute continuous wake word detection while measuring power and duty cycle. Technical deep dive: mlcommons.org/2025/09/mlperf… #MLPerf #TinyML #EdgeAI
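For context on what "power and duty cycle" boil down to, here is a back-of-the-envelope sketch; the window length comes from the benchmark description above, while the active time and average power are made-up numbers for illustration, not measured Tiny v1.3 results.

```python
# Illustrative arithmetic only: duty cycle = active time / window,
# energy = average power x window. Numbers below are assumptions.
WINDOW_S = 20 * 60        # 20-minute continuous wake-word run (from the benchmark)
active_s = 36.0           # assumed time the MCU spends actually inferring
avg_power_mw = 1.8        # assumed average power drawn over the whole window

duty_cycle = active_s / WINDOW_S       # fraction of the window spent active
energy_mj = avg_power_mw * WINDOW_S    # mW * s = mJ

print(f"duty cycle: {duty_cycle:.2%}, energy over the run: {energy_mj / 1000:.2f} J")
```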
The NVIDIA Blackwell platform set records in the latest #MLPerf Training v5.0 round and debuted the first training submission using the NVIDIA GB200 NVL72 system, which achieved up to 2.6x more training performance per GPU compared to NVIDIA Hopper. nvda.ws/3T5Sq7X
In the latest round of #MLPerf Training, the #NVIDIABlackwell platform delivered impressive results across all tests, with up to 2.2X more performance per GPU for #LLM training. nvda.ws/40SgvnN
#MLPerf v4.0 inference results are in, showcasing the rise of #generativeAI. NVIDIA Jetson Orin leads the edge category as the only embedded edge platform capable of running every kind of model, including GPT-J and Stable Diffusion XL. nvda.ws/3U5BTCj
In the latest #MLPerf Inference v5.0 round, the NVIDIA Blackwell platform set records — and marked NVIDIA’s first MLPerf submission using the GB200 NVL72 system, which delivered up to 30x more throughput on the Llama 3.1 405B benchmark. nvda.ws/4iX4tji
The NVIDIA accelerated computing platform, powered by NVIDIA Hopper GPUs and NVIDIA Quantum-2 InfiniBand networking, delivered exceptional #AI training performance in the latest #MLPerf benchmarks. nvda.ws/3RqOcaq
MLPerf Training v5.1 now features Llama 3.1 8B, a new pretraining benchmark! This brings modern LLM evaluation to single-node systems, lowering the barrier to entry while maintaining relevance to current AI development. mlcommons.org/2025/10/traini… #MLPerf #LLaMA3_1 #AI
rolv.ai sets the bar: ROLV Unit quantifies 99%+ energy savings in sparse matrices. Promote sustainability—use the HF calculator: huggingface.co/spaces/rolvai/… #GreenComputing #MLPerf @satyanadella @Microsoft
MLPerf Inference v5.1: Key Insights for AI Researchers and Decision-Makers #MLPerf #AIBenchmarking #MachineLearning #AIResearch #PowerEfficiency itinai.com/mlperf-inferen… Understanding MLPerf Inference v5.1 MLPerf Inference v5.1 is a crucial benchmark for evaluating the perform…
JUST IN: MLPerf Inference v5.1 results (2025) show new workloads & expanded interactive serving. 27 submitters incl. AMD Instinct MI355X, Intel Arc Pro B60, NVIDIA GB300, RTX 4000, RTX Pro 6000. Latency metrics emphasized in tests. #MLPerf #Inference
AI is putting new pressures on compute infrastructure. In the latest MLPerf benchmarks, HPE ProLiant servers earned 8 #1 results across AI-driven recommendations, LLMs, & speech recognition. hpe.to/6018Ae6yi #AI #MLPerf #HPEProLiant #LLM #EnterpriseIT #ScalableAI
🚀 New #MLPerf Storage v2.0 results are in! #JuiceFS delivers top-tier performance for #AITraining: ✅ Supports up to 500 H100 GPUs ✅ 72% #BandwidthUtilization on Ethernet (far exceeding other vendors) See the analysis: juicefs.com/en/blog/engine… #DistributedFileSystem #AIStorage
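As a rough guide to how a bandwidth-utilization figure like that is computed, here is a sketch; the link speed, node count, and measured throughput are assumptions chosen to land near 72%, not JuiceFS's published configuration.

```python
# Illustrative only: utilization = measured aggregate throughput / line rate.
nic_gbps = 400                 # assumed Ethernet link per client node, Gbit/s
nodes = 4                      # assumed number of client nodes
measured_read_gibps = 134.0    # assumed aggregate read throughput, GiB/s

line_rate_gibps = nic_gbps * nodes / 8 / (2**30 / 1e9)  # Gbit/s -> GiB/s
utilization = measured_read_gibps / line_rate_gibps

print(f"bandwidth utilization: {utilization:.0%}")
```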
MLPerf v5.1 cleans up the conversation. Faster is welcome; consistent is better. The floor rises for the apps people touch daily. #MLPerf #AI #MLOps #Inference #Performance #Reliability #Apps #Tech
The next step isn't whether your GPU wins a chart, but how you assemble compute, memory, and networking so that time-to-model translates into business value. How much "architecture" sits behind each benchmark data point? 🌍✨ Follow @Luziatech for more. Source: mlcommons.org/2025/06/mlperf… #Luziatech #MLPerf
MLPerf v5.1 is a reminder that benchmarks shape buying conversations. Public scores are not the whole story, yet they force clearer claims on speed, cost, and reliability. #MLPerf #AI #MLOps #Inference #Benchmarks #Procurement #Reliability #Tech #Apple #TechUpdate
NVIDIA Blackwell Ultra Sets the Bar in New #MLPerf Inference Benchmark @NVIDIAAI #Blackwell #ML liwaiwai.com/2025/09/09/nvi… via @liwaiwaicom
📢 Join us for JuiceFS Office Hours #2! Topic: JuiceFS @ MLPerf Storage v2.0 🚀 🗓 Sept 25, 17:00–17:45 (UTC-7) 🎤 Feihu Mo, Storage System Engineer ✅ Performance in MLPerf Storage v2.0 ✅ Live Q&A 📝 Register: luma.com/6giiy6z7 #AI #MLPerf #Storage #JuiceFS
MLPerf Inference v5.1 benchmark results show major AI hardware performance gains across data centers, edge computing, and mobile devices. #AIHardware #MLPerf #EdgeComputing turtlen3ws.blogspot.com/2025/09/mlperf…
AMD's MI355X GPU delivers 2.7x more tokens/sec vs the MI325X in MLPerf v5.1, thanks to FP4 precision. The Llama 2 70B results highlight AMD's push for scalable, cost-efficient AI, positioning Instinct GPUs as enterprise-ready alternatives to Nvidia. #AMD #MI355X #MLPerf #FP4 #AI #tech
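A quick sanity check on the FP4 angle: in the memory-bound decode phase, every generated token streams the full weight set, so halving the bytes per weight roughly halves the memory traffic per token. The sketch below shows that arithmetic for Llama 2 70B; it accounts for only part of the claimed 2.7x, with the rest coming from other hardware and software changes.

```python
# Illustrative arithmetic: weight footprint and memory traffic per token
# for Llama 2 70B at different weight precisions. Figures are approximate.
params = 70e9                       # Llama 2 70B parameter count
bytes_fp8, bytes_fp4 = 1.0, 0.5     # bytes per weight in FP8 vs FP4

weights_fp8_gb = params * bytes_fp8 / 1e9   # ~70 GB streamed per token
weights_fp4_gb = params * bytes_fp4 / 1e9   # ~35 GB streamed per token

# If decode were purely bound by weight reads, throughput would scale with
# HBM bandwidth divided by bytes moved per token.
print(f"FP8: {weights_fp8_gb:.0f} GB/token, FP4: {weights_fp4_gb:.0f} GB/token, "
      f"up to {weights_fp8_gb / weights_fp4_gb:.1f}x from memory traffic alone")
```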
Nvidia Blackwell Ultra vs The Field: Sweeping MLPerf Inference Results in 2025 – GLCND.IO #Nvidia #BlackwellUltra #MLPerf #AIShowdown #GPUs2025 #GLCNDIO glcnd.io/nvidias-blackw…
#MLPerf Inference v3.0 results are out! We delivered a 6X improvement over our previous submission 6 months ago, elevating our overall CPU performance to an astounding 1,000X while reducing power consumption by 92%. This is the power of software. Details 👇
NVIDIA's submission to the new #MLPerf Network division #datacenter benchmark highlights NVIDIA InfiniBand and GPUDirect RDMA capabilities for end-to-end inference. Learn more: nvda.ws/3O5wfgG
The latest #MLPerf inference results are in, and they show #DeepSparse providing a ~50x improvement over the baseline BERT-Large reference implementation on both AWS ARM and GCP x86 instances. See how and replicate our results today: neuralmagic.com/blog/latest-ml…
Learn how NVIDIA set new #generativeAI training performance records and continued to deliver the highest performance on every workload in the latest #MLPerf benchmarks. nvda.ws/4cdU6Uv
Forbes article highlighting how NVIDIA dominated AI benchmarks in the latest #MLPerf tests with the NVIDIA H200 Tensor Core GPU and TensorRT-LLM software. www-forbes-com.cdn.ampproject.org/c/s/www.forbes…
Gaudi2, 4th Gen Xeon Show Strength in MLPerf Training 3.1, but still Trail Nvidia #MLPerf hpcwire.com/2023/11/10/gau…
Intel Showcases “AI Everywhere” Strategy in MLPerf Inferencing v3.1 #MLPerf @MLCommons @intel #HPC ow.ly/7Pak50PN4Rj
📣 NVIDIA Blackwell achieved the highest performance on every MLPerf training benchmark! The #AI platform built on the NVIDIA Blackwell architecture was the only one to submit results on every benchmark in #MLPerf Training v5.0, delivering outstanding performance across every area, from LLMs to recommender systems to graph neural networks, and…
Learn how the NVIDIA GH200 Grace Hopper Superchip, which combines the NVIDIA Grace CPU and NVIDIA Hopper GPU, delivered leading performance on every #datacenter workload on industry-standard #MLPerf Inference v3.1 benchmarks. nvda.ws/3PJOOHY
The latest #MLperf benchmarks are out. Unsurprisingly, Nvidia was the leader across many categories. ow.ly/qifV50PRIlx #HPC #HPCperformance #GPUs
Compound sparsity FTW! 💯 Neural Magic's recent #MLPerf benchmarks show a 92% more energy-efficient NLP execution compared to other providers. ♻️ Compound sparsity techniques and smart inferencing in DeepSparse are pushing the boundaries of #AI efficiency! #Sustainability