# MLPerf search results
💡 In its MLPerf debut, the NVIDIA GB300 NVL72 rack-scale system set AI inference performance records, accelerated by the #NVIDIABlackwell Ultra architecture with NVFP4. 🔗 Dive into our #MLPerf Inference v5.1 results and learn more about the full-stack technologies that…
💡 Check out the performance results from our latest #MLPerf Inference v5.0 round. #NVIDIABlackwell achieved outstanding results across the board, including the first submission of the NVIDIA GB200 NVL72 system, which delivered up to 30x more throughput on the Llama 3.1 405B…
📣 In the latest #MLPerf Inference v5.1 round, the #NVIDIABlackwell Ultra platform delivered outstanding performance with the first submission of the NVIDIA GB300 NVL72 rack-scale system, achieving the highest throughput on the new DeepSeek-R1 reasoning #inference benchmark. 🔗…
Intel Xeon 6 with P-cores (the only server CPU in MLPerf 🎉) showcased exceptional performance across key #MLPerf Inference v5.0 benchmarks – ResNet50, RetinaNet, 3D-UNet and the new GNN-RGAT, achieving 1.9x performance improvement over 5th Gen Xeon. More: intel.ly/3FRmhhf
A perspective on public #MLPerf Storage benchmark results presented today by #Volumez #MultiCloud #BlockStorage #SaaS #SDS #FastIO #Automation #ITPT
🚀MLCommons MLPerf Training v5.1 introduces Flux.1, an 11.9B-parameter transformer model for text-to-image generation. It sets a new standard, replacing SDv2 and reflecting the latest in generative AI. Read all about it! mlcommons.org/2025/10/traini… #MLPerf #AI #GenerativeAI #Flux1
🗞️NVIDIA Hopper takes lead in Generative AI on MLPerf! 🚀 See 🔗nvda.ws/3J1knIW ⬅️ 🥇In the latest (4th) round of #MLPerf performance benchmarking - the 'gold standard' for #AI workload #testing - the formidable Llama 2 70B and Stable Diffusion XL are center stage…
In the latest #MLPerf benchmarks, NVIDIA led with GH200, H100, and L4 GPUs, plus Jetson Orin modules, excelling in #AI from cloud to edge. The Jetson Orin achieved an 84% boost in object detection, vital for #edgeAI and #robotics. Learn more > nvda.ws/44QrOeF
The NVIDIA Blackwell platform set records in the latest #MLPerf Training v5.0 round and debuted the first training submission using the NVIDIA GB200 NVL72 system, which achieved up to 2.6x more training performance per GPU compared to NVIDIA Hopper. nvda.ws/3T5Sq7X
I grabbed some data from the #MLPerf Inference 4.1 @MLCommons released today and distilled a few interesting diagrams from them: Instinct MI300X vs. H200; B200 vs. H200 vs. GH200 vs. Instinct MI300X; EMR vs. GNR; TPUv5e vs. TPUv6e vs. GH200. hardwareluxx.de/index.php/news…
First #MLPerf scores for AMD MI300X and Nvidia Blackwell #GPUs, plus startup Untether, show comparable results to market leader Nvidia. eetimes.com/amd-and-unteth…
👀 In #MLPerf Training v4.0, we set new #generativeAI training performance records and continued to deliver the highest performance on every workload✨🏆 Technical Deep Dive ➡️nvda.ws/3z7h2X6 Performance delivered using the full stack of NVIDIA software and hardware.
TinyML benchmarks finally address real-world deployment with @MLCommons ' new streaming benchmark in @MLPerf Tiny v1.3. Tests 20-minute continuous wake word detection while measuring power and duty cycle. Technical deep dive: mlcommons.org/2025/09/mlperf… #MLPerf #TinyML #EdgeAI
In the latest round of #MLPerf Training, the #NVIDIABlackwell platform delivered impressive results across all tests, with up to 2.2X more performance per GPU for #LLM training. nvda.ws/40SgvnN
The NVIDIA accelerated computing platform, powered by NVIDIA Hopper GPUs and NVIDIA Quantum-2 InfiniBand networking, delivered exceptional #AI training performance in the latest #MLPerf benchmarks. nvda.ws/3RqOcaq
In the latest #MLPerf Inference v5.0 round, the NVIDIA Blackwell platform set records — and marked NVIDIA’s first MLPerf submission using the GB200 NVL72 system, which delivered up to 30x more throughput on the Llama 3.1 405B benchmark. nvda.ws/4iX4tji
#MLPerf v4.0 inference results are in, showcasing the rise of #generativeAI. NVIDIA Jetson Orin leads the edge category as the only embedded edge platform capable of running every model, including GPT-J and Stable Diffusion XL. nvda.ws/3U5BTCj
MLPerf Training v5.1 now features Llama 3.1 8B, a new pretraining benchmark! This brings modern LLM evaluation to single-node systems, lowering the barrier to entry while maintaining relevance to current AI development. mlcommons.org/2025/10/traini… #MLPerf #LLaMA3_1 #AI
rolv.ai sets the bar: ROLV Unit quantifies 99%+ energy savings in sparse matrices. Promote sustainability—use the HF calculator: huggingface.co/spaces/rolvai/… #GreenComputing #MLPerf @satyanadella @Microsoft
MLPerf Inference v5.1: Key Insights for AI Researchers and Decision-Makers #MLPerf #AIBenchmarking #MachineLearning #AIResearch #PowerEfficiency itinai.com/mlperf-inferen… Understanding MLPerf Inference v5.1 MLPerf Inference v5.1 is a crucial benchmark for evaluating the perform…
JUST IN: MLPerf Inference v5.1 results (2025) show new workloads & expanded interactive serving. 27 submitters incl. AMD Instinct MI355X, Intel Arc Pro B60, NVIDIA GB300, RTX 4000, RTX Pro 6000. Latency metrics emphasized in tests. #MLPerf #Inference
AI is putting new pressures on compute infrastructure. In the latest MLPerf benchmarks, HPE ProLiant servers earned 8 #1 results across AI-driven recommendations, LLMs, & speech recognition. hpe.to/6018Ae6yi #AI #MLPerf #HPEProLiant #LLM #EnterpriseIT #ScalableAI
🚀 New #MLPerf Storage v2.0 results are in! #JuiceFS delivers top-tier performance for #AITraining: ✅ Supports up to 500 H100 GPUs ✅ 72% #BandwidthUtilization on Ethernet (far exceeding other vendors) See the analysis: juicefs.com/en/blog/engine… #DistributedFileSystem #AIStorage
MLPerf v5.1 cleans up the conversation. Faster is welcome, consistent is better. The floor rises for the apps people touch daily. #MLPerf #AI #MLOps #Inference #Performance #Reliability #Apps #Tech
The next step isn't whether your GPU wins a chart, but how you assemble compute, memory, and networking so that time-to-model translates into business value. How much "architecture" sits behind each benchmark data point? 🌍✨ Follow @Luziatech for more. Source: mlcommons.org/2025/06/mlperf… #Luziatech #MLPerf
MLPerf v5.1 is a reminder that benchmarks shape buying conversations. Public scores are not the whole story yet they force clearer claims on speed, cost, and reliability. #MLPerf #AI #MLOps #Inference #Benchmarks #Procurement #Reliability #Tech #Apple #TechUpdate
NVIDIA Blackwell Ultra Sets the Bar in New #MLPerf Inference Benchmark @NVIDIAAI #Blackwell #ML liwaiwai.com/2025/09/09/nvi… via @liwaiwaicom
📢 Join us for JuiceFS Office Hours #2! Topic: JuiceFS @ MLPerf Storage v2.0 🚀 🗓 Sept 25, 17:00–17:45 (UTC-7) 🎤 Feihu Mo, Storage System Engineer ✅ Performance in MLPerf Storage v2.0 ✅ Live Q&A 📝 Register: luma.com/6giiy6z7 #AI #MLPerf #Storage #JuiceFS
MLPerf Inference v5.1 benchmark results show major AI hardware performance gains across data centers, edge computing, and mobile devices. #AIHardware #MLPerf #EdgeComputing turtlen3ws.blogspot.com/2025/09/mlperf…
MLPerf Inference v5.1 Benchmark Reveals Accelerating Pace of AI Hardware Competition
AMD's MI355X GPU delivers 2.7x more tokens/sec vs. MI325X in MLPerf v5.1, thanks to FP4 precision. The Llama 2 70B results highlight AMD's push for scalable, cost-efficient AI, positioning Instinct GPUs as enterprise-ready alternatives to Nvidia. #AMD #MI355X #MLPerf #FP4 #AI #tech
Nvidia Blackwell Ultra vs The Field: Sweeping MLPerf Inference Results in 2025 – GLCND.IO #Nvidia #BlackwellUltra #MLPerf #AIShowdown #GPUs2025 #GLCNDIO glcnd.io/nvidias-blackw…
Nvidia’s Blackwell Ultra Sweeps MLPerf Inference Results
#MLPerf Inference v3.0 results are out! We delivered a 6X improvement over our previous submission 6 months ago, elevating our overall CPU performance to an astounding 1,000X while reducing power consumption by 92%. This is the power of software. Details 👇
The NVIDIA accelerated computing platform, powered by NVIDIA Hopper GPUs and NVIDIA Quantum-2 InfiniBand networking, delivered exceptional AI training performance in the latest #MLPerf benchmarks. nvda.ws/3z0rSOQ
The latest #MLPerf inference results are in and they show #DeepSparse providing ~50x improvements over baseline BERT-Large reference implementation on both AWS ARM and GCP x86 instances. See how and replicate our results today: neuralmagic.com/blog/latest-ml…
Intel Showcases “AI Everywhere” Strategy in MLPerf Inferencing v3.1 #MLPerf @MLCommons @intel #HPC ow.ly/7Pak50PN4Rj
NVIDIA's submission to the new #MLPerf Network division #datacenter benchmark highlights NVIDIA InfiniBand and GPUDirect RDMA capabilities for end-to-end inference. Learn more: nvda.ws/3O5wfgG
📣 NVIDIA Blackwell achieved top performance across every MLPerf training benchmark! The #AI platform built on the NVIDIA Blackwell architecture was the only one to submit results for all benchmarks in #MLPerf Training v5.0, delivering exceptional performance across everything from LLMs to recommender systems and graph neural networks…
Gaudi2, 4th Gen Xeon Show Strength in MLPerf Training 3.1, but still Trail Nvidia #MLPerf hpcwire.com/2023/11/10/gau…
The latest #MLPerf benchmarks are out. Not surprisingly, Nvidia was the leader across many categories. ow.ly/qifV50PRIlx #HPC #HPCperformance #GPUs
Compound sparsity FTW! 💯 Neural Magic's recent #MLPerf benchmarks show a 92% more energy-efficient NLP execution compared to other providers. ♻️ Compound sparsity techniques and smart inferencing in DeepSparse are pushing the boundaries of #AI efficiency! #Sustainability
Forbes article highlighting how NVIDIA dominated AI benchmarks in the latest #MLPerf tests with the NVIDIA H200 Tensor Core GPU and TensorRT-LLM software. www-forbes-com.cdn.ampproject.org/c/s/www.forbes…
Learn how the NVIDIA GH200 Grace Hopper Superchip, which combines the NVIDIA Grace CPU and NVIDIA Hopper GPU, delivered leading performance on every #datacenter workload on industry-standard #MLPerf Inference v3.1 benchmarks. nvda.ws/3PJOOHY
In the latest #MLPerf benchmarks, NVIDIA H200 Tensor Core GPUs running TensorRT-LLM software delivered the fastest Llama 2 70B inference performance in MLPerf's biggest test of #generativeAI to date. nvda.ws/3TWzWrW