Enjoyed spending time in London yesterday with top industry analysts and other @RedHat leaders. It was great to discuss how our open hybrid cloud strategy is helping customers navigate a complex IT landscape. Our open approach for AI is a critical part of that journey.

AI is changing how we do everything. There's lots of discussion about AI in software development, but not enough about how open source software development will evolve with AI. Let's have that discussion. redhat.com/en/blog/ai-ass…
Using light to transmit data rather than relying on electronic components could slash latency itpro.com/infrastructure…
vLLM day zero support for Qwen3-Next...Hybrid attention, sparse MoE, and multi-token prediction...keep pushing fwd!
Qwen3-Next dropped yesterday and you can run it with Red Hat AI today. ✅ Day-zero support in vLLM ✅ Day-one deployment with Red Hat AI Step-by-step guide: developers.redhat.com/articles/2025/… The future of AI is open.
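For readers who want to try this, here is a minimal sketch of offline inference with vLLM's Python API. The Hugging Face model id is an assumption; check the step-by-step guide linked above for the supported identifier and the vLLM version that carries the day-zero Qwen3-Next support.

```python
# Minimal offline-inference sketch with vLLM's Python API.
# The model id below is an assumption; use the identifier and vLLM version
# given in the step-by-step guide linked above.
from vllm import LLM, SamplingParams

prompts = ["Summarize what hybrid attention changes for long-context inference."]
params = SamplingParams(temperature=0.7, max_tokens=128)

llm = LLM(model="Qwen/Qwen3-Next-80B-A3B-Instruct")  # assumed model id

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

The same checkpoint can also be exposed as an OpenAI-compatible endpoint with `vllm serve <model-id>` if you want a server rather than batch generation.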
Check out my new Technically Speaking episode with Bernd Greifeneder from @Dynatrace. We're talking about taming AI agents with observability and why being able to trust these systems is so critical. Give it a listen. youtube.com/watch?v=h-L0R_…
It's awesome to see how we're making the OS feel faster and more secure for our customers with things like Lightspeed, Insights, image mode, and post-quantum security. This is what it’s all about—giving people the tools to move at the speed of innovation. youtube.com/watch?v=dhWW2U…
.@kernelcdub and Carlos Costa explain how a "shared vision" allows an entire industry, from hardware providers to cloud companies, to solve common pain points together and create a "growing pie" for everyone. Hear more about the power of open source collaboration in the full…
Ever wonder what the 'v' in vLLM stands for? 💡 @kernelcdub and @nickhill33 explain how "virtual" memory and PagedAttention make AI inference more efficient by solving GPU memory fragmentation. Tune into the full Technically Speaking episode to learn more about optimizing LLMs:…
Nick Hill digs into the details of vLLM with me on Technically Speaking. Helpful in understanding why vLLM is so important in high performance, open source AI inferencing
How do you solve AI's biggest performance hurdles? On Technically Speaking, @kernelcdub & Nick Hill dive into vLLM, exploring how techniques like PagedAttention solve memory bottlenecks & accelerate inference: red.ht/4lDjJ5P.
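As rough intuition for the PagedAttention idea mentioned above, here is a toy sketch: the KV cache is carved into fixed-size blocks, and each sequence keeps a block table mapping its tokens to whatever blocks are free, so memory is claimed on demand instead of reserved as one contiguous slab. This is a conceptual illustration only, not vLLM's actual implementation; the class and constant names are made up.

```python
# Toy illustration of the idea behind PagedAttention: the KV cache is split
# into fixed-size blocks and each sequence keeps a block table, so memory is
# allocated on demand instead of reserved contiguously up front.
# Conceptual sketch only, not vLLM's implementation.

BLOCK_SIZE = 16  # tokens per KV-cache block (illustrative)

class BlockAllocator:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))

    def allocate(self) -> int:
        if not self.free_blocks:
            raise MemoryError("KV cache exhausted")
        return self.free_blocks.pop()

    def release(self, blocks: list[int]) -> None:
        self.free_blocks.extend(blocks)

class Sequence:
    """Tracks which physical blocks hold this sequence's KV cache."""
    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table: list[int] = []
        self.num_tokens = 0

    def append_token(self) -> None:
        # A new block is allocated only when the current one is full,
        # so unused capacity is never reserved ahead of time.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def free(self) -> None:
        self.allocator.release(self.block_table)
        self.block_table.clear()

allocator = BlockAllocator(num_blocks=1024)
seq = Sequence(allocator)
for _ in range(40):        # 40 tokens -> 3 blocks of 16
    seq.append_token()
print(seq.block_table)     # three block ids, not necessarily contiguous
seq.free()
```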

Great discussion with @AMD about how we see the future of AI and the importance of openness and choice.
It's tools, it's models, it's agents...the future of #AI is open source and collaboration. We have to build this together.
"The future of AI is going to be open sourced." In this clip, @kernelcdub of @RedHat explains why open source, open collaboration, and global ecosystems will define the next wave of AI, and how Red Hat empowers developers everywhere
Rapid fire buzzword decoding with @addvin
What's the first thing that comes to mind when tech leaders hear today's top AI & enterprise buzzwords? 🤔 Watch Red Hat CTO @kernelcdub & AI CTO @addvin tackle them in a quick-fire word association game! Don't miss the latest episode of Technically Speaking 🎙️for their deep-dive…
Hope you enjoy this great discussion I had with @addvin on Technically Speaking. The future of open source AI inference!
AI Inference at scale? 🤔 Open source to the rescue! 💡 Dive into #vLLM, distributed systems, and the path to practical #AI in the enterprise with @kernelcdub and @addvin on the latest "Technically Speaking." red.ht/3FEEZJ6

Nice nod to Ramalama here...containers to make AI easy (aka boring): networkworld.com/article/381649…
Here's how we're achieving R1-like reasoning with small models, leveraging probabilistic inference-time scaling without using DeepSeek or derivatives. Results are impressive! MATH with Llama 8B approaches GPT-4o, and with Qwen2.5 Math 7B Instruct it hits o1 level. red-hat-ai-innovation-team.github.io/posts/r1-like-…
Inference-time scaling brings smaller LLMs to o1 level capabilities. This is why we're so excited about the potential of smaller, open source models. Awesome work @ishapuri101 and @RedHat AI Innovation team!
[1/x] can we scale small, open LMs to o1 level? Using classical probabilistic inference methods, YES! Joint @MIT_CSAIL / @RedHat AI Innovation Team work introduces a particle filtering approach to scaling inference w/o any training! check out …abilistic-inference-scaling.github.io
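For intuition only, a toy sketch of the general particle-filtering pattern the thread describes: keep several candidate trajectories, weight them with a reward signal, and resample so compute concentrates on the promising ones. The generate_step() and score() functions are hypothetical placeholders; see the linked write-up for the team's actual method and results.

```python
# Toy sketch of particle-filtering-style inference-time scaling: keep N
# candidate partial answers ("particles"), score each step with a reward
# signal, and resample in proportion to the scores. The generate_step()
# and score() functions are hypothetical placeholders, not the paper's method.
import random

def generate_step(prefix: str) -> str:
    """Placeholder: extend a partial solution by one reasoning step."""
    return prefix + f" step{random.randint(0, 9)}"

def score(candidate: str) -> float:
    """Placeholder: a process or outcome reward model would go here."""
    return random.random()

def particle_filter(prompt: str, num_particles: int = 8, num_steps: int = 4) -> str:
    particles = [prompt] * num_particles
    for _ in range(num_steps):
        # Propose: extend every particle by one step.
        particles = [generate_step(p) for p in particles]
        # Weight: score each extended particle.
        weights = [score(p) for p in particles]
        # Resample: draw particles in proportion to their weights.
        particles = random.choices(particles, weights=weights, k=num_particles)
    # Return the best surviving trajectory under the final scores.
    return max(particles, key=score)

print(particle_filter("Solve: 2x + 3 = 11."))
```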
Nice example of getting started with Triton on AMD GPUs. torch.compile and start playing around! next.redhat.com/2024/12/17/get…
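A minimal sketch of the "torch.compile and start playing around" suggestion: on a ROCm build of PyTorch, the default Inductor backend lowers simple elementwise chains like this into generated Triton kernels, which is an easy on-ramp to Triton on AMD GPUs. Generic example, not code from the linked post.

```python
# Minimal torch.compile example. On a ROCm build of PyTorch, the default
# Inductor backend lowers ops like this into generated Triton kernels,
# which is one easy way to start experimenting with Triton on AMD GPUs.
import torch

def fused_op(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Elementwise chain that Inductor can fuse into a single kernel.
    return torch.relu(x * y + 1.0).sum()

compiled_op = torch.compile(fused_op)

device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm GPUs also report as "cuda"
x = torch.randn(1024, 1024, device=device)
y = torch.randn(1024, 1024, device=device)
print(compiled_op(x, y))
```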
We agree, @kernelcdub: the future of AI is open source 👐 Onsite at #AWSreInvent? Stop by booth #844 to see how you can transform your business with open technology, open culture, and open processes: red.ht/3B0QA2G