MatX

@MatXComputing

MatX designs hardware tailored for the world’s best AI models: We dedicate every transistor to maximizing performance for large models. Join us: http://matx.com

Pinned

Introducing MatX: we design hardware tailored for LLMs, to deliver an order of magnitude more computing power so AI labs can make their models an order of magnitude smarter. Our hardware would make it possible to train GPT-4 and run ChatGPT, but on the budget of a small startup.…


MatX reposted

Join us in Waterloo to chat about systems at Whoopsie Daisy Drinks: lu.ma/matx-waterloo


MatX reposted

I'll be in Toronto and Waterloo over the next week; I'd love to chat, tell you a bit more about what we're doing at MatX, and say hi. Please feel free to reach out!


MatX reposted

Excited to say I joined @MatXComputing late last year! The team is exceptionally thoughtful and the problems are both difficult and fun: from µarch, compilers, and models, to the systems we are building.


MatX reposted

MatX hardware will maximize intelligence per dollar for the world’s largest models. We are a team of 50+ and growing quickly. If you are passionate about building the best chips for LLMs, consider joining us. matx.com/jobs


MatX reposted

MatX is designing chips and systems to 10x the computing power for the world’s largest AI workloads. Today, we are pleased to announce the closing of a >$100M Series A funding round led by @sparkcapital, with participation from @JaneStreetGroup, @danielgross and @natfriedman,…


MatX reposted

1. Breakdown of DeepSeek V3 efficiency vs Llama 3:
- Better: 11x fewer FLOPs per token, thanks to MoE [37B vs 405B activated params]
- Better: 2x faster numerics [fp8 vs bf16 training]
- Worse: 0.5x FLOPs utilization [16% vs 33% end-to-end MFU*]
- Neutral: similar hardware…
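Since training FLOPs per token scale roughly with activated parameter count (the ~6N rule of thumb), these ratios compose by simple multiplication. A back-of-the-envelope sketch, using only the figures quoted in the tweet above; the proportionality assumption is mine, not the tweet's:

```python
# Back-of-the-envelope check of the ratios above. Parameter counts and MFU
# figures come from the tweet; treating training FLOPs per token as
# proportional to activated params (~6N rule of thumb) is an assumption.
llama_active, deepseek_active = 405e9, 37e9   # activated params per token
llama_mfu, deepseek_mfu = 0.33, 0.16          # end-to-end model FLOPs utilization
numerics_speedup = 2.0                        # fp8 vs bf16 peak throughput

flops_ratio = llama_active / deepseek_active      # ~10.9x fewer FLOPs per token
mfu_ratio = deepseek_mfu / llama_mfu              # ~0.48x utilization
net = flops_ratio * numerics_speedup * mfu_ratio  # net compute multiplier

print(f"FLOPs per token: {flops_ratio:.1f}x fewer")       # ~10.9x
print(f"Utilization:     {mfu_ratio:.2f}x")               # ~0.48x
print(f"Net compute:     ~{net:.0f}x cheaper per token")  # ~11x
```

The utilization penalty roughly cancels the numerics win, so the net advantage comes almost entirely from the MoE FLOPs reduction.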


MatX reposted

NEW ODD LOTS: Two Veteran Chip Designers Have A Plan To Take On Nvidia

@tracyalloway and I talked to @reinerpope and @MikeGunter_, both formerly of Alphabet, about their new company MatX that's aiming to build the ultimate semiconductor just for LLMs bloomberg.com/news/articles/…


MatX reposted

I really enjoyed talking about the process and business of semiconductor design with @tracyalloway and @TheStalwart on the Odd Lots podcast. Joe and Tracy were wonderful hosts: they put me at ease and guided the conversation with the lightest of touches. We talked about what doing…


MatX reposted

MatX will be at MLSys. Come join us at our After Hours in Santa Clara to talk about chips, compilers, partitioning, and optimizing ML models for future hardware. Many of us will be there, including me and @mikegunter_. Tuesday May 14th at 4pm, see matx.com/meetmatx.


MatX reposted

We’re releasing seqax, a research-focused LLM codebase that is simple, explicit, and performs well on up to ~100 GPUs/TPUs. Everything you need to edit, from the math, to parallelism, to memory footprint, is all there in 500 lines of JAX code. 🧵 github.com/MatX-inc/seqax
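Not seqax's actual API (see the repo for that), but a minimal sketch of the explicit-sharding style of JAX code a research codebase like this is built from, where the parallelism decisions live in visible PartitionSpecs rather than inside a framework; all names here are illustrative:

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# One-dimensional mesh over all available devices; "data" is the axis name.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

# Shard activations along the batch dimension; replicate the weights.
# (Batch size must be divisible by the device count.)
x = jax.device_put(jnp.ones((8, 512)), NamedSharding(mesh, P("data", None)))
w = jax.device_put(jnp.ones((512, 512)), NamedSharding(mesh, P(None, None)))

@jax.jit
def layer(x, w):
    # XLA inserts whatever collectives the input/output shardings imply.
    return jnp.tanh(x @ w)

y = layer(x, w)
print(y.shape, y.sharding)  # sharding propagates through the jit'd function
```

The appeal for research code is that changing how a model is partitioned is a one-line PartitionSpec edit rather than a rewrite.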


MatX reposted

very interesting observation re: aggregation / disaggregation dynamics for startups: "Inside of Google, there were lots of people who wanted changes to the chips for all sorts of things, and it was difficult to focus just on LLMs"


MatX reposted

Pleased to be investing in MatX, building AI chips with breakthrough capability: bloomberg.com/news/articles/…


MatX reposted

Given exponential increase in training costs, compute multipliers might become the most coveted secrets on earth. Some of those will be in torch.nn; many will be in silicon.

