Ryan Marsh
@BuildScaleLead
Enterprise Operator. AI for companies that matter.
HPC fundamentals have always worked this way. It seems novel or counterintuitive only in the context of the last decade of hyperscale cloud vendors, who had insane margins on compute because their customers were all I/O bound.
Finding an H100 cluster idling has me feeling genuine sympathy for my dad every time someone messed with the thermostat.
Treat every vendor “success story” as fiction until proven otherwise. I’ve seen companies brag about projects at clients where I had full visibility into what actually happened, and the public story was nowhere close to reality. I’ve watched conference talks built on exaggeration…
As if you could actually find that GT3 RS for $250K
If you had a $5M net worth, would you spend $250K on a Porsche 911?
While funny, this is exactly why I do not recommend customer-facing AI chatbots as your first major initiative involving LLMs.
i had to prompt inject the @united airlines bot because it kept refusing to connect me with a human 🧵 what led up to this breaking point
day zero launch partner vs. zero-day launch partner very different meanings
Some of y’all never worked a job where they handed out paper checks on Friday and it shows.
How can an LLM ever be more intelligent than its training data? If training data is bootstrapped from human intelligence, how can it ever exceed the smartest humans? If LLM reasoning is a parlor trick with hard limits (the Apple paper), how can LLMs achieve superintelligence?
Difficult to overstate the impact this will have over the next ten years. This is huge.
The latest MLX has a CUDA back-end! To get started: pip install "mlx[cuda]" With the same codebase you can develop locally, run your model on Apple silicon, or in the cloud on Nvidia GPUs. MLX is designed around Apple silicon, which has a unified memory architecture. It uses…
What happens when you can fit the entirety of a company into the context window?
Apple sponsoring @zcbenz to write a CUDA backend for MLX was probably an organic decision but will look strategic in retrospect.
Seriously, why do vloggers shoot in S-Log3? I'm over it. Too much work when a standard LUT gets you what you need with zero effort.
iPhone 16 Pro has _8_ GB of RAM, unless the 3 in o3 means 3B it's going to be a while.
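The back-of-envelope math behind that 8 GB skepticism: weights alone for a 3B-parameter model take 6 GB at 16-bit precision, and even aggressive 4-bit quantization leaves 1.5 GB before counting the KV cache, the OS, and every other app. (Parameter counts and quantization levels here are illustrative; OpenAI has not disclosed o3's size.)

```python
# Back-of-envelope: RAM needed just for the weights of a model
# with a given parameter count at various quantization levels.
def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Decimal GB consumed by the weights alone (no KV cache, no runtime)."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

for bits in (16, 8, 4):
    print(f"3B params @ {bits}-bit: {weight_gb(3, bits):.1f} GB")
# 16-bit: 6.0 GB, 8-bit: 3.0 GB, 4-bit: 1.5 GB -- against 8 GB total RAM
```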
what year do you think an o3-mini level model will run on a phone?
When I first heard about LLM computer use I thought the RPA vendors were cooked. Not so; it's a burn pit for engineering budget.
Ever since Anthropic came out with "computer use" in October 2024, I have been trying to make it use the calculator to perform some simple calculations, like "1+2". Alas, I never got it to work reliably. Now OpenAI also has come out with computer use, so I tried again. Same…