
Ryan Marsh

@BuildScaleLead

Enterprise Operator. AI for companies that matter.

Pinned

HPC fundamentals have always worked this way. It seems novel or counterintuitive only in the context of the last decade of hyperscale cloud vendors, who had insane margins on compute because their customers were all I/O-bound.

Jensen Huang says that Nvidia’s full AI infrastructure package (chip, networking, data centre) is so efficient that competitors can price their chips at $0 and Nvidia would still be the better option.



Finding an H100 cluster idling has me feeling genuine sympathy for how my dad felt every time someone messed with the thermostat.


Treat every vendor “success story” as fiction until proven otherwise. I’ve seen companies brag about projects at clients where I had full visibility into what actually happened, and the public story was nowhere close to reality. I’ve watched conference talks built on exaggeration…


Capital allocation is a harsh mistress


As if you could actually find that GT3 RS for $250K

If you had a $5M net worth, would you spend $250k on a Porsche 911?



While funny, this is exactly why I do not recommend customer-facing AI chatbots for your first major initiative involving LLMs.

i had to prompt inject the @united airlines bot because it kept refusing to connect me with a human 🧵 what led up to this breaking point



Day-zero launch partner vs. zero-day launch partner: very different meanings.


Some of y’all never worked a job where they handed out paper checks on Friday and it shows.


How can an LLM ever be more intelligent than its training data? If training data is bootstrapped from human intelligence, how can it ever exceed the smartest humans? If LLM reasoning is a parlor trick with hard limits (the Apple paper), how can LLMs achieve superintelligence?


Grok 4 is so far beyond GPT5 it’s embarrassing.


Difficult to overstate the impact this will have over the next ten years. This is huge.

The latest MLX has a CUDA back-end! To get started:

pip install "mlx[cuda]"

With the same codebase you can develop locally, run your model on Apple silicon, or in the cloud on Nvidia GPUs. MLX is designed around Apple silicon - which has a unified memory architecture. It uses…



What happens when you can fit the entirety of a company into the context window?


Apple sponsoring @zcbenz to write a CUDA backend for MLX was probably an organic decision but will look strategic in retrospect.


Seriously, why do vloggers shoot in S-Log3? I’m over it. Too much work when a standard LUT gets you what you need with zero effort.


Ryan Marsh reposted

DSPy is pure magic


iPhone 16 Pro has _8_ GB of RAM; unless the 3 in o3 means 3B, it’s going to be a while.

what year do you think an o3-mini level model will run on a phone?
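The RAM argument above is just arithmetic: weight memory is parameter count times bytes per parameter. A back-of-envelope sketch (the parameter counts and quantization levels here are illustrative assumptions, not published figures for o3-mini):

```python
# Back-of-envelope: weight memory = parameters * bytes per parameter.
# Model sizes below are hypothetical examples, not known o3-mini sizes.
def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a quantized model."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

PHONE_RAM_GB = 8  # iPhone 16 Pro

for params in (3, 8, 70):      # hypothetical sizes, in billions
    for bits in (16, 4):       # fp16 vs 4-bit quantization
        gb = weights_gb(params, bits)
        verdict = "fits" if gb < PHONE_RAM_GB else "too big"
        print(f"{params}B @ {bits}-bit: {gb:.1f} GB ({verdict})")
```

A 3B model quantized to 4 bits needs only ~1.5 GB of weights, so it could fit today; anything much larger than 8B leaves no headroom for the OS, KV cache, and activations.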



Pioneer has their own TLD? How??? global.pioneer/en/



When I first heard about LLM computer use I thought the RPA vendors were cooked. Not so, it’s a burn pit for engineering budget.

Ever since Anthropic came out with "computer use" in October 2024, I have been trying to make it use the calculator to perform some simple calculations, like "1+2". Alas, I never got it to work reliably. Now OpenAI also has come out with computer use, so I tried again. Same…


