
sisyphus bar and grill

@itunpredictable

blog poaster @amplifypartners @readtechnically

AI researchers in San Diego this week


Yesterday Antithesis announced their $105M Series A (yea!). A few weeks ago I worked with their team to write a beginner's guide to Deterministic Simulation Testing, the technology behind their product. Coincidence? Probably.

The most interesting thing going on outside of AI today is Deterministic Simulation Testing. Among systems/infra people this is well understood to be a major technological shift; it's how FoundationDB and @TigerBeetleDB shortcut the usual 10-year DB buildout. The basic idea…
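Since the thread gets cut off, here's a toy sketch of the basic idea (my own illustration, not Antithesis's actual engine): route every source of nondeterminism through one seeded PRNG, so the whole system runs inside a single deterministic simulation and any failing run replays exactly from its seed.

```python
import random

def simulate(seed: int) -> list[str]:
    rng = random.Random(seed)   # the ONLY source of randomness
    clock = 0.0
    trace = []
    inflight = [("put", "k1"), ("put", "k2"), ("get", "k1")]
    while inflight:
        # The simulator, not the OS scheduler, picks delivery order.
        op, key = inflight.pop(rng.randrange(len(inflight)))
        clock += rng.uniform(0.001, 0.050)   # simulated network latency
        if rng.random() < 0.2:               # injected fault: drop & retry
            trace.append(f"{clock:.3f} DROP {op} {key}")
            inflight.append((op, key))
        else:
            trace.append(f"{clock:.3f} OK   {op} {key}")
    return trace

# Same seed, same universe: interleaving, latencies, and faults all
# replay exactly, which is what makes bugs found this way reproducible.
assert simulate(42) == simulate(42)
```

Real DST engines virtualize far more than this (threads, clocks, disks, syscalls), but the reproducibility property is the same: the seed is the whole universe.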



Whether this launch goes well or not, I'll always have Parth.


UNDERSTAND AI OR DIE TRYING. Announcing the AI Reference, the best, fastest, and free-est way to get smart on the fundamentals of AI models and how they work: stuff like RAG, RLHF, context, and pre-training. It's totally free and you can dive in here: technically.dev/ai-reference



sisyphus bar and grill reposted

AI today is kind of like a magic eight ball. The AI Reference breaks it open. But instead of weird sludgy ink, it's useful breakdowns of common AI concepts like pre-training and RAG.

[quoted: the AI Reference announcement above]



sisyphus bar and grill reposted

With the release of Deepseek 3.2 yesterday + Olmo 3 two weeks ago, people are talking about scaling RL again. One underappreciated hardware detail: how do you maximize hardware efficiency during RL training between generators and trainers? For many top labs, as much as 90%…
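The thread is cut off above, but the gist lends itself to a sketch. A hedged toy illustration (my own, not any lab's actual stack) of the generator/trainer decoupling: rollout generation and gradient steps run concurrently on separate workers, connected by a bounded queue so neither pool of hardware idles.

```python
import queue
import threading
import time

# Bounded queue decouples the two pools; maxsize also bounds how
# stale (off-policy) a rollout can get before it's trained on.
rollouts: queue.Queue = queue.Queue(maxsize=8)

def generator(worker_id: int, steps: int) -> None:
    for step in range(steps):
        time.sleep(0.01)                  # stand-in for LLM inference
        rollouts.put((worker_id, step))   # blocks only if trainer lags

def trainer(total_batches: int) -> None:
    for _ in range(total_batches):
        batch = rollouts.get()            # blocks only if generators lag
        time.sleep(0.005)                 # stand-in for a gradient step
        print("trained on rollout", batch)

gens = [threading.Thread(target=generator, args=(i, 4)) for i in range(2)]
train = threading.Thread(target=trainer, args=(8,))
for t in gens + [train]:
    t.start()
for t in gens + [train]:
    t.join()
```

The queue's maxsize is the interesting knob: bigger keeps generation hardware busier, smaller keeps rollouts closer to on-policy.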


Comrade, there are rumblings that you are not sufficiently bitter lesson pilled.


sisyphus bar and grill reposted

Rich Sutton keynote at neurips right now


sisyphus bar and grill reposted

YOU'RE GOD DAMN RIGHT I ORDERED THE CODE RED Son, we live in a city that is zoned single family and those houses have to be bought with secondaries. Who's going to sell those secondaries? You? You, lieutenant roon? I have a greater responsibility than you can possibly fathom.…


sisyphus bar and grill reposted

1/ NeurIPS is this week. On my way to San Diego, I’ve been revisiting last year’s talks and noticing how much they shaped my investing. The loudest takeaway then was that ASR, transcription, and voice AI were basically “solved.”

Gradium is out of stealth to solve voice. We raised $70M, and after only 3 months we’re releasing our transcription and synthesis products to power the next generation of voice AI.



sisyphus bar and grill reposted

really enjoyed chatting with @itunpredictable about scaling RL and some of the issues we ran into on Olmo 3.


sisyphus bar and grill reposted

Had a great chat with Justin about scaling RL!

[quoted: the scaling RL thread above]



sisyphus bar and grill reposted

Maximizing generation efficiency during RL training:

[quoted: the scaling RL thread above]


