Saurabh Bhatnagar

@analyticsaurabh

http://tora.bohita.com First Fashion Recommendation ML @ Rent The Runway 🦄, Founded ML at Barnes & Noble. Past: @Virevol, Unilever, HP, ...

Pinned Tweet

I just published “How I scaled Machine Learning to a Billion dollars: Strategy” medium.com/p/how-i-scaled…


Moondream is the best model in its class, default for fast and good.

"Some people asked me, why I didn't use Gemini. I tried to use it but the detection wasn't that good. Moondream was the best at this."



You don’t have to do this alone. Be a one-man army with Tora.

everyone loves the idea of being a solo founder until you're six months in, with no team, building till 4am, and you realize you also need to do GTM and get users...



Saurabh Bhatnagar reposted

The future of AI doesn't have to break the bank and destroy the environment to reach AGI!

Costs to reproduce:
* ARC-AGI-1 Public: 9h 52m * 2x8H100 * $8/hour = $157.86
* ARC-AGI-1 Semi-private: 11h 23m * 2x8H100 * $8/hour = $176.38
* ARC-AGI-2 Public: 9h 35m * 3x8H100 * $8/hour = $216.58
* ARC-AGI-2 Semi-private: 10h 30m * 3x8H100 * $8/hour = $252
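The line items above multiply wall-clock hours by node count and the quoted hourly rate; a minimal sketch, assuming the $8/hour is per 8xH100 node (the only reading under which the totals come out near the quoted figures — not every listed duration reproduces its total exactly):

```python
# Cost = (hours + minutes/60) * nodes * rate, with $8/hour assumed per 8xH100 node.
def run_cost(hours, minutes, nodes, rate_per_node_hour=8.0):
    return (hours + minutes / 60) * nodes * rate_per_node_hour

# First entry: 9h 52m on 2 nodes comes to ~$157.87, matching the quoted $157.86
# up to rounding. Last entry: 10h 30m on 3 nodes is exactly $252.
arc1_public = run_cost(9, 52, nodes=2)
arc2_semi = run_cost(10, 30, nodes=3)
```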



Bites taken out of me by both Veo and Sora today. Warrants a whiskey. But I have something up the sleeve.


Every research lab will get these, even if you have providers. Such a no-brainer.

the first demo was to use ICARE (arxiv.org/abs/2508.02808) on DGX Spark to compare radiology notes in a semantically meaningful way. this demo shows the potential of local supercomputers on clinicians' desks. (3/6)



I can’t understand how stupid people can be when they cheer for ‘their’ side. I wonder if I’m stupid too.


This is great. Very excited for phase 2.

Saurabh Bhatnagar reposted

The word of 2025 AFAICT is "grind". But it's not meant to be a grind. If you don't find solving problems with code fun and interesting, then you might want to try a different career. Or maybe you need a break! (I love it, and it never feels like a grind.)


Field day: went viral on Sora yesterday. Made hotdog a trend (sorry!). Tested a few important things about recommendation systems. Theory is nice, but you have to play the game to see it. Now hibernating…


Build a system. Don't guess.


When the work is beautiful and coherent, it’s always only one person.

Not a team; the work of one excellent design engineer, @roozm. We’re growing the team, though: neuralink.com/careers/apply/…



Two, actually, with a different hook (it decided), because you don't know what will work.

This is what it launched. Will it win every time? It should do better, but it doesn't matter. Will we learn? Absolutely.




This is Tora, which launches consistent, on-brand video campaigns. Consistent characters, beautifully audio-mixed, etc. Then it analyzes feedback, and we iterate on it using itself. This is one small but necessary part of…


Saurabh Bhatnagar reposted

To generate 1K tokens, an LLM with 100 layers needs to go through 100 * 1,000 = 100,000 layers. To generate any number of tokens, TRM does 16 reasoning steps of 2*7*3 = 42 depth each, for a total of 16*42 = 672 layers.
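The depth comparison above can be checked directly. A sketch under the tweet's own assumptions (the 2*7*3 factorization of TRM's per-step depth is taken at face value; the variable names are illustrative, not from the TRM paper):

```python
# Autoregressive LLM: one full forward pass per generated token,
# so sequential depth grows linearly with output length.
llm_layers, tokens = 100, 1_000
llm_depth = llm_layers * tokens          # 100 * 1,000 = 100,000

# TRM (per the tweet): a fixed budget of 16 reasoning steps,
# each of depth 2*7*3 = 42, independent of how many tokens come out.
trm_steps = 16
trm_step_depth = 2 * 7 * 3               # 42
trm_depth = trm_steps * trm_step_depth   # 16 * 42 = 672
```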


Full ML loops

Omohundro drives: the companies that focus on self-improvement win. The vast majority of compute will continue to be used on AI progress. Even a major mathematical or scientific discovery is an interim product to raise more capital for more self-improvement.


