i think LLMs are obviously not conscious because there was no selection pressure for them to be, but rather to mimic byproducts of consciousness. humans are conscious because it was evolutionarily useful for us to be
we’re building an (o)pen (s)uperintelligence (s)tack
OpenAI released a new paper: "Why language models hallucinate." Simple answer: LLMs hallucinate because training and evaluation reward guessing instead of admitting uncertainty. The paper puts this on a statistical footing with simple, test-like incentives that reward confident…
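A minimal sketch of the incentive being described, assuming a toy exam-style scoring rule (the paper's treatment is more formal): under accuracy-only grading, a guess with any nonzero chance of being right beats abstaining, whereas a penalty for wrong answers makes "I don't know" rational at low confidence.

```python
# Toy illustration (not from the paper): why accuracy-only grading
# rewards guessing over admitting uncertainty.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    # Score 1 for a correct answer, -wrong_penalty for a wrong one.
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN = 0.0  # "I don't know" scores zero under exam-style grading

for p in (0.1, 0.3, 0.5):
    no_penalty = expected_score(p, wrong_penalty=0.0)  # accuracy-only eval
    penalized = expected_score(p, wrong_penalty=1.0)   # wrong answers cost
    print(f"p={p}: guess (accuracy-only)={no_penalty:+.2f}, "
          f"guess (penalized)={penalized:+.2f}, abstain={ABSTAIN:+.2f}")
```

Under the accuracy-only column, guessing strictly dominates abstaining for every p > 0, which is the misaligned incentive the tweet summarizes.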
In the era of pretraining, what mattered was internet text. You'd primarily want a large, diverse, high-quality collection of internet documents to learn from. In the era of supervised finetuning, it was conversations. Contract workers are hired to create answers for questions, a bit…
Introducing the Environments Hub. RL environments are the key bottleneck to the next wave of AI progress, but big labs are locking them down. We built a community platform for crowdsourcing open environments, so anyone can contribute to open-source AGI.
I'm indorsing Ali as a rockstar developer. **This Indorsement was enabled by indorse.us**
It’s been 3 months since OpenAI released the Agents SDK, and developers have built incredible things with it. Here are the best demos/projects built with the OpenAI Agents SDK (#20 is my fav) 🧵 p.s. try @AgentOpsAI as a tracing provider for the Agents SDK (save for later)
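For readers who haven't tried the SDK, a hello-world sketch based on its published quickstart (assumes `pip install openai-agents` and an `OPENAI_API_KEY` in the environment; the prompt text is illustrative):

```python
# Minimal Agents SDK hello-world, following the quickstart pattern.
from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
)

# Run the agent synchronously on a single input and print its answer.
result = Runner.run_sync(agent, "Write a haiku about tracing.")
print(result.final_output)
```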
"deep learning ... is the science of modelling functions and probability distributions in very high dimensions." - simon prince
sufficiently complex circuits need to go in the damn weights
i'm bullish on autoregressive language models but i'm fairly bearish on skill acquisition via pure in-context learning
Now would be such a good time to launch Libra
I have a few budget-neutral strategies for acquiring additional bitcoin...
BREAKING: DeepSeek just let the world know they make $200M/yr at a 500%+ profit margin.
Revenue (/day): $562k
Cost (/day): $87k
Revenue (/yr): ~$205M
This is all while charging $2.19/M tokens on R1, ~25x less than OpenAI o1. If this was in the US, this would be a >$10B company.
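A quick sanity check of the tweet's arithmetic, using only its own rounded figures:

```python
# Verify the annualization and the margin claim from the stated daily numbers.
revenue_per_day = 562_000  # USD, as stated in the tweet
cost_per_day = 87_000      # USD, as stated in the tweet

annual_revenue = revenue_per_day * 365
margin = (revenue_per_day - cost_per_day) / cost_per_day

print(f"annual revenue ~ ${annual_revenue / 1e6:.0f}M")  # ~$205M
print(f"profit margin ~ {margin:.0%}")                   # ~546%, i.e. 500%+
```

Both headline numbers check out against the daily figures.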
What is the analogue of next-token prediction for reinforcement learning? To get true generality, you want to be able to convert everything in the world to an environment+reward for training.
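One way to make "convert everything to an environment+reward" concrete is the standard Gym-style interface; this is a generic sketch with illustrative names, not anything from the tweet:

```python
# Generic environment+reward abstraction: anything exposing reset/step
# (a game, a browser, a compiler, a test harness) trains with one loop.
from typing import Any, Callable, Protocol

class Environment(Protocol):
    def reset(self) -> Any:
        """Begin an episode; return the initial observation."""

    def step(self, action: Any) -> tuple[Any, float, bool]:
        """Apply an action; return (observation, reward, done)."""

def rollout(env: Environment, policy: Callable[[Any], Any]) -> float:
    # One episode: feed observations to the policy, accumulate reward.
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total
```

The generality claim in the tweet amounts to saying this interface is as universal for RL as "predict the next token" is for pretraining.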
I don't have too much to add on top of this earlier post on V3 and I think it applies to R1 too (which is the more recent, thinking equivalent). I will say that Deep Learning has a legendary ravenous appetite for compute, like no other algorithm that has ever been developed…
DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M). For reference, this level of capability is supposed to require clusters of closer to 16K GPUs, the ones being…
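A back-of-envelope check that the stated cluster size, duration, and budget are mutually consistent. The ~$2/GPU-hour rental rate is my assumption; the GPU count and duration come from the tweet:

```python
# Does 2048 GPUs for ~2 months plausibly cost ~$6M?
gpus = 2048
days = 60                # "2 months"
usd_per_gpu_hour = 2.0   # assumed cloud rental rate, not from the tweet

gpu_hours = gpus * days * 24
cost_usd = gpu_hours * usd_per_gpu_hour
# ~2.9M GPU-hours -> ~$5.9M, consistent with the claimed $6M
print(f"{gpu_hours / 1e6:.1f}M GPU-hours -> ${cost_usd / 1e6:.1f}M")
```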
“Solana is the SBF chain” No, motherfucker. It’s the President’s chain.