once a scientific paradigm has been established, improving SOTA is just a matter of puzzle-solving, and RL on LLMs is very good at solving puzzles
Dear NeurIPS reviewers, please be reminded to delete the GPT prompts next time :)
I'm writing up a blog post about Markdown streaming, because I think it is an interesting topic. I reckon there will be almost a dozen people who agree with me!
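Until the post is up, here is the crux in miniature: a renderer consuming a token stream cannot safely display constructs like code fences until the closing delimiter has arrived. A minimal sketch of one way to handle that; everything below is a hypothetical illustration, not the post's actual code:

```python
def stream_markdown(chunks):
    """Yield Markdown that is safe to render incrementally.

    Lines inside an unterminated ``` fence are buffered and flushed
    together once the closing fence arrives, so a renderer never sees
    half a code block. Hypothetical sketch, not the blog post's code.
    """
    partial = ""      # incomplete final line of the stream so far
    fenced = []       # lines held back inside an open code fence
    in_fence = False
    for chunk in chunks:
        partial += chunk
        while "\n" in partial:
            line, partial = partial.split("\n", 1)
            if line.strip().startswith("```"):
                if in_fence:
                    # Closing fence: the block is complete, flush it whole.
                    yield "\n".join(fenced + [line]) + "\n"
                    fenced = []
                else:
                    fenced = [line]
                in_fence = not in_fence
            elif in_fence:
                fenced.append(line)
            else:
                yield line + "\n"
    # End of stream: flush whatever is left, complete or not.
    tail = "\n".join(fenced + ([partial] if partial else []))
    if tail:
        yield tail

# e.g. three chunks that split a fenced block mid-delimiter:
for piece in stream_markdown(["# Hi\n``", "`py\nprint(1)\n``", "`\ndone\n"]):
    print(repr(piece))
```

A real renderer has to apply the same hold-back idea to partial emphasis markers, tables, and nested lists; the fence case is just the clearest example.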
Since it's summer, and more or less internship and tech interview season, I made all 30 chapters of my Machine Learning Q and AI book freely available for the summer: sebastianraschka.com/books/ml-q-and… Hope it’s helpful! Happy reading, and good luck if you are interviewing!
Hypixel SkyBlock Foraging is now available for public testing on the Hypixel Alpha Network! To thank you guys for being so patient, we are giving away 16,400 SkyBlock Gems (approx. $100 value on the Hypixel Network store 👀) to 5 winners! ⭐️ To enter, you must like and retweet…
🚀 Excited to share the most inspiring work I’ve been part of this year: "Learning to Reason without External Rewards" TL;DR: We show that LLMs can learn complex reasoning without access to ground-truth answers, simply by optimizing their own internal sense of confidence. 1/n
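A rough sketch of what "optimizing an internal sense of confidence" could look like as a scalar reward. The concrete choice below (mean KL divergence from a uniform distribution to the model's per-token distributions) is an assumption for illustration, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def self_confidence_reward(logits: torch.Tensor) -> torch.Tensor:
    """Score a generated sequence by the model's own confidence.

    logits: [seq_len, vocab] next-token logits for the generated tokens.
    Returns the mean KL(Uniform || p) over positions: peaked (confident)
    token distributions score high, flat (uncertain) ones near zero.
    Illustrative proxy only; the paper may define confidence differently.
    """
    log_p = F.log_softmax(logits, dim=-1)                # [seq, vocab]
    vocab = logits.size(-1)
    # KL(U || p) = -log(V) - (1/V) * sum_v log p_v, per position
    kl_uniform = -torch.log(torch.tensor(float(vocab))) - log_p.mean(dim=-1)
    return kl_uniform.mean()

logits = torch.randn(12, 32_000)   # e.g. 12 generated tokens, 32k vocab
print(self_confidence_reward(logits).item())
```

In an RL fine-tuning loop this scalar would stand in for the verifier or ground-truth reward, which is what makes the setup label-free.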
today my roommate, who was doing research in another field, handed in his resignation and quit research altogether. he reiterated that research is most probably a 24/7 job, where people just work because they love doing that work all day long, and the main incentive is always…
1/3 @geoffreyhinton once said that the future depends on some graduate student being suspicious of everything he says (via @lexfridman). He also said that it was impossible to find biologically plausible approaches to backprop that scale well: radical.vc/geoffrey-hinto….
By far the greatest source of anxiety in my life is the voice in my head saying I should be working right now. I should be building/coding/marketing/making videos/researching/experimenting/hunched over my laptop working. Ever since I started this solopreneur thing last year…
holy shit MIT researchers just turned skin cells directly into neurons without a stem cell intermediate, with a 100-fold efficiency boost, and they actually worked when transplanted into mouse brains 1/
The Pirate Bay co-founder Carl Lundstrom has died in a plane crash at age 64
This is one of the wildest git diffs I've ever seen
HOLY SHIT IT'S HAPPENING AI can now write genomes from scratch. Arc Institute and NVIDIA just published Evo-2, the largest AI model for biology, trained on 9.3 trillion DNA base pairs spanning the entire tree of life. it doesn’t just analyze genomes. it creates them 1/
Introducing deep-research - my own open source implementation of OpenAI's new Deep Research agent. Get the same capability without paying $200. You can even tweak the behavior of the agent with adjustable breadth and depth. Run it for 5 min or 5 hours, it'll auto adjust.
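For a sense of what "adjustable breadth and depth" could mean mechanically, here is a hypothetical sketch of the recursive loop; the helper functions are stubs standing in for real search and LLM calls, not deep-research's actual API:

```python
def web_search(query: str) -> str:
    return f"<results for {query!r}>"        # stub: imagine real search hits

def summarize(query: str, results: str) -> str:
    return f"summary of {query!r}"           # stub: imagine an LLM summary

def generate_subqueries(query: str, results: str, n: int) -> list[str]:
    return [f"{query} / follow-up {i}" for i in range(n)]  # stub

def research(query: str, breadth: int, depth: int) -> list[str]:
    """Fan out `breadth` follow-up queries per level and recurse `depth`
    levels, narrowing the fan-out as the agent goes deeper."""
    results = web_search(query)
    findings = [summarize(query, results)]
    if depth > 0:
        for sub in generate_subqueries(query, results, n=breadth):
            findings.extend(research(sub, max(1, breadth // 2), depth - 1))
    return findings

# breadth=3, depth=2 yields 1 + 3*(1 + 1) = 7 findings here
print(len(research("open source deep research agents", breadth=3, depth=2)))
```

The auto-adjusting run time falls out naturally: larger breadth and depth mean exponentially more search-and-summarize calls.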
🔬Research ideation is hard: After the spark of a brilliant initial idea, much work is still needed to develop it into a well-thought-out project by iteratively expanding and refining the initial idea and grounding it in relevant literature. How can we better support this?
either you accept you're not meant for more and you settle at the level you belong at (many such cases), or you reject that notion and do what is necessary to become more. there is literally nothing else to it
How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we look at the pretraining data and find evidence against this: Procedural knowledge in pretraining drives LLM reasoning ⚙️🔢 🧵⬇️
i understand you guys like lists
LeNet (1989)
LSTM (1997)
Deep Belief Networks (2006)
AlexNet (2012)
Word2Vec (2013)
GAN (2014)
VGG16/19 (2014)
InceptionNet (2014)
VAE (2014)
ResNet (2015)
U-Net (2015)
YOLO v1 (2015)
FastRCNN (2015)
FasterRCNN (2015)
Neural Style Transfer…
these are some computer vision papers that everyone must go through at least once:
1. ResNets: arxiv.org/pdf/1512.03385…
2. YOLO: arxiv.org/abs/1506.02640
3. DeConv: lxu.me/mypapers/dcnn_…
4. GAN: arxiv.org/abs/1406.2661
5. Unet: arxiv.org/abs/1505.04597
6. Focal Loss:…
Grok-3 just proved the Riemann hypothesis. We decided to pause its training to check its proof, and if the proof is correct, training won't be resumed, as the AI is deemed so smart that it becomes a danger to humanity.