Free YouTube channel for deep learning from UC Berkeley. Learn deep learning through end-to-end projects:
Good engineers never stop learning. Here are some newsletters curating important resources each week:
New paper from Nvidia's alignment team. These are always worth reading, right up there with the Llama reports for post-training insights. It focuses on different types of reward model training with HelpSteer2 data.
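For context, a common way to train a reward model on preference pairs (the kind of data HelpSteer2-style sets provide) is a Bradley-Terry pairwise loss. The sketch below is a generic illustration of that idea, not the paper's exact recipe; the backbone and batch fields are placeholders.

```python
# Minimal sketch of pairwise reward-model training (Bradley-Terry loss).
# Illustrative only: "backbone" and the (input_ids, attention_mask) tuples
# are placeholders, not the paper's actual setup.
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone                 # any encoder returning [batch, hidden]
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids, attention_mask)   # [batch, hidden]
        return self.value_head(hidden).squeeze(-1)          # one scalar reward per sequence

def pairwise_loss(model, chosen, rejected):
    # The preferred ("chosen") response should score higher than the rejected one.
    r_chosen = model(*chosen)        # chosen = (input_ids, attention_mask)
    r_rejected = model(*rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```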
Interesting work on reviving RNNs. arxiv.org/abs/2410.01201 -- in general the fact that there are many recent architectures coming from different directions that roughly match Transformers is proof that architectures aren't fundamentally important in the curve-fitting paradigm…
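My reading of the paper's minimal recurrent units is that the gates depend only on the current input, not on the previous hidden state, which is what lets training use a parallel scan. Below is a rough sequential sketch of that idea; the names and sizes are mine, and the real implementation would not loop over time.

```python
# Rough sketch of a minimal GRU-style cell in the spirit of arXiv:2410.01201.
# Gates are computed from the input alone, so there is no hidden-state
# dependence inside them; the sequential loop here is only for clarity.
import torch
import torch.nn as nn

class MinGRU(nn.Module):
    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        self.to_z = nn.Linear(d_in, d_hidden)   # update gate from input only
        self.to_h = nn.Linear(d_in, d_hidden)   # candidate state from input only

    def forward(self, x):                       # x: [batch, seq, d_in]
        h = torch.zeros(x.size(0), self.to_h.out_features, device=x.device)
        outs = []
        for t in range(x.size(1)):
            z = torch.sigmoid(self.to_z(x[:, t]))
            h_tilde = self.to_h(x[:, t])
            h = (1 - z) * h + z * h_tilde       # convex mix of old state and candidate
            outs.append(h)
        return torch.stack(outs, dim=1)
```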
There are three parts: 1. fitting as large a network and as large a batch size as possible onto the 10k/100k/1M H100s, by parallelizing and using memory-saving tricks; 2. communicating state between these GPUs as quickly as possible; 3. recovering from failures (hardware,…
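On part 1, two of the standard memory-saving tricks are gradient accumulation (to reach a large effective batch size with small micro-batches) and activation checkpointing. A minimal sketch, with the model, loader, and loss left as placeholders:

```python
# Minimal sketch of two memory-saving tricks: gradient accumulation and
# activation checkpointing. "model" and "loader" are placeholders.
import torch
from torch.utils.checkpoint import checkpoint

def train_epoch(model, loader, optimizer, accum_steps: int = 8):
    optimizer.zero_grad()
    for i, (x, y) in enumerate(loader):
        # Recompute activations during the backward pass instead of storing
        # them in the forward pass, trading compute for memory.
        out = checkpoint(model, x, use_reentrant=False)
        loss = torch.nn.functional.cross_entropy(out, y) / accum_steps
        loss.backward()                      # gradients accumulate across micro-batches
        if (i + 1) % accum_steps == 0:
            optimizer.step()                 # one optimizer step per effective batch
            optimizer.zero_grad()
```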
Big O Notation 101: The Secret to Writing Efficient Algorithms
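As a quick illustration of why the notation matters in practice, here is the same duplicate check written as an O(n²) pairwise scan and as an O(n) hash-set pass (example is mine, not from the linked piece):

```python
# Same task, two complexities: does a list contain any duplicates?

def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): one pass with average constant-time set lookups.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```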
I love @GoogleColab because I can do dirty pip installs and then delete the runtime once I am done. To do the same on my system:
* python -m venv .venv
* source .venv/bin/activate
* pip install <>
Do not tell me I am the only one!
This is a pretty awesome, simple step-by-step guide showing you how to build your own PyTorch (with a subset of ops supported), requiring just basic knowledge of C/C++/Python. towardsdatascience.com/recreating-pyt… The reason to walk through it is to better understand how some of the common…
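The core trick such walkthroughs build up to is reverse-mode autodiff over a small computation graph. A toy scalar version (mine, not the article's code) looks roughly like this:

```python
# Toy scalar autograd in the spirit of "build your own PyTorch" guides.
# Each Value records how to push gradients back to its inputs.

class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule node by node.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# Usage: d(x*y + x)/dx = y + 1 = 4 when y = 3
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad)  # 4.0
```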
So I noticed this on a security update. Once I uninstalled it, I got parity between battery and wall power; it should be fixed soon with an update.
I just got a copy of “Large Language Models: A Deep Dive.” I’ve been planning for a while to do just that with LLMs - delve deeper. ;) This book seems like an excellent, up-to-date (as much as that is possible these days) overview of this fascinating and important subject. Thanks…
The workings of RLHF. Had fun learning the core; can't wait to write about this. Preparing myself to decode Anthropic's 'Constitutional AI' paper.
🎊 It has arrived 🎊, the 2nd edition of my "Deep Generative Modeling" book. It has 100 new pages, 3 new chapters (incl. #LLMs) and new sections. It covers all deep generative models that constitute the core of all #GenerativeAI techs! Check it out: 💻tinyurl.com/mwj9dw83
Is that @OpenAI o1's missing secret? @GoogleDeepMind developed SCoRe, a multi-turn chain-of-thought online reinforcement learning (RL) approach to improve self-correction using entirely self-generated data. SCoRe achieves state-of-the-art self-correction, improving performance…
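The multi-turn shape of the idea, as I understand it: generate an answer, generate a revision of it, and reward the revision when it improves on the first attempt. A heavily simplified sketch; `generate` and `is_correct` are hypothetical stand-ins, and this is not the paper's actual two-stage training algorithm:

```python
# Heavily simplified sketch of a two-turn self-correction rollout.
# `generate` and `is_correct` are hypothetical stand-ins; SCoRe's real
# training (its stages and reward shaping) is more involved.

def self_correction_rollout(generate, is_correct, question):
    first = generate(question)
    second = generate(
        f"{question}\nYour previous answer was:\n{first}\n"
        "Review it and give a corrected final answer."
    )
    # Reward the final answer, with a bonus when the revision fixes a wrong first attempt.
    reward = float(is_correct(second)) + 0.5 * (is_correct(second) and not is_correct(first))
    return first, second, reward
```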
Comparison of Lunar Lake vs the Z1 Extreme in a handheld. The 4 Lion Cove P-cores plus 4 Skymont E-cores are almost 1.5-2x faster than the 8-core/16-thread Zen 4 at a low 30 W.
500 TB Tutorials + Books + Courses + Trainings + Workshops -Data science -Python -AI -Cloud -BIG DATA -Data Analytics -BI -Google Cloud Training -Machine Learning -Deep Learning -Ethical Hacking To get it just - Follow me - like & RT it - Comment "Free"
I am BEYOND EXCITED to publish our interview with Krista Opsahl-Ong (@kristahopsalong) from @StanfordAILab! 🔥 Krista is the lead author of MIPRO, short for Multi-prompt Instruction Proposal Optimizer, and one of the leading developers and scientists behind DSPy! This was such…
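The core loop optimizers in the MIPRO family run, as I understand it, is: propose candidate instructions, score the program with each candidate on a small train set, and keep the best. A conceptual sketch only; this is not the DSPy API, and every name below is a hypothetical stand-in:

```python
# Conceptual sketch of multi-prompt instruction optimization, not DSPy's API.
# `propose_instructions`, `run_program`, and `metric` are hypothetical stand-ins.

def optimize_instructions(propose_instructions, run_program, metric, trainset, n_candidates=16):
    best_instr, best_score = None, float("-inf")
    for instr in propose_instructions(n_candidates):       # candidate instructions
        score = sum(metric(run_program(instr, x), y) for x, y in trainset) / len(trainset)
        if score > best_score:                              # keep the best-scoring prompt
            best_instr, best_score = instr, score
    return best_instr, best_score
```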
You have 10000 coins. 9999 of them are fair; one is rigged so that it always lands on heads. You choose a coin at random and flip it 10 times; it’s heads all ten times. The coin is probably
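Working it out with Bayes' rule: the prior on having grabbed the rigged coin is 1/10000, while a fair coin produces ten heads with probability (1/2)^10 = 1/1024, so the posterior that the coin is rigged is only about 9%. The coin is probably still fair.

```python
# Posterior probability that the coin is rigged, given ten heads in a row.
prior_rigged = 1 / 10_000
p_heads10_rigged = 1.0              # the rigged coin always lands heads
p_heads10_fair = 0.5 ** 10          # = 1/1024

posterior_rigged = (prior_rigged * p_heads10_rigged) / (
    prior_rigged * p_heads10_rigged + (1 - prior_rigged) * p_heads10_fair
)
print(posterior_rigged)  # ~0.093, so the coin is probably fair
```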