
Mike Carroll

@mikecarroll_eng

Engineer. Previously @Facebook

Mike Carroll reposted


e_opore: 50 LLM Projects with Source Code to Become a Pro

1. Beginner-Level LLM Projects

→ Text Summarizer using OpenAI API
→ Chatbot for Customer Support
→ Sentiment Analysis with GPT Models
→ Resume Optimizer using LLMs
→ Product Description Generator
→ AI-Powered Grammar…
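
The first project on that list can be prototyped without any API at all. Below is a minimal, dependency-free extractive summarizer; the frequency-based scoring is a stand-in for illustration, not what the thread's OpenAI-API version would use:

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Naive extractive summarizer: score each sentence by the corpus-wide
    frequency of its words, then keep the top scorers in original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r'[a-z]+', sentences[i].lower())),
    )
    keep = sorted(ranked[:max_sentences])  # restore document order
    return ' '.join(sentences[i] for i in keep)
```

Swapping this scoring function for a single chat-completion call is essentially the whole "using OpenAI API" variant of the project.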

Mike Carroll reposted


Hesamation: I left my plans for the weekend to read this recent blog from HuggingFace 🤗 on how they maintain the most critical AI library: transformers.

→ 1M lines of Python,
→ 1.3M installations,
→ thousands of contributors,
→ a true engineering masterpiece, 

Here's what I learned:…

Mike Carroll reposted


karpathy: Excited to release new repo: nanochat! (it's among the most unhinged I've written).

Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single,…

Mike Carroll reposted


NTFabiano: You're not depressed, you just lost your quest.

Mike Carroll reposted


victor_explore: 🔥 Free Google Colab notebooks to implement every Machine Learning algorithm from scratch. Link in comment
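
For a taste of what "from scratch" means for the simplest of these algorithms, here is a sketch of linear regression fit by batch gradient descent in plain Python (the learning rate and epoch count are arbitrary choices for illustration):

```python
def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by minimizing mean squared error with gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2) w.r.t. w and b
        dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * dw
        b -= lr * db
    return w, b
```

On data generated by y = 2x + 1, this recovers the slope and intercept to within a small tolerance.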

Mike Carroll reposted

how i got here:
> i used to be and still tend towards having an obsessive/addictive personality
> put many years of my life into video games
> it was only 2 years ago i started to turn that around because i got other interests and started really looking forward to the future
>…


Mike Carroll reposted


d4rsh_tw: found a repo that has a massive collection of Machine Learning system design case studies used in the real world, from Stripe, Spotify, Netflix, Meta, GitHub, Twitter/X, and much more

link in replies

Mike Carroll reposted

Copy-pasting PyTorch code is fast — using an AI coding model is even faster — but both skip the learning. That's why I asked my students to write by hand ✍️. 🔽 Download: byhand.ai/pytorch After the exercise, my students can understand what every line really does and…
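
The spirit of the exercise, knowing what every line really does, can be shown with a forward and backward pass written entirely by hand for a one-parameter model (a toy sketch, not the byhand.ai material itself):

```python
def train_step(w, x, target, lr=0.1):
    """One hand-written forward/backward pass for y = w*x with squared loss.
    Each line is a chain-rule step that autograd would otherwise hide."""
    y = w * x                   # forward pass
    loss = (y - target) ** 2    # L = (y - t)^2
    dL_dy = 2 * (y - target)    # dL/dy
    dL_dw = dL_dy * x           # dL/dw = dL/dy * dy/dw, since dy/dw = x
    return w - lr * dL_dw, loss
```

Iterating this step drives w toward target/x, which is exactly what a framework's `loss.backward()` plus an SGD step would do for this model.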


Mike Carroll reposted

70 Python Projects with Source Code for Developers

Step 1: Beginner Foundations
→ Hello World Web App
→ Calculator (CLI)
→ To-Do List CLI
→ Number Guessing Game
→ Countdown Timer
→ Dice Roll Simulator
→ Coin Flip Simulator
→ Password Generator
→ Palindrome Checker
→…
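
Two of the beginner items above fit in a few lines each; one possible sketch of the Palindrome Checker and Password Generator:

```python
import secrets
import string

def is_palindrome(s):
    """Palindrome Checker: ignore case and non-alphanumeric characters."""
    cleaned = [c.lower() for c in s if c.isalnum()]
    return cleaned == cleaned[::-1]

def generate_password(length=16):
    """Password Generator: cryptographically random mix of characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```

Using `secrets` rather than `random` matters for the password project: `random` is predictable and not suitable for security-sensitive output.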


Mike Carroll reposted


elliotarledge: everything you need to get started in one repo

Mike Carroll reposted


akshay_pachaar: System prompts are getting outdated!

Here's a counterintuitive lesson from building real-world Agents:

Writing giant system prompts doesn't improve an Agent's performance; it often makes it worse.

For example, you add a rule about refund policies. Then one about tone. Then…
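
The alternative the tweet hints at, assembling only the instructions a request actually needs instead of one ever-growing prompt, might look roughly like this (the module texts and trigger keywords are hypothetical):

```python
# Hypothetical instruction modules; in a real agent these might live in
# files or a vector store and be retrieved per request.
MODULES = {
    "refunds": "Refunds are allowed within 30 days with a receipt.",
    "tone": "Be concise and friendly.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def build_system_prompt(user_message, base="You are a support agent."):
    """Keyword-gated assembly: include a module only when the request needs it."""
    triggers = {
        "refunds": ("refund", "return", "money back"),
        "shipping": ("ship", "deliver", "arrive"),
    }
    parts = [base, MODULES["tone"]]  # tone guidance always applies
    for name, keywords in triggers.items():
        if any(k in user_message.lower() for k in keywords):
            parts.append(MODULES[name])
    return "\n".join(parts)
```

A production agent would likely replace the keyword gate with retrieval or a router model, but the principle is the same: the prompt stays small because irrelevant rules never enter the context.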

Mike Carroll reposted

NetHack is the best benchmark you've never heard of. I'd say that it makes ALE look like a toy, but well... it is

Introducing Scalable Option Learning (SOL☀️), a blazingly fast hierarchical RL algorithm that makes progress on long-horizon tasks and demonstrates positive scaling trends on the largely unsolved NetHack benchmark, when trained for 30 billion samples. Details, paper and code in >



Mike Carroll reposted


a1zhang: it's insane to me how little attention the llm.q repo has

it's a fully C/C++/CUDA implementation of multi-gpu (zero + fsdp), quantized LLM training with support for selective AC

it's genuinely the coolest OSS thing I've seen this year (what's crazier is 1 person wrote it!)
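
The quantized-training idea at the heart of llm.q can be illustrated in miniature with a symmetric int8 round trip (a toy sketch in Python, not llm.q's actual C/CUDA scheme):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: scale by the max magnitude
    so the largest value maps to +/-127."""
    maxval = max(abs(v) for v in values) or 1.0  # avoid div-by-zero on all-zeros
    q = [round(v * 127 / maxval) for v in values]
    return q, maxval / 127  # scale needed to reconstruct

def dequantize(q, scale):
    """Recover approximate floats; error is bounded by about one scale step."""
    return [v * scale for v in q]
```

Storing weights or activations as int8 plus one float scale cuts memory roughly 4x versus float32, which is the core trade-off quantized training exploits.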

Mike Carroll reposted

Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea…

.@RichardSSutton, father of reinforcement learning, doesn’t think LLMs are bitter-lesson-pilled. My steel man of Richard’s position: we need some new architecture to enable continual (on-the-job) learning. And if we have continual learning, we don't need a special training…



As the divide between the super-rich and the rest widens, this strategy becomes increasingly relevant.


Great point that is missed by a lot of LLM “sellers”

I expect AI to produce a lot of value while trying to make a coherent body of knowledge out of its training data, but on its own, AI is limited to a sort of ancient Greek philosophical process, where the method is rhetoric, rather than the experimental process of science. This…



Mike Carroll reposted


him_uiux: Use this structure for your SaaS landing page

Thank me later

Mike Carroll reposted


tom_doerr: real-time multi-person pose detection for body, face, hands, and feet

Mike Carroll reposted


NVIDIAAI: ⏱️ From 10 hours to under 1 minute.

@ParaboleAI achieved a 1,000x speedup in industrial optimization by running causal AI on NVIDIA GH200 Grace Hopper + Gurobi.

This leap is enabling real-time, explainable decisions at massive scale.

🔗 nvda.ws/46nCtBH
