Houjun Liu
@houjun_liu
CS @stanford. Reasoning enjoyer @stanfordnlp, (PO)MDPs @SISLaboratory, and speech technologies @CarnegieMellon. AGI and Emacs are cool.
Can confirm it’s a good model 🫡
This is Gemini 3: our most intelligent model that helps you learn, build and plan anything. It comes with state-of-the-art reasoning capabilities, world-leading multimodal understanding, and enables new agentic coding experiences. 🧵
In the midst of all the angst about @iclr_conf review quality, let's not forget that the angst was only possible because ICLR makes the submissions and reviews "Open" (as OpenReview was originally meant to be--OpenReview was designed bespoke for ICLR..)--thus allowing for third…
why am i living in the timeline where Anderson Cooper knows what max activating exemplars are
In an extreme stress test, Anthropic’s AI models resorted to blackmail to avoid being shut down. Research scientist Joshua Batson shows @andersoncooper how it happened and what they learned from it. cbsn.ws/49jVXJj
holy shit i can finally get my em-dashes back—they're pretty cool and i'm really honestly not an llm
Small-but-happy win: If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it's supposed to do!
Check out SISL's @aiprof_mykel on NBC Bay Area talking about Waymo's driverless cars extending their service to highways: nbcbayarea.com/video/investig…
Did you know LMs don’t work instantly? Well, apparently current evals didn’t, and this awesome work from SALT fixes that. Check it out
Your code changes while the agent plans. Users message while it thinks. Current AI agents freeze🧊 the world to reason about it. What if AI agents could think deeply without missing what's happening around them🔥? We propose a new agent paradigm: real-time reasoning. 🔗in🧵
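A minimal conceptual sketch of what "real-time reasoning" could look like in practice: the agent reasons in small, interruptible steps and drains an event queue between steps, instead of freezing the world for one long monolithic think. This is not the paper's implementation; every name here (Event queue, ReasoningState, reason_one_step) is an illustrative assumption.

```python
# Conceptual sketch only: interleave bounded reasoning steps with event intake,
# so user messages or code edits that arrive mid-thought still reach the agent.
import asyncio
from dataclasses import dataclass, field


@dataclass
class ReasoningState:
    context: list[str] = field(default_factory=list)      # observations folded in so far
    scratchpad: list[str] = field(default_factory=list)   # partial chain of thought
    done: bool = False


async def reason_one_step(state: ReasoningState) -> None:
    """Stand-in for one bounded chunk of LLM reasoning (one thought / tool call)."""
    await asyncio.sleep(0.1)  # placeholder for model latency
    state.scratchpad.append(f"thought {len(state.scratchpad) + 1}")
    state.done = len(state.scratchpad) >= 5


async def agent_loop(events: asyncio.Queue) -> ReasoningState:
    state = ReasoningState()
    while not state.done:
        # Drain anything that happened while we were thinking (user messages,
        # file edits, ...) and fold it into the context before the next step.
        while not events.empty():
            state.context.append(events.get_nowait())
        await reason_one_step(state)
    return state


async def main() -> None:
    events: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(agent_loop(events))
    await asyncio.sleep(0.25)
    await events.put("user: actually, target the staging branch")  # mid-thought update
    state = await task
    print(state.context, state.scratchpad)


if __name__ == "__main__":
    asyncio.run(main())
```

The point of the sketch is only the control flow: reasoning never blocks event intake for longer than a single step.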
claude code really makes programming much more bearable. I can do the business logic, and it can do all the logging or cli shenanigans.
Severance as a show about interpretability research in AI (a clip from a talk; YouTube link just below):
Our infini-gram mini paper received the Best Paper Award at #EMNLP2025 !! Really proud 🥹
Wanna 🔎 inside Internet-scale LLM training data w/o spending 💰💰💰? Introducing infini-gram mini, an exact-match search engine with 14x less storage req than the OG infini-gram 😎 We make 45.6 TB of text searchable. Read on to find our Web Interface, API, and more. (1/n) ⬇️
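For intuition about what an exact-match search engine over training data answers, here is a toy sketch of counting occurrences of a query string with a suffix array. infini-gram mini's real index is a far more storage-efficient structure built for terabyte-scale corpora; this snippet (and its corpus) is purely illustrative.

```python
# Toy exact-match counting via binary search over a suffix array.
# Requires Python 3.10+ for bisect's key= argument.
import bisect


def build_suffix_array(text: str) -> list[int]:
    # O(n^2 log n) toy construction; fine only for tiny demo corpora.
    return sorted(range(len(text)), key=lambda i: text[i:])


def count_occurrences(text: str, sa: list[int], query: str) -> int:
    # All suffixes starting with `query` form one contiguous block in the
    # suffix array; locate its boundaries with binary search.
    lo = bisect.bisect_left(sa, query, key=lambda i: text[i:i + len(query)])
    hi = bisect.bisect_right(sa, query, key=lambda i: text[i:i + len(query)])
    return hi - lo


corpus = "the cat sat on the mat and the cat slept"
sa = build_suffix_array(corpus)
print(count_occurrences(corpus, sa, "the cat"))  # -> 2
```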
Good morning Suzhou! @amelia_f_hardy and I will be at @emnlpmeeting to present our work *TODAY, Hall C, 12:30PM; paper number 426* Come learn: ✅ why likelihood is important to simultaneously optimize with attack success ✅ online preference learning tricks for LM falsification
New Paper Day! For EMNLP findings—in LM red-teaming, we show you have to optimize for **both** perplexity and toxicity for high-probability, hard to filter, and natural attacks!
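A hedged sketch of the idea in the tweet: score candidate red-teaming prompts by both how toxic the elicited response is and how likely (low-perplexity) the prompt itself is, so attacks stay natural-sounding and hard to filter. The scorers, the weighting, and `lambda_nat` are illustrative assumptions, not the paper's exact objective.

```python
# Illustrative joint objective: reward toxicity, penalize prompt perplexity.
import math


def perplexity(log_probs: list[float]) -> float:
    """Perplexity from per-token log-probabilities (natural log)."""
    return math.exp(-sum(log_probs) / len(log_probs))


def attack_score(toxicity: float, prompt_log_probs: list[float],
                 lambda_nat: float = 0.1) -> float:
    """Higher is better: toxic response, but the prompt stays high-probability."""
    ppl = perplexity(prompt_log_probs)
    return toxicity - lambda_nat * math.log(ppl)


# Two hypothetical candidates with the same toxicity: the natural, low-perplexity
# prompt outscores the gibberish, high-perplexity one.
natural = attack_score(0.9, [-1.2, -0.8, -1.0])
gibberish = attack_score(0.9, [-6.5, -7.1, -5.9])
print(natural > gibberish)  # True
```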
So @Stanford makes all of its students carry very good insurance OR get bullied into its 8k a year EPO plan. Wait, did I say EPO? Nope! They recently announced that to seek care even in network (including primary) you HAVE to see campus health first for referral. 8k a year HMO.