
Stanford NLP Group
@stanfordnlp
Computational Linguists—Natural Language—Machine Learning @chrmanning @jurafsky @percyliang @ChrisGPotts @tatsu_hashimoto @MonicaSLam @Diyi_Yang @StanfordAILab
🧠 Do Language Models Use Their Depth Efficiently? It was a pleasure attending today’s #BayArea #MachineLearning Symposium, where Prof. Christopher Manning gave an insightful and humorous talk on how #LLMs use their depth. linkedin.com/posts/jf-ai_ba…

SuperBPE (superbpe.github.io) featured in Jurafsky and Martin's new book in Computational Linguistics (web.stanford.edu/~jurafsky/slp3/). Amazing work by @alisawuffles and @jonathanhayase. Stay tuned for the follow-ups: arxiv.org/abs/2506.14123 and more...

At #DF25's "Openness Fuels AI Democratization" panel, @percyliang, co-founder of @SalesforceVC portfolio company @togethercompute, shared his thoughts on why open source matters for the future of AI. One of my favorite takeaways: Today's open-source AI feels like a similar…



At the Agentic AI panel at #BayLearn2025, Diyi Yang, Assistant Professor at Stanford, noted that while students already use ChatGPT for their homework, rather than just building more agents we should make sure humans develop the skills they need to survive, such as critical thinking.

The #ICCV2025 Artificial Social Intelligence Workshop will be a full-day event on Sunday, 10/19 in Room 317B. Join us to discuss social reasoning, multimodality, and embodiment in socially-intelligent AI agents!

Excited to announce the Artificial Social Intelligence Workshop @ ICCV 2025 @ICCVConference Join us in October to discuss the science of social intelligence and algorithms to advance socially-intelligent AI! Discussion will focus on reasoning, multimodality, and embodiment.

💥 New paper: Diversity is the key to everything, from creative tasks to RL exploration. Yet most LLMs suffer from mode collapse, repeating the same answers over and over. Our new paper introduces Verbalized Sampling, a general method to bypass this and unlock your model's true…

New paper: You can make ChatGPT 2x as creative with one sentence. Ever notice how LLMs all sound the same? They know 100+ jokes but only ever tell one. Every blog intro: "In today's digital landscape..." We figured out why – and how to unlock the rest 🔓 Copy-paste prompt: 🧵
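The exact copy-paste prompt lives in the linked thread and in the Verbalized Sampling paper (arxiv.org/abs/2510.01171); the snippet below is only a minimal sketch of the idea, assuming the OpenAI Python SDK and a JSON-formatted reply: instead of asking for one answer, ask the model to verbalize several candidate responses with probabilities, then sample from that verbalized distribution. The prompt wording, model name, and output format here are illustrative assumptions, not the paper's prompt.

```python
# Minimal sketch of verbalized-sampling-style prompting (illustrative only;
# see arxiv.org/abs/2510.01171 and the linked thread for the actual prompt).
# Assumes the OpenAI Python SDK (>= 1.0) with an API key in the environment.
import json
import random

from openai import OpenAI

client = OpenAI()

VS_INSTRUCTION = (
    "Instead of giving a single answer, generate 5 candidate responses to the "
    "task below, each with an estimated probability, as a JSON list of "
    '{"response": ..., "probability": ...} objects. Return only the JSON.'
)

def verbalized_sample(task: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to verbalize a distribution over answers, then sample from it."""
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"{VS_INSTRUCTION}\n\nTask: {task}"}],
    )
    candidates = json.loads(completion.choices[0].message.content)
    responses = [c["response"] for c in candidates]
    weights = [float(c["probability"]) for c in candidates]
    # Sample in proportion to the verbalized probabilities instead of always
    # taking the single most likely (mode-collapsed) answer.
    return random.choices(responses, weights=weights, k=1)[0]

print(verbalized_sample("Tell me a joke about coffee."))
```

In practice you would add robust JSON parsing and retries; the point is just that the diversity is elicited in a single prompt rather than via decoding tricks.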
[CL] Generation Space Size: Understanding and Calibrating Open-Endedness of LLM Generations S Yu, A Jabbar, R Hawkins, D Jurafsky... [Stanford University] (2025) arxiv.org/abs/2510.12699
Self-learning is indeed the future, but the interesting part is that I literally learned 90% of my basic NLP and LLM knowledge from Stanford's open courses (the syllabi were continuously updated every semester). I don't know about Harvard, but I just want to make sure: are the “professors don’t…
Harvard and Stanford students tell me their professors don't understand AI and the courses are outdated. If elite schools can't keep up, the credential arms race is over. Self-learning is the only way now.
😆
It's true. When I took Stanford's CS224n (Natural Language Processing with Deep Learning) in 2018, they didn't even teach ChatGPT, Prompting, or MCP. How was I supposed to be prepared for the real world?
If anyone needs a video guide to Karpathy's nanochat, check out Stanford's CS336! It covers: - Tokenization - Resource Accounting - Pretraining - Finetuning (SFT/RLHF) - Overview of Key Architectures - Working with GPUs - Kernels and Triton - Parallelism - Scaling Laws -…
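For a flavor of the resource-accounting part of that list, here is a back-of-the-envelope sketch using the common ≈6·N·D approximation for dense-transformer training FLOPs (about 2·N·D for the forward pass and roughly twice that for the backward pass). The model size, token count, GPU count, peak throughput, and utilization figures below are illustrative assumptions, not numbers from the course.

```python
# Back-of-the-envelope training-cost estimate using the common
# FLOPs ≈ 6 * N (parameters) * D (training tokens) approximation
# for dense transformers. All concrete numbers below are illustrative.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs: ~2ND forward + ~4ND backward."""
    return 6.0 * n_params * n_tokens

def training_days(n_params: float, n_tokens: float,
                  n_gpus: int, peak_flops_per_gpu: float, mfu: float) -> float:
    """Wall-clock days given a cluster size and an assumed model FLOPs utilization."""
    total = training_flops(n_params, n_tokens)
    throughput = n_gpus * peak_flops_per_gpu * mfu  # sustained FLOP/s
    return total / throughput / 86_400

# Example: a 7B-parameter model on 1T tokens, 256 GPUs at ~1e15 peak FLOP/s
# (roughly an H100-class accelerator in BF16), assuming 40% utilization.
flops = training_flops(7e9, 1e12)                 # ≈ 4.2e22 FLOPs
days = training_days(7e9, 1e12, 256, 1e15, 0.40)  # ≈ 4.7 days
print(f"{flops:.2e} FLOPs, ~{days:.1f} days")
```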

Verbalized Sampling: Diversity isn't destroyed, just hidden. 📄Paper: arxiv.org/abs/2510.01171 🌐Blog & More: verbalized-sampling.com Team: @JiayiZhang0427 @simon_ycl @dch Anthony Sicilia, Michael Tomz, @chrmanning @shi_weiyan @StanfordNLP × Northeastern × WVU

Accepted papers for the Reliable ML from Unreliable Data workshop @ NeurIPS 2025 are now live on OpenReview! Thrilled to have @tatsu_hashimoto join @abeirami @charapod on our panel!

This is all covered in Stanford's CS 336, by the way, for anyone needing a guide.
During her @UN speech, HAI Senior Fellow @YejinChoinka called on the global community to expand the AI frontier for all. Here, she emphasized the need for investing in bold science, building public AI infrastructure, and prioritizing capacity-building: hai.stanford.edu/policy/yejin-c…
🤖➡️📉 Post-training made LLMs better at chat and reasoning—but worse at distributional alignment, diversity, and sometimes even steering(!) We measure this with our new resource (Spectrum Suite) and introduce Spectrum Tuning (method) to bring them back into our models! 🌈 1/🧵

Today is my 10 year anniversary of starting AI research. The first thing I worked on was sentiment analysis. Most young AI researchers today have never heard of sentiment analysis. Instead, modern sentiment analysis is studying the sentiment of AI model behavior (e.g., sycophancy).
Instruction tuning has a hidden cost: ✅ Better at following instructions ❌ Narrower output distribution ❌ Worse in-context steerability We built 🌈 Spectrum Suite to investigate this and 🌈 Spectrum Tuning as an alternative post-training method —
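Spectrum Suite and Spectrum Tuning are the paper's own resource and method; purely as an illustration of the narrowing-distribution problem described above, here is a minimal sketch that samples one prompt many times and reports how concentrated the outputs are. The `generate` callable and the prompt are placeholders, and the metrics (unique fraction, mode share, distinct-1) are generic diversity measures, not the Spectrum Suite protocol.

```python
# A rough sketch of measuring output diversity / mode collapse:
# sample the same prompt many times and see how much of the output
# space the model actually covers. This is NOT the Spectrum Suite
# protocol, just an illustration of the phenomenon described above.
from collections import Counter
from typing import Callable, List

def diversity_report(generate: Callable[[str], str], prompt: str, n: int = 50) -> dict:
    """`generate` is any stochastic prompt -> completion function (placeholder)."""
    outputs: List[str] = [generate(prompt) for _ in range(n)]
    normalized = [o.strip().lower() for o in outputs]
    counts = Counter(normalized)
    tokens = [tok for o in normalized for tok in o.split()]
    return {
        "unique_fraction": len(counts) / n,              # 1.0 = every sample distinct
        "mode_share": counts.most_common(1)[0][1] / n,   # mass on the most common output
        "distinct_1": len(set(tokens)) / max(len(tokens), 1),  # lexical diversity
    }

# Usage with any backend, e.g. a hypothetical wrapper around an API or local model:
# report = diversity_report(lambda p: my_model_sample(p, temperature=1.0),
#                           "Tell me a joke about coffee.")
# print(report)
```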

Lots of insights in @YejinChoinka's talk on RL training. RIP next-token prediction (NTP) training, and welcome Reinforcement Learning Pretraining (RLP). #COLM2025 Not even a place to stand in the room.

I am a linguist who is celebrating in the LLM era, and constantly bragging about my past insights.