
Stanford NLP Group

@stanfordnlp

Computational Linguists—Natural Language—Machine Learning @chrmanning @jurafsky @percyliang @ChrisGPotts @tatsu_hashimoto @MonicaSLam @Diyi_Yang @StanfordAILab

Stanford NLP Group reposted

@junfanzhu98: 🧠 Do Language Models Use Their Depth Efficiently? It was a pleasure attending today's #BayArea #MachineLearning Symposium, where Prof. Christopher Manning gave an insightful and humorous talk on how #LLMs use their depth. linkedin.com/posts/jf-ai_ba…

Stanford NLP Group reposted

@sewoong79: SuperBPE (superbpe.github.io) featured in Jurafsky and Martin's new book on computational linguistics (web.stanford.edu/~jurafsky/slp3/). Amazing work by @alisawuffles and @jonathanhayase. Stay tuned for the follow-ups: arxiv.org/abs/2506.14123 and more...
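For readers new to it: as I understand the SuperBPE paper, the trick is to run ordinary BPE within whitespace-delimited words first, then lift the whitespace restriction so later merges can form multi-word "superword" tokens. Below is a minimal toy sketch of that two-stage idea, not the authors' implementation; the `cross_whitespace_after` switch point is a knob invented for this demo.

```python
# Toy two-stage BPE: stage 1 merges only within words (ordinary BPE);
# stage 2 also allows merges across the "_" space marker (superwords).
from collections import Counter

def bpe_train(text, num_merges, cross_whitespace_after):
    seq = list(text.replace(" ", "_"))  # "_" marks word boundaries
    merges = []
    for step in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if step < cross_whitespace_after:
            # Ordinary BPE stage: skip any pair touching a word boundary.
            pairs = Counter({p: c for p, c in pairs.items()
                             if "_" not in p[0] + p[1]})
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                out.append(seq[i] + seq[i + 1])  # apply the merge
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return merges, seq

merges, toks = bpe_train("the cat sat on the mat the cat sat on the mat", 14, 7)
print(toks)  # late merges can yield multi-word tokens such as "the_cat"
```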

Stanford NLP Group reposted

@Spyzguyz: At #DF25's "Openness Fuels AI Democratization" panel, @percyliang, co-founder of @SalesforceVC portfolio company @togethercompute, shared his thoughts on why open source matters for the future of AI. One of my favorite takeaways: Today's open-source AI feels like a similar…

Stanford NLP Group reposted

@ZahraSaj: At the Agentic AI panel at #BayLearn2025, Diyi Yang, Assistant Professor at Stanford, noted that while students use ChatGPT for their homework, more important than building agents is making sure humans develop skills such as critical thinking in order to survive.

Stanford NLP Group reposted

@lmathur_: The #ICCV2025 Artificial Social Intelligence Workshop will be a full-day event on Sunday, 10/19 in Room 317B. Join us to discuss social reasoning, multimodality, and embodiment in socially intelligent AI agents!

@lmathur_: Excited to announce the Artificial Social Intelligence Workshop @ ICCV 2025 @ICCVConference. Join us in October to discuss the science of social intelligence and algorithms to advance socially intelligent AI! Discussion will focus on reasoning, multimodality, and embodiment.


Stanford NLP Group reposted

@simon_ycl: 💥 New paper! Diversity is the key to everything: creative tasks and RL exploration. Yet most LLMs suffer from mode collapse, always repeating the same answers. Our new paper introduces Verbalized Sampling, a general method to bypass this and unlock your model's true…

New paper: You can make ChatGPT 2x as creative with one sentence. Ever notice how LLMs all sound the same? They know 100+ jokes but only ever tell one. Every blog intro: "In today's digital landscape..." We figured out why – and how to unlock the rest 🔓 Copy-paste prompt: 🧵
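The thread's copy-paste prompt is truncated here, so the sketch below is a paraphrase of the general Verbalized Sampling recipe from the paper, not the authors' exact prompt: ask the model to verbalize several candidate responses with probabilities, then sample from that verbalized distribution rather than taking the single mode-collapsed top answer. The prompt wording and JSON schema are my own illustration.

```python
# Sketch of a Verbalized Sampling-style loop. The model verbalizes a small
# distribution over responses; we then sample one from it. The canned reply
# below stands in for a real LLM call.
import json
import random

def make_prompt(task, k=5):
    return (
        f"{task}\n\n"
        f"Generate {k} responses with their corresponding probabilities, "
        f"sampled from the full distribution of possible answers. "
        f'Reply with JSON: [{{"response": "...", "probability": 0.2}}, ...]'
    )

def pick_response(model_json):
    """Sample one response in proportion to its verbalized probability."""
    candidates = json.loads(model_json)
    weights = [c["probability"] for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]["response"]

fake_reply = json.dumps([
    {"response": "joke about typos", "probability": 0.5},
    {"response": "joke about semicolons", "probability": 0.3},
    {"response": "joke about tabs vs spaces", "probability": 0.2},
])
print(make_prompt("Tell me a joke about programming."))
print(pick_response(fake_reply))
```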



Stanford NLP Group reposted

@fly51fly: [CL] Generation Space Size: Understanding and Calibrating Open-Endedness of LLM Generations. S Yu, A Jabbar, R Hawkins, D Jurafsky... [Stanford University] (2025). arxiv.org/abs/2510.12699
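As a rough intuition for the title (my reading, not necessarily the paper's formal definition): the "generation space size" of a prompt can be probed by sampling repeatedly and applying a species-richness estimator to the distinct outputs. A toy sketch, with a simulated long-tailed generator standing in for an LLM:

```python
# Toy probe of how many distinct outputs a generator can produce, using the
# Chao1 species-richness lower bound over repeated samples.
import random
from collections import Counter

def sample_generator():
    # Long-tailed fake "LLM": a few favorite completions plus a rare tail.
    return f"completion-{int(random.paretovariate(1.0))}"

samples = [sample_generator() for _ in range(2000)]
counts = Counter(samples)
f1 = sum(1 for c in counts.values() if c == 1)  # outputs seen exactly once
f2 = sum(1 for c in counts.values() if c == 2)  # outputs seen exactly twice
chao1 = (len(counts) + f1 * f1 / (2 * f2)) if f2 else float("inf")
print(f"distinct outputs seen: {len(counts)}, Chao1 estimate: {chao1:.0f}")
```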

Stanford NLP Group reposted

Self-learning is indeed the future, but the interesting part is that I literally learned 90% of my basic NLP and LLM knowledge from Stanford's open courses (the syllabi were continuously updated every semester). I don't know about Harvard, but I just want to make sure: are the "professors don't…

Harvard and Stanford students tell me their professors don't understand AI and the courses are outdated. If elite schools can't keep up, the credential arms race is over. Self-learning is the only way now.



😆

It's true. When I took Stanford's CS224n (Natural Language Processing with Deep Learning) in 2018, they didn't even teach ChatGPT, Prompting, or MCP. How was I supposed to be prepared for the real world?



Stanford NLP Group reposted

@akshay_pachaar: If anyone needs a video guide to Karpathy's nanochat, check out Stanford's CS336! It covers:

- Tokenization
- Resource Accounting
- Pretraining
- Finetuning (SFT/RLHF)
- Overview of Key Architectures
- Working with GPUs
- Kernels and Triton
- Parallelism
- Scaling Laws
-…

Stanford NLP Group reposted

@shi_weiyan: Verbalized Sampling: Diversity isn't destroyed, just hidden. 📄 Paper: arxiv.org/abs/2510.01171 🌐 Blog & more: verbalized-sampling.com Team: @JiayiZhang0427 @simon_ycl @dch Anthony Sicilia, Michael Tomz, @chrmanning @shi_weiyan @StanfordNLP × Northeastern × WVU

Stanford NLP Group reposted

@AnayMehrotra: Accepted papers for the Reliable ML from Unreliable Data workshop @ NeurIPS 2025 are now live on OpenReview! Thrilled to have @tatsu_hashimoto join @abeirami @charapod on our panel!

Stanford NLP Group reposted

This is all covered in Stanford's CS336, by the way, for anyone needing a guide.


Stanford NLP Group reposted

During her @UN speech, HAI Senior Fellow @YejinChoinka called on the global community to expand the AI frontier for all. She emphasized the need for investing in bold science, building public AI infrastructure, and prioritizing capacity-building: hai.stanford.edu/policy/yejin-c…


Stanford NLP Group reposted

@ma_tay_: 🤖➡️📉 Post-training made LLMs better at chat and reasoning—but worse at distributional alignment, diversity, and sometimes even steering(!) We measure this with our new resource (Spectrum Suite) and introduce Spectrum Tuning (method) to bring them back into our models! 🌈 1/🧵

Stanford NLP Group reposted

Today is my 10-year anniversary of starting AI research. The first thing I worked on was sentiment analysis. Most young AI researchers today have never heard of sentiment analysis; instead, modern sentiment analysis studies the sentiment of AI model behavior (e.g., sycophancy).


Stanford NLP Group reposted

Instruction tuning has a hidden cost: ✅ Better at following instructions ❌ Narrower output distribution ❌ Worse in-context steerability We built 🌈 Spectrum Suite to investigate this and 🌈 Spectrum Tuning as an alternative post-training method —
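To make "distributional alignment" concrete, here is a toy probe of my own devising, not the paper's Spectrum Suite: ask a model to imitate a known target distribution, sample it repeatedly, and measure the KL divergence between the empirical and target distributions. A mode-collapsed post-trained model fails exactly this kind of test. The model call below is a stub:

```python
# Toy distributional-alignment probe: request uniform die rolls, then compare
# the empirical answer distribution to the target with KL divergence.
import math
import random
from collections import Counter

def sample_model(prompt: str) -> int:
    # Hypothetical stand-in for an LLM call; it favors one answer heavily,
    # mimicking a mode-collapsed post-trained model.
    return random.choices(range(1, 7), weights=[1, 1, 10, 1, 1, 1])[0]

def kl_to_uniform(samples, k=6):
    # KL(p_empirical || uniform) = sum_x p(x) * log(p(x) * k)
    n = len(samples)
    counts = Counter(samples)
    return sum((counts[face] / n) * math.log((counts[face] / n) * k)
               for face in range(1, k + 1) if counts[face] > 0)

rolls = [sample_model("Roll a fair six-sided die.") for _ in range(5000)]
print(f"KL(empirical || uniform) = {kl_to_uniform(rolls):.3f}")  # ≈ 0 iff aligned
```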


Stanford NLP Group reposted

@sivareddyg: Lots of insights in @YejinChoinka's talk on RL training. RIP next-token prediction (NTP) training, and welcome Reinforcement Learning Pretraining (RLP). #COLM2025. There was no place to even stand in the room.

Stanford NLP Group reposted

I am a linguist who is celebrating in the LLM era, and constantly bragging about my past insights.

