Stanford NLP Group
@stanfordnlp
Computational Linguists—Natural Language—Machine Learning @chrmanning @jurafsky @percyliang @ChrisGPotts @tatsu_hashimoto @MonicaSLam @Diyi_Yang @StanfordAILab
Heading to NeurIPS! Will be at posters for:
- RePS, a SoTA steering method (arxiv.org/abs/2505.20809)
- How LMs encode harmfulness and refusal (arxiv.org/abs/2507.11878)
Would be great to chat about updating priors (+ jobs!) on LM steering, pretraining auditing, and circuit tracing.
🎀 fine-grained, interpretable representation steering for LMs! meet RePS — Reference-free Preference Steering! 1⃣ outperforms existing methods on 2B-27B LMs, nearly matching prompting 2⃣ supports both steering and suppression (beat system prompts!) 3⃣ jailbreak-proof (1/n)
Our panel for the “Reliable ML from Unreliable Data” workshop is now set 🎙️ Very excited to have @abeirami, @ParikshitGopal1, @tatsu_hashimoto, and @charapod join us on Saturday, December 6th!
This post seems to describe substantially the same view that I offer here: web.stanford.edu/~cgpotts/blog/… Why are people describing the GDM post as concluding that mech-interp is a failed project? Is it the renaming of the field and constant talk of "pivoting"?
The GDM mechanistic interpretability team has pivoted to a new approach: pragmatic interpretability. Our post details how we now do research, why now is the time to pivot, why we expect this way to have more impact, and why we think other interp researchers should follow suit.
Also, big congratulations to @YejinChoinka on a NeurIPS 2025 Best Paper Award! (Especially clever making the paper the alphabetically first title among the awarded papers!) blog.neurips.cc/2025/11/26/ann…
ImpactRank says we’re #1 🥇 in #NLProc — so we think their methodology is sound! 😆 impactrank.org
CSRankings counts publication in top conferences to rank professors/universities. But this encourages researchers to pursue quantity rather than quality. We propose impactrank.org, a new university ranking system that tries to measure quality instead of quantity of…
mech interp is surely a field in Kuhnian crisis alignmentforum.org/posts/StENzDcD…
How do teachers actually manage the classroom? A new Stanford study analyzes 1,652 class transcripts using AI and NLP to measure how teachers use language to manage behavior and maintain order. A huge advance for observing these practices at…
Inspiring Talk from @Diyi_Yang on the importance of developing foundation models to augment humans. Join us at Room Don Alberto 1!
Structured prompting is the easiest way to boost LM performance across benchmarks:
- +4–5% accuracy on average
- up to +10–12% on reasoning tasks
- ~90% of the gains come just from adding CoT
- cuts variance by ~2–4%, so more stable outputs
@DSPyOSS, once again.
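For anyone wanting to try this, here is a minimal sketch of the kind of structured CoT prompting DSPy supports. The model name, signature, and example question are illustrative assumptions, not from the tweet:

```python
import dspy

# Illustrative model choice (assumption); swap in whatever LM you use.
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# Same task signature, with and without chain-of-thought.
plain = dspy.Predict("question -> answer")
cot = dspy.ChainOfThought("question -> answer")  # adds a reasoning step before the answer

q = "A train covers 60 miles in 1.5 hours. What is its average speed in mph?"
print(plain(question=q).answer)
print(cot(question=q).answer)  # the CoT variant also exposes .reasoning
```

Running both predictors over the same eval set is how you would check accuracy and variance deltas of the kind the tweet reports; the exact numbers will vary by task and model.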
Introducing the Artificial Analysis Openness Index: a standardized and independently assessed measure of AI model openness across availability and transparency. Openness is not just the ability to download model weights; it is also licensing, data, and methodology. We developed a…
🚀 DeepSeek V3.2 officially used our corrected KL regularization term in its training objective! On the Design of KL-Regularized Policy Gradient Algorithms for LLM Reasoning (arxiv.org/abs/2505.17508) See also tinker-docs.thinkingmachines.ai/losses It would be even better if they could…
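For context, this is the standard KL-regularized RL objective that work in this area builds on. The paper's contribution is a corrected regularization/gradient term, which is not reproduced here; see arxiv.org/abs/2505.17508 for the exact form:

```latex
\max_{\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\big[ r(x, y) \big]
\;-\;
\beta\, \mathbb{D}_{\mathrm{KL}}\!\big( \pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)
```

Here $\pi_{\mathrm{ref}}$ is the reference (pretrained) policy, $r$ the reward, and $\beta$ the regularization strength; the design question the paper studies is how to estimate the KL term so that its gradient is consistent with this objective.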
🚀 Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale — Reasoning-first models built for agents! 🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App, Web & API. 🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now. 📄 Tech…
Day 10 of becoming an LLM Engineer 🚀 Finished the CS224N lecture on Pretraining: 👉 subword tokenization (BPE) 👉 masked LM (BERT) 👉 span corruption (T5) 👉 pretraining → fine-tuning intuition 👉 decoder-only LM (GPT) 👉 in-context learning + chain-of-thought #AI #LLM #NLP
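Since BPE tops that list, here is a toy sketch of a single BPE merge step. This is the textbook algorithm in simplified form (word frequencies and the merge loop are illustrative), not CS224N's actual code:

```python
from collections import Counter

def bpe_merge_step(corpus):
    """Find the most frequent adjacent symbol pair and fuse it everywhere.
    Toy simplification: symbols are space-separated within each word."""
    pairs = Counter()
    for word, freq in corpus.items():
        syms = word.split()
        for a, b in zip(syms, syms[1:]):
            pairs[(a, b)] += freq
    best = max(pairs, key=pairs.get)          # most frequent adjacent pair
    bigram, fused = " ".join(best), "".join(best)
    return {w.replace(bigram, fused): f for w, f in corpus.items()}, best

# Word frequencies, with words pre-split into characters
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
corpus, merged = bpe_merge_step(corpus)
print(merged)  # ('e', 's'): 'es' becomes a new subword symbol
print(corpus)  # {'l o w': 5, 'l o w e r': 2, 'n e w es t': 6, 'w i d es t': 3}
```

Repeating this step until a target vocabulary size is reached yields the BPE merge table used by subword tokenizers.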
My heart goes out to @iclr_conf organizers who are putting up a valiant fight to restore review integrity in the face of the @openreview leak. Organizing conferences is always a labor of love, and doing so for mega AI conferences in the midst of a massive security leak is…