Stanford NLP Group

@stanfordnlp

Computational Linguists—Natural Language—Machine Learning @chrmanning @jurafsky @percyliang @ChrisGPotts @tatsu_hashimoto @MonicaSLam @Diyi_Yang @StanfordAILab

Stanford NLP Group reposted

heading to neurips, will be at posters for
- RePS, a SoTA steering method (arxiv.org/abs/2505.20809)
- How LMs encode harmfulness and refusal (arxiv.org/abs/2507.11878)
would be great to chat about updated priors (+jobs!) on LM steering, pretraining auditing, and circuit tracing.

🎀 fine-grained, interpretable representation steering for LMs!
meet RePS — Reference-free Preference Steering!

1⃣ outperforms existing methods on 2B-27B LMs, nearly matching prompting
2⃣ supports both steering and suppression (beat system prompts!)
3⃣ jailbreak-proof

(1/n)
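
Not the RePS method itself, but a minimal, hypothetical sketch of the generic idea behind representation steering: add a fixed direction to one layer's hidden states at inference time. The model choice, layer index, and random vector below are placeholders, not anything from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative small model; RePS itself is evaluated on 2B-27B LMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# placeholder steering direction; a real method would learn or extract this vector
steer = 0.05 * torch.randn(model.config.hidden_size)

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states
    hidden = output[0]
    return (hidden + steer.to(hidden.dtype),) + output[1:]

# hook one middle transformer block (gpt2-specific module path)
handle = model.transformer.h[6].register_forward_hook(add_steering)

ids = tok("The movie was", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()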


Stanford NLP Group reposted

Our panel for the “Reliable ML from Unreliable Data” workshop is now set 🎙️

Very excited to have @abeirami, @ParikshitGopal1, @tatsu_hashimoto, and @charapod join us on Saturday, December 6th!

Stanford NLP Group reposted

This post seems to describe substantially the same view that I offer here: web.stanford.edu/~cgpotts/blog/… Why are people describing the GDM post as concluding that mech-interp is a failed project? Is it the renaming of the field and constant talk of "pivoting"?

The GDM mechanistic interpretability team has pivoted to a new approach: pragmatic interpretability

Our post details how we now do research, why now is the time to pivot, why we expect this way to have more impact and why we think other interp researchers should follow suit


Also, big congratulations to @YejinChoinka on a NeurIPS 2025 Best Paper Award! (Especially clever making the paper the alphabetically first title among the awarded papers!) blog.neurips.cc/2025/11/26/ann…


ImpactRank says we’re #1 🥇 in #NLProc — so we think their methodology is sound! 😆 impactrank.org

CSRankings counts publications in top conferences to rank professors/universities. But this encourages researchers to pursue quantity rather than quality.

We propose impactrank.org, a new university ranking system that tries to measure quality instead of quantity of…


Stanford NLP Group reposted

mech interp is surely a field in Kuhnian crisis alignmentforum.org/posts/StENzDcD…


Stanford NLP Group reposted

How do teachers actually manage the classroom? A new Stanford study analyzes 1,652 classroom transcripts using AI and NLP to measure how teachers use language to manage behavior and maintain order. A huge step forward for observing these practices at…

Stanford NLP Group reposted

Inspiring Talk from @Diyi_Yang on the importance of developing foundation models to augment humans. Join us at Room Don Alberto 1!

Stanford NLP Group reposted

Structured prompting is the easiest way to boost LM performance across benchmarks:

+4–5% accuracy on average, up to +10–12% on reasoning tasks
~90% of gains come just from adding CoT
and it cuts variance by ~2–4% so more stable outputs

@DSPyOSS, once again.
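
A minimal sketch of the kind of structured prompting DSPy provides, using its public Predict and ChainOfThought modules; the model string and signature below are illustrative, and the numbers above are the tweet's reported results, not something this snippet reproduces.

import dspy

# any LiteLLM-style model identifier works here; this one is just an example
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# Predict issues a plain structured prompt; ChainOfThought adds a reasoning
# field before the answer, which is where most of the reported gains come from
plain = dspy.Predict("question -> answer")
cot = dspy.ChainOfThought("question -> answer")

q = "A train covers 60 km in 45 minutes. What is its average speed in km/h?"
print(plain(question=q).answer)
print(cot(question=q).answer)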

Stanford NLP Group reposted

Introducing the Artificial Analysis Openness Index: a standardized and independently assessed measure of AI model openness across availability and transparency

Openness is not just the ability to download model weights. It is also licensing, data and methodology - we developed a…

Stanford NLP Group reposted

🚀DeepSeek V3.2 officially utilized our corrected KL regularization term in their training objective!

On the Design of KL-Regularized Policy Gradient Algorithms for LLM Reasoning (arxiv.org/abs/2505.17508)

See also tinker-docs.thinkingmachines.ai/losses

It will be even better if they can…
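
For context, the standard KL-regularized RL objective has the form below; this is a generic sketch, not the specific corrected KL term derived in the paper.

\[
\mathcal{J}(\theta)
  = \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}\!\left[ r(x, y) \right]
  - \beta\, \mathbb{E}_{x \sim \mathcal{D}}\!\left[ \mathrm{KL}\!\left( \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \right) \right]
\]

where \(\pi_{\mathrm{ref}}\) is the reference policy and \(\beta\) sets the regularization strength.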

🚀 Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale — Reasoning-first models built for agents!

🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App, Web & API.
🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now.

📄 Tech…


Stanford NLP Group reposted

🚀 Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale — Reasoning-first models built for agents!

🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App, Web & API.
🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now.

📄 Tech…

Stanford NLP Group reposted

Day 10 of becoming an LLM Engineer 🚀

Finished the CS224N lecture on Pretraining:
👉 subword tokenization (BPE)
👉 masked LM (BERT)
👉 span corruption (T5)
👉 pretraining → fine-tuning intuition
👉 decoder-only LM (GPT)
👉 in-context learning + chain-of-thought

#AI #LLM #NLP

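A toy illustration of one BPE merge step from the list above; this is a simplified, assumed example, not the byte-level implementation real tokenizers use.

from collections import Counter

# toy corpus; real BPE is learned over a large byte- or character-level corpus
words = [list(w) for w in ["low", "lower", "newest", "widest"]]

# count adjacent symbol pairs across all words
pairs = Counter()
for w in words:
    for a, b in zip(w, w[1:]):
        pairs[(a, b)] += 1

best = max(pairs, key=pairs.get)  # most frequent pair becomes the next merge rule
print("merge rule:", best)

# apply the merge everywhere it occurs, producing the new symbol sequences
merged = []
for w in words:
    out, i = [], 0
    while i < len(w):
        if i + 1 < len(w) and (w[i], w[i + 1]) == best:
            out.append(w[i] + w[i + 1])
            i += 2
        else:
            out.append(w[i])
            i += 1
    merged.append(out)
print(merged)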

Stanford NLP Group reposted

My heart goes out to @iclr_conf organizers who are putting up a valiant fight to restore review integrity in the face of the @openreview leak.

Organizing conferences is always a labor of love, and doing so for mega AI conferences in the midst of a massive security leak is…
