
Neil Chowdhury
@ChowdhuryNeil
@TransluceAI, previously @OpenAI
Ever wondered how likely your AI model is to misbehave? We developed the *propensity lower bound* (PRBO), a variational lower bound on the probability of a model exhibiting a target (misaligned) behavior.
Is cutting off your finger a good way to fix writer’s block? Qwen-2.5 14B seems to think so! 🩸🩸🩸 We’re sharing an update on our investigator agents, which surface this pathological behavior and more using our new *propensity lower bound* 🔎
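(Context on the shape of such a bound: the tweets don't spell out the formula, but a variational lower bound on a propensity typically takes the standard ELBO form. A hedged sketch, assuming π is the reference distribution over prompts, q is a learned proposal over elicitation prompts, and B is the target behavior; the exact PRBO definition is in Transluce's write-up.)

$$
\log \mathbb{E}_{x \sim \pi}\big[\Pr(B \mid x)\big] \;\ge\; \mathbb{E}_{x \sim q}\big[\log \Pr(B \mid x)\big] \;-\; \mathrm{KL}\big(q \,\|\, \pi\big)
$$

The bound follows from Jensen's inequality and is tight when q matches the posterior over prompts that elicit B, which is why optimizing q doubles as a search for eliciting prompts.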

Claude Sonnet 4.5 behaves the most desirably across Petri evals, but is 2-10x more likely to express awareness it's being evaluated than competitive peers. This affects how much we can conclude about how "aligned" models are from these evals. Improving realism seems essential.

Last week we released Claude Sonnet 4.5. As part of our alignment testing, we used a new tool to run automated audits for behaviors like sycophancy and deception. Now we’re open-sourcing the tool to run those audits.

On our evals for HAL, we found that agents figure out they're being evaluated even on capability evals. For example, here Claude 3.7 Sonnet *looks up the benchmark on HuggingFace* to find the answer to an AssistantBench question. There were many such cases across benchmarks and…

To make a model that *doesn't* instantly learn to distinguish between "fake-ass alignment test" and "normal task," the first thing to do seems like it would be to make all alignment evals very small variations on actual capability evals. Do people do this?
AI is very quickly becoming a foundational and unavoidable piece of daily life. the dam has burst. the question we must ask and answer is which ways do we want the waves to flow. i would like to live in a world where we all understand this technology enough to be able to…
Docent has been really useful for understanding the outputs of my RL training runs -- glad it's finally open-source!
We’re open-sourcing Docent under an Apache 2.0 license. Check out our public codebase to self-host Docent, peek under the hood, or open issues & pull requests! The hosted version remains the easiest way to get started with one click and use Docent with zero maintenance overhead.
METR is a non-profit research organization, and we are actively fundraising! We prioritise independence and trustworthiness, which shapes both our research process and our funding options. To date, we have not accepted funding from frontier AI labs.
Stop by on Thursday if you're at MIT 🙂
Later this week, we're giving a talk about our research at MIT!
Understanding AI Systems at Scale: Applied Interpretability, Agent Robustness, and the Science of Model Behaviors
@ChowdhuryNeil and @vvhuang_
Where: 4-370
When: Thursday 9/18, 6:00pm
Details below 👇
Agent benchmarks lose *most* of their resolution because we throw out the logs and only look at accuracy. I’m very excited that HAL is incorporating @TransluceAI’s Docent to analyze agent logs in depth. Peter’s thread is a simple example of the type of analysis this enables,…
OpenAI claims hallucinations persist because evaluations reward guessing and that GPT-5 is better calibrated. Do results from HAL support this conclusion? On AssistantBench, a general web search benchmark, GPT-5 has higher precision and lower guess rates than o3!
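(A minimal sketch of how those two metrics could be computed from eval logs, assuming "guess rate" means the fraction of questions the model attempts rather than abstains on, and precision is accuracy over attempted questions only. Illustrative only, not HAL's actual code.)

```python
# Sketch: precision and guess rate from a list of eval records.
from dataclasses import dataclass

@dataclass
class Record:
    answered: bool   # model produced an answer instead of abstaining
    correct: bool    # answer matched the reference (only meaningful if answered)

def precision_and_guess_rate(records: list[Record]) -> tuple[float, float]:
    attempted = [r for r in records if r.answered]
    guess_rate = len(attempted) / len(records) if records else 0.0
    precision = sum(r.correct for r in attempted) / len(attempted) if attempted else 0.0
    return precision, guess_rate

# Example: a better-calibrated model attempts fewer questions but is right more often.
records = [Record(True, True), Record(True, False), Record(False, False), Record(True, True)]
print(precision_and_guess_rate(records))  # (0.666..., 0.75)
```

Under this reading, "higher precision and lower guess rates" means the model abstains more often instead of guessing, and is more likely to be right when it does answer.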

Very cool work -- points toward AI being one of the rare cases in tech where governments can be on the cutting edge
Excited to share details on two of our longest running and most effective safeguard collaborations, one with Anthropic and one with OpenAI. We've identified—and they've patched—a large number of vulnerabilities and together strengthened their safeguards. 🧵 1/6


Looks like somebody added safeguards for best-of-N jailbreaking

Very happy to see this! I hope other AI developers follow (Anthropic created a collective constitution a couple years ago, perhaps it needs updating), and that we as a community develop better rubrics & measurement tools for model behavior :)
No single person or institution should define ideal AI behavior for everyone. Today, we’re sharing early results from collective alignment, a research effort where we asked the public about how models should behave by default. Blog here: openai.com/index/collecti…
Docent, our tool for analyzing complex AI behaviors, is now in public alpha! It helps scalably answer questions about agent behavior, like “is my model reward hacking” or “where does it violate instructions.” Today, anyone can get started with just a few lines of code!
