
Machine Learning Dept. at Carnegie Mellon

@mldcmu

The top education and research institution in the 🌎 for #AI and #machinelearning | Research → http://blog.ml.cmu.edu | Learn more ↓

Machine Learning Dept. at Carnegie Mellon reposted

Neat framework to unify and understand consistency-like models by @nmboffi, @msalbergo, and Eric Vanden-Eijnden!

Consistency models, CTMs, shortcut models, align your flow, mean flow... What's the connection, and how should you learn them in practice? We show they're all different sides of the same coin connected by one central object: the flow map. arxiv.org/abs/2505.18825 🧵(1/n)
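For intuition, the flow map is the object that transports a point from time s to time t along the ODE dX/dt = v(X, t), and it obeys a semigroup (self-consistency) property that consistency-style models exploit: mapping s→u directly agrees with mapping s→t and then t→u. A minimal numerical sketch with a toy velocity field (the field, step count, and names are illustrative, not from the paper):

```python
import math

def velocity(x, t):
    # toy velocity field v(x, t) = -x; the exact flow map is x * exp(-(t - s))
    return -x

def flow_map(x, s, t, n_steps=1000):
    # numerically transport x from time s to time t via Euler steps on dX/dt = v(X, t)
    dt = (t - s) / n_steps
    for i in range(n_steps):
        x = x + dt * velocity(x, s + i * dt)
    return x

# Semigroup property: going 0 -> 1 directly matches 0 -> 0.5 then 0.5 -> 1
x0 = 2.0
direct = flow_map(x0, 0.0, 1.0)
composed = flow_map(flow_map(x0, 0.0, 0.5), 0.5, 1.0)
```

Here `direct` and `composed` agree (up to discretization error) and both approximate the analytic value 2·e⁻¹; a flow-map model learns exactly this s→t jump directly instead of integrating many small steps.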



Machine Learning Dept. at Carnegie Mellon reposted

A nice application of our NeuroAI Turing Test! Check out @cogphilosopher's thread for more details on comparing brains to machines!

1/x Our new method, the Inter-Animal Transform Class (IATC), is a principled way to compare neural network models to the brain. It's the first to ensure both accurate brain activity predictions and specific identification of neural mechanisms. Preprint: arxiv.org/abs/2510.02523



Machine Learning Dept. at Carnegie Mellon reposted

A closed door looks the same whether it opens by pushing or pulling. Two identical-looking boxes might have different centers of mass. How should robots act when a single visual observation isn't enough? Introducing HAVE 🤖, our method that reasons about past interactions online! #CORL2025


Machine Learning Dept. at Carnegie Mellon reposted

Recent discussions (e.g. @RichardSSutton on @dwarkesh_sp’s podcast) have highlighted why animals are a better target for intelligence — and why scaling alone isn’t enough. In my recent @CMU_Robotics seminar talk, “Using Embodied Agents to Reverse-Engineer Natural Intelligence”,…


Machine Learning Dept. at Carnegie Mellon reposted

CMU hacking team the Plaid Parliament of Pwning pwns again, wins fourth straight and record ninth overall DEF CON Capture-the-Flag title cylab.cmu.edu/news/2025/08/1…


Machine Learning Dept. at Carnegie Mellon reposted

Thank you @google for the ML and Systems Junior Faculty Award! This award is for work on sparsity, and I am excited to continue this work focusing on mixture of experts. We might bring big MoEs to small GPUs quite soon! Stay tuned! Read more here: cs.cmu.edu/news/2025/dett…


Machine Learning Dept. at Carnegie Mellon reposted

📢Introducing the Alignment Project: A new fund for research on urgent challenges in AI alignment and control, backed by over £15 million. ▶️ Up to £1 million per project ▶️ Compute access, venture capital investment, and expert support Learn more and apply ⬇️


Machine Learning Dept. at Carnegie Mellon reposted

blog.ml.cmu.edu/2025/07/08/car… Check out our latest post on CMU @ ICML 2025!


Machine Learning Dept. at Carnegie Mellon reposted

.@BNYglobal’s Leigh-Ann Russell and @CarnegieMellon’s Zico Kolter took the #realinsite mainstage for an exciting conversation on the future of #AI in wealth management. The key to successful integration? It's all about systems that 🤖 react, 🧠 think and 🔗 interact.


Machine Learning Dept. at Carnegie Mellon reposted

1/6 🚀 Excited to share that BrainNRDS has been accepted as an oral at #CVPR2025! We decode motion from fMRI activity and use it to generate realistic reconstructions of videos people watched, outperforming strong existing baselines like MindVideo and Stable Video Diffusion.🧠🎥


Machine Learning Dept. at Carnegie Mellon reposted

Our first NeuroAgent! 🐟🧠 Excited to share new work led by the talented @rdkeller, showing how autonomous behavior and whole-brain dynamics emerge naturally from intrinsic curiosity grounded in world models and memory. Some highlights: - Developed a novel intrinsic drive…

1/ I'm excited to share recent results from my first collaboration with the amazing @aran_nayebi and @Leokoz8! We show how autonomous behavior and whole-brain dynamics emerge in embodied agents with intrinsic motivation driven by world models.



Machine Learning Dept. at Carnegie Mellon reposted

Virginia Smith, the Leonardo Associate Professor of Machine Learning, has received the Air Force Office of Scientific Research 2025 Young Investigator award. cs.cmu.edu/news/2025/smit…


Machine Learning Dept. at Carnegie Mellon reposted

Check out our new work exploring how to make robots sense touch more like our brains! Surprisingly, ConvRNNs aligned best with mouse somatosensory cortex and even passed the NeuroAI Turing Test on current neural data. We also developed new tactile-specific augmentations for…

1/ What if we make robots that process touch the way our brains do? We found that Convolutional Recurrent Neural Networks (ConvRNNs) pass the NeuroAI Turing Test in currently available mouse somatosensory cortex data. New paper by @AlexShenSyc @NathanKong @aran_nayebi and me!

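As a rough illustration of the architecture named above, a ConvRNN cell updates a spatial hidden state by summing a feedforward convolution of the input with a recurrent convolution of the previous hidden state. A minimal single-channel sketch (the kernel sizes, fixed weights, and tanh nonlinearity here are illustrative choices, not the paper's exact model):

```python
import numpy as np

def conv2d(x, k):
    # 'same' single-channel convolution via zero padding
    H, W = x.shape
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def convrnn_step(x, h, k_in, k_rec):
    # ConvRNN update: feedforward conv of the input plus recurrent conv
    # of the previous hidden state, squashed by tanh
    return np.tanh(conv2d(x, k_in) + conv2d(h, k_rec))

x = np.ones((4, 4))            # constant "touch" input patch
h = np.zeros((4, 4))           # initial hidden state
k_in = np.full((3, 3), 0.05)   # illustrative fixed kernels
k_rec = np.full((3, 3), 0.05)
for _ in range(5):             # unroll a few timesteps
    h = convrnn_step(x, h, k_in, k_rec)
```

The recurrence is what lets the hidden state integrate a tactile stimulus over time, which is the property being compared against cortical dynamics.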


Machine Learning Dept. at Carnegie Mellon reposted

Really thrilled to receive an #NVIDIADGX B200 from @nvidia. Looking forward to cooking with the beast. Together with an amazing team at the CMU Catalyst group @BeidiChen @Tim_Dettmers @JiaZhihao @zicokolter, we are looking to innovate across the entire stack, from models to instructions.

Huge thank you to @NVIDIADC for gifting a brand new #NVIDIADGX B200 to CMU’s Catalyst Research Group! This AI supercomputing system will afford Catalyst the ability to run and test their work on a world-class unified AI platform.



Machine Learning Dept. at Carnegie Mellon reposted

Thanks @NVIDIADC for the DGX B200 machine for the CMU Catalyst group! I'm perhaps already a bit too enthralled by it in the photos...

Huge thank you to @NVIDIADC for gifting a brand new #NVIDIADGX B200 to CMU’s Catalyst Research Group! This AI supercomputing system will afford Catalyst the ability to run and test their work on a world-class unified AI platform.



Excited to see what the MLD Faculty and Students in the Catalyst Research Group will do with this brand new #NVIDIADGX B200. Many thanks to @NVIDIADC from all of us at #CMU! @zicokolter @tqchenml

Huge thank you to @NVIDIADC for gifting a brand new #NVIDIADGX B200 to CMU’s Catalyst Research Group! This AI supercomputing system will afford Catalyst the ability to run and test their work on a world-class unified AI platform.



Machine Learning Dept. at Carnegie Mellon reposted

As a long-time fan of @pgmid's "Brain Inspired" podcast, it was an honor to be invited on to talk about NeuroAgents, our update to the Turing Test, and AI safety at the end. Coincidentally recorded on my birthday, no less! Check it out here 👇

In this “Brain Inspired” episode, @aran_nayebi joins @pgmid to discuss his reverse-engineering approach to build autonomous artificial-intelligence agents. thetransmitter.org/brain-inspired…



Machine Learning Dept. at Carnegie Mellon reposted

1/ 🧵👇 What should count as a good model of intelligence? AI is advancing rapidly, but how do we know if it captures intelligence in a scientifically meaningful way? We propose the *NeuroAI Turing Test*—a benchmark that evaluates models based on both behavior and internal…


Machine Learning Dept. at Carnegie Mellon reposted

Introducing *ARC‑AGI Without Pretraining* – ❌ No pretraining. ❌ No datasets. Just pure inference-time gradient descent on the target ARC-AGI puzzle itself, solving 20% of the evaluation set. 🧵 1/4

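As a toy illustration of the idea (not the paper's actual method), inference-time gradient descent fits parameters directly to the one puzzle at hand, with no dataset and no pretraining. Here a cell-wise affine map recovers a hypothetical "invert the colors" rule from a single input/output grid pair:

```python
import numpy as np

# One "puzzle": an input grid and its transformed output; the rule is hidden
x = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 0, 1, 1],
              [1, 1, 0, 0]], dtype=float)
target = 1.0 - x                 # hidden rule: invert cell colors

# Fit pred = w*x + b by plain gradient descent on this one example only
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2.0 * np.mean((pred - target) * x)   # d(mean sq. error)/dw
    grad_b = 2.0 * np.mean(pred - target)         # d(mean sq. error)/db
    w -= lr * grad_w
    b -= lr * grad_b
```

After a few hundred steps the map converges to w ≈ -1, b ≈ 1, i.e. the inversion rule: optimization on the target instance alone can recover its structure, which is the spirit of the no-pretraining approach described above.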

Machine Learning Dept. at Carnegie Mellon reposted

I’m happy to share that my rotation project was accepted by the Journal of Computational Neuroscience. Thanks to my rotation advisor Dr. Bard Ermentrout and my collaborator Dr. Daniel Chung. link.springer.com/article/10.100…

