Nanda H Krishna
@nandahkrishna
PhD student at @Mila_Quebec & @UMontreal, @MacHomebrew maintainer.
New preprint! 🧠🤖 How do we build neural decoders that are: ⚡️ fast enough for real-time use 🎯 accurate across diverse tasks 🌍 generalizable to new sessions, subjects, and species? We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes! 🧵1/7
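The tweet above announces the architecture; as a rough illustration of why state-space models suit real-time decoding, here is a minimal linear SSM recurrence with constant work per incoming sample. All names, shapes, and matrices are illustrative, not POSSM's actual design.

```python
def ssm_step(h, x, A, B, C):
    """One step of a minimal linear state-space model:
        h' = A @ h + B @ x   (state update)
        y  = C @ h'          (readout)
    Plain lists-of-lists keep it dependency-free; a real decoder would
    use learned, structured matrices and a neural readout."""
    n = len(h)
    h_new = [
        sum(A[i][j] * h[j] for j in range(n))
        + sum(B[i][k] * x[k] for k in range(len(x)))
        for i in range(n)
    ]
    y = [sum(C[o][i] * h_new[i] for i in range(n)) for o in range(len(C))]
    return h_new, y
```

Because each step touches only the current state and the new input, latency per sample stays fixed no matter how long the recording runs, which is the property a streaming decoder needs.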
[1/9] While pretraining data might be hitting a wall, novel methods for modeling it are just getting started! We introduce future summary prediction (FSP), where the model predicts future sequence embeddings to reduce teacher forcing & shortcut learning. 📌Predict a learned…
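As a hedged sketch of the idea in the tweet above: instead of only next-token targets, the current hidden state is trained to predict a learned summary of the next few embeddings. `predict` and `summarize` are hypothetical stand-ins for learned modules; this illustrates the objective's shape, not the paper's actual loss.

```python
def fsp_loss(hidden, future_embeds, predict, summarize):
    """Future-summary-prediction style objective (illustrative):
    regress a prediction from the current hidden state onto a learned
    summary of upcoming sequence embeddings."""
    target = summarize(future_embeds)  # summary of the next k embeddings
    pred = predict(hidden)             # prediction from the current state
    # mean squared error between prediction and summary target
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)
```

Predicting a summary rather than each future token is what reduces reliance on teacher forcing: the model cannot shortcut by copying the immediately following token.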
Mila's annual supervision request process is now open to receive MSc and PhD applications for Fall 2026 admission! For more information, visit mila.quebec/en/prospective…
Qwen3-4B can match DeepSeek-R1 and o3-mini (high) with ONLY test-time scaling?🤯 Introducing Recursive Self-Aggregation (RSA), a new test-time scaling method: - parallel + sequential✅ - no verifiers✅ - no scaffolding✅ Then we use aggregation-aware RL to push further!🚀 🧵👇
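A minimal sketch of the parallel-plus-sequential loop described above: sample a population of candidates in parallel, then repeatedly replace them with aggregations of small random groups, with no verifier or scaffolding. `generate` and `aggregate` are hypothetical stand-ins for LLM calls; this is an illustration of the control flow, not the paper's implementation.

```python
import random

def rsa(generate, aggregate, task, population=6, group=3, rounds=2, seed=0):
    """Recursive Self-Aggregation-style test-time scaling (illustrative):
    keep a pool of candidate solutions; each round, draw small groups and
    replace pool members with aggregated candidates. Aggregation itself
    is expected to drive quality, so no external verifier is needed."""
    rng = random.Random(seed)
    pool = [generate(task) for _ in range(population)]       # parallel sampling
    for _ in range(rounds):                                  # sequential refinement
        pool = [aggregate(task, rng.sample(pool, group))
                for _ in range(population)]
    return pool
```

With an LLM, `aggregate` would prompt the model to merge the strengths of the group into one improved answer; the toy usage below substitutes `max` to show how weak candidates get filtered out of the pool.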
Two exciting updates 🚀 1️⃣ POSSM has been accepted to NeurIPS 2025! We'll see you in San Diego 🏖️! 2️⃣ I've officially started my PhD! Very grateful to stay at Mila, and excited to continue working on advancing both deep learning + science! 🧪🧬🧠
Super stoked to share my first first-author paper, which introduces a hybrid architecture for real-time neural decoding. It's been a lot of work, but I'm happy to showcase some very cool results!
🚨Reasoning LLMs are e̵f̵f̵e̵c̵t̵i̵v̵e̵ ̵y̵e̵t̵ inefficient! Large language models (LLMs) now solve multi-step problems by emitting extended chains of thought. During the process, they often re-derive the same intermediate steps across problems, inflating token usage and…
🚨 The call for demos is still open; the deadline is tomorrow! If you have a tool for visualizing large-scale data, pipelines for training foundation models, or BCI demos, we want to see it! Submission is only 500 words, and it's a great opportunity to showcase your work.
Excited to announce the Foundation Models for the Brain and Body workshop at #NeurIPS2025!🧠 We invite short papers or interactive demos on AI for neural, physiological or behavioral data. Submit by Aug 22 👉 brainbodyfm-workshop.github.io
🚨 We are extending the paper submission deadline to Friday, August 29, 11:59 pm AoE. Check our website for the latest updates on the Foundation Models for the Brain and Body workshop #NeurIPS2025 #BrainBodyFM
Excited to be organising the BrainBodyFM Workshop – in spirit, a successor to our #COSYNE Workshop on Neuro-foundation Models – at #NeurIPS2025! Check out the website for more details. 🧠🤖
How do you align your diffusion model with unseen objectives at inference time? Presenting Diffusion Tree Sampling/Search (DTS/DTS*) 🥳 Using MCTS-style search, DTS steadily improves sample quality with compute, matching the best baseline with 5× less compute!
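A toy illustration of MCTS-style tree sampling in the spirit of the tweet above: roll out random paths through a decision tree (standing in for a denoising trajectory), back up exp(reward / temperature) as a soft value per prefix, then decode using the backed-up values. The setup and all names are illustrative, not the DTS/DTS* algorithm itself.

```python
import math
import random

def tree_search_sample(reward, depth=4, branch=2, rollouts=400, temp=0.5, seed=0):
    """Soft-value tree search (toy): accumulate exp(reward/temp) along
    each rolled-out path so every prefix gets an estimate of how good
    its subtree is, then walk down greedily. A sampling variant would
    draw children proportionally to these soft values instead."""
    rng = random.Random(seed)
    stats = {}  # prefix tuple -> (sum of exp-rewards, visit count)
    for _ in range(rollouts):
        path = tuple(rng.randrange(branch) for _ in range(depth))
        w = math.exp(reward(path) / temp)
        for i in range(depth + 1):
            s, n = stats.get(path[:i], (0.0, 0))
            stats[path[:i]] = (s + w, n + 1)
    # walk down the tree, choosing the child with the best soft value
    path = ()
    for _ in range(depth):
        best, best_v = 0, float("-inf")
        for a in range(branch):
            s, n = stats.get(path + (a,), (0.0, 1))
            if s / n > best_v:
                best, best_v = a, s / n
        path += (best,)
    return path
```

The key property mirrored here is that value estimates sharpen as more rollouts (compute) are spent, so sample quality improves steadily with budget.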
Preprint Alert 🚀 Multi-agent reinforcement learning (MARL) often assumes that agents know when other agents cooperate with them. But for humans, this isn't always true. For example, Plains Indigenous groups would leave resources for others to use at effigies called Manitokan. 1/8
The recordings from the 🌐🧠 Neuro Foundation Model workshop are up on the workshop website! Thanks again to our speakers, and everyone who attended. And thanks to the entire team @cole_hurwitz, @nandahkrishna, @averyryoo, @evadyer and @tyrell_turing for making this happen 🙌
How can large-scale models + datasets revolutionize neuroscience 🧠🤖🌐? We are excited to announce our workshop: “Building a foundation model for the brain: datasets, theory, and models” at @CosyneMeeting #COSYNE2025. Join us in Mont-Tremblant, Canada from March 31 - April 1.
Really enjoyed TAing for this!
#COSYNE2025 tutorial by Eva Dyer. Foundations of Transformers in Neuroscience youtu.be/CqS_sIrMZ2A?si… Materials: cosyne-tutorial-2025.github.io
Just a couple days until Cosyne - stop by [3-083] this Saturday and say hi! @nandahkrishna @XimengMao