
Andrew Saxe

@SaxeLab

Prof at @GatsbyUCL and @SWC_Neuro, trying to figure out how we learn. Bluesky: @SaxeLab Mastodon: @SaxeLab@sigmoid.social

Pinned

How does in-context learning emerge in attention models during gradient descent training? Sharing our new Spotlight paper @icmlconf: Training Dynamics of In-Context Learning in Linear Attention arxiv.org/abs/2501.16265. Led by Yedi Zhang with @Aaditya6284 and Peter Latham.


Andrew Saxe reposted

📢 Job alert: We are looking for a Postdoctoral Fellow to work with @ArthurGretton on creating statistically efficient causal and interaction models, with the aim of elucidating cellular interactions. ⏰ Deadline: 27-Aug-2025 ℹ️ ucl.ac.uk/work-at-ucl/se…


Andrew Saxe reposted

🎓Thrilled to share I’ve officially defended my PhD!🥳 At @GatsbyUCL, my research explored how prior knowledge shapes neural representations. I’m deeply grateful to my mentors, @SaxeLab and Caswell Barry, my incredible collaborators, and everyone who supported me! Stay tuned!


Andrew Saxe reposted

If you’re working on symmetry and geometry in neural representations, submit your work to NeurReps and join the community in San Diego! 🤩 Deadline: August 22nd.

Are you studying how structure shapes computation in the brain and in AI systems? 🧠 Come share your work in San Diego at NeurReps 2025! There is one month left until the submission deadline on August 22: neurreps.org/call-for-papers



Andrew Saxe reposted

If you can see it, you can feel it! Thrilled to share our new @NatureComms paper on how mice generalize spatial rules between vision & touch, led by brilliant co-first authors @giulio_matt & @GtnMaelle. More details in this thread 🧵 (1/7) doi.org/10.1038/s41467…


Andrew Saxe reposted

🥳 Congratulations to Rodrigo Carrasco-Davison on passing his PhD viva with minor corrections! 🎉 📜 Principles of Optimal Learning Control in Biological and Artificial Agents.


Come chat about this at the poster @icmlconf, 11:00-13:30 on Wednesday in the West Exhibition Hall #W-902!

How does in-context learning emerge in attention models during gradient descent training? Sharing our new Spotlight paper @icmlconf: Training Dynamics of In-Context Learning in Linear Attention arxiv.org/abs/2501.16265. Led by Yedi Zhang with @Aaditya6284 and Peter Latham.



Andrew Saxe reposted

👋 Attending #ICML2025 next week? Don't forget to check out work involving our researchers!


Andrew Saxe reposted

Excited to present this work in Vancouver at #ICML2025 today 😀 Come by to hear about why in-context learning emerges and disappears: Talk: 10:30-10:45am, West Ballroom C; Poster: 11am-1:30pm, East Exhibition Hall A-B, #E-3409.

Transformers employ different strategies through training to minimize loss, but how do these tradeoff and why? Excited to share our newest work, where we show remarkably rich competitive and cooperative interactions (termed "coopetition") as a transformer learns. Read on 🔎⏬



Andrew Saxe reposted

How do task dynamics impact learning in networks with internal dynamics? Excited to share our ICML Oral paper on learning dynamics in linear RNNs! With @ClementineDomi6 @mpshanahan @PedroMediano: openreview.net/forum?id=KGOcr…

