
Niru Maheswaranathan

@niru_m

Niru Maheswaranathan reposted

We’re thrilled to see our advanced ML models and EMG hardware — that transform neural signals controlling muscles at the wrist into commands that seamlessly drive computer interactions — appearing in the latest edition of @Nature. Read the story: nature.com/articles/s4158… Find…


Niru Maheswaranathan reposted

Excited to speak with this fine gang in St Louis next month, including @neurograce @karimjerbineuro @niru_m @martin_schrimpf @chethan @weixx2 @HombreCerebro . Bring on the future of NeuroAI! transdisciplinaryfutures.wustl.edu/events/neuroai…


How long until there is a feature length film that is completely AI generated? I’m guessing ~2028

here is sora, our video generation model: openai.com/sora today we are starting red-teaming and offering access to a limited number of creators. @_tim_brooks @billpeeb @model_mechanic are really incredible; amazing work by them and the team. remarkable moment.



I'm incredibly saddened today to hear about the passing of Prof. Craig Henriquez, one of my first research advisors when I was an undergrad. Craig was an incredible mentor, scientist, and compassionate human being; he will be deeply missed. today.duke.edu/2023/08/craig-…


Niru Maheswaranathan reposted

1/Our paper @NeuroCellPress "Interpreting the retinal code for natural scenes" develops explainable AI (#XAI) to derive a SOTA deep network model of the retina and *understand* how this net captures natural scenes plus 8 seminal experiments over >2 decades sciencedirect.com/science/articl…


Sometimes I want to be a computational neuroscientist just so that I can make minimalist talks with only keynote drawings and whatever that cool sans serif font is



Heading to Lisbon for #cosyne2022! Looking forward to talking science in person!

I'll be in Lisbon along with a great team of our scientists including @SussilloDavid @niru_m @CLWarriner @diogo_neuro @DiegoGutnisky @najabam. Long overdue to talk science in person!



This gradient might look good in expectation, but what about the variance? 🤔

Gradients without Backpropagation Presents a method to compute gradients based solely on the directional derivative that one can compute exactly and efficiently via the forward mode, entirely eliminating the need for backpropagation in gradient descent. arxiv.org/abs/2202.08587

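The idea in the quoted paper can be sketched without any library: a single forward pass with dual numbers gives the exact directional derivative D_v f(x), and scaling the random direction v by it yields an unbiased (if noisy, per the question above) estimate of the gradient. A minimal pure-Python sketch, assuming a toy quadratic objective (all names here are illustrative, not from the paper's code):

```python
import random

class Dual:
    """Minimal dual number for forward-mode AD: value + tangent."""
    def __init__(self, val, tan=0.0):
        self.val, self.tan = val, tan
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.tan + o.tan)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.tan * o.val + self.val * o.tan)
    __rmul__ = __mul__

def f(x):
    # Toy objective f(x) = sum(x_i^2); its true gradient is 2x.
    return sum(xi * xi for xi in x)

def forward_gradient(f, x, rng):
    # One forward pass computes the exact directional derivative D_v f(x);
    # scaling v by it gives an estimator with E[g] = grad f(x) for v ~ N(0, I).
    v = [rng.gauss(0.0, 1.0) for _ in x]
    duals = [Dual(xi, vi) for xi, vi in zip(x, v)]
    dvf = f(duals).tan
    return [dvf * vi for vi in v]

rng = random.Random(0)
x = [1.0, -2.0]
# Averaging many single-sample estimates should approach 2x = [2, -4].
n = 20000
avg = [0.0, 0.0]
for _ in range(n):
    g = forward_gradient(f, x, rng)
    avg = [a + gi / n for a, gi in zip(avg, g)]
```

Each single-sample estimate is unbiased but has per-coordinate variance that grows with dimension, which is exactly the concern raised above: the mean is right, the variance is the catch.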


Interesting paper making connections between representational drift in biological brains and dropout in ANNs!

New work out on bioRxiv (biorxiv.org/content/10.110…) on the geometry of representational drift in natural and artificial neural networks. Work done with Marina Garrett (@matchings), Shawn Olsen, and Stefan Mihalas (@Stefan_Mihalas) at the @AllenInstitute. 🧵



Some personal news: I recently joined Facebook Reality Labs, working on neural interfaces research with the CTRL labs team. Sad to leave fantastic colleagues at Google Brain, but looking forward to a new challenge! 🧠 💪


Often in science something seems complicated until you look at it the right way. The hard part is figuring out the right frame of reference!

A classic kinematics example—a pillbug on a spinning disk walks back and forth on (what it thinks is) a straight path. However, its trajectory looks much more complicated and beautiful to a stationary observer!
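The frame-of-reference point can be made concrete: a point at body-frame position (s, 0) on a disk rotating at angular velocity omega appears rotated by omega*t in the lab frame. A small sketch (parameter values are illustrative):

```python
import math

def lab_frame(s, omega, t):
    # Rotate the body-frame position (s, 0) by the disk's
    # accumulated angle omega * t to get lab-frame coordinates.
    return (s * math.cos(omega * t), s * math.sin(omega * t))

# In its own frame the bug just walks back and forth on a line
# (s = sin(t)), but the lab-frame trajectory traces a looping curve.
omega = 1.0
path = [lab_frame(math.sin(t), omega, t)
        for t in (i * 0.1 for i in range(100))]
```

The rotation changes only the direction, never the distance from the center, so the simple back-and-forth motion and the elaborate lab-frame curve are the same path seen from two frames.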



"Intuition is the foundation upon which comprehensive understanding is built. But ... unverified intuition can be misleading." A fantastic article on rigorous interpretability research by @leavittron and @arimorcos (arxiv.org/pdf/2010.12016…) h/t @KordingLab for highlighting it!


Niru Maheswaranathan reposted

taylor expansions are the turtles in the turtles all the way down joke except it isn’t a joke


Cool result: minimizing activation norms in a network (a proxy for energy efficiency) in a predictable environment yields a network that learns aspects of predictive coding. Would love to see what happens in richer environments and with deeper architectures!

New preprint alert! We show that predictive coding is an emergent property of input-driven RNNs trained to be energy efficient. No hierarchical hard-wiring required. A thread: 1/ biorxiv.org/content/10.110…



Niru Maheswaranathan reposted

Are Experts Real? Some thoughts on expertise, credentialism, and feedback loops. fantasticanachronism.com/2021/01/11/are…


The only reason this image is “powerful” is as a reminder of how misleading data visualization can be. It uses a diverging colormap for sequential data, and caps the range at 4% so the UK pops out as if they’ve vaccinated a majority of citizens. Good example of what *not* to do!

This tweet is no longer available.
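The distortion from capping a colormap's range is easy to see numerically: a colormap maps data to [0, 1], so clipping the scale at 4% sends anything above 4% to the fully saturated end, the same color 100% would get. A small sketch with made-up illustrative percentages (not the actual data from the deleted tweet):

```python
def normalize(values, vmin, vmax):
    # Map data to [0, 1] colormap coordinates, clipping at both ends,
    # mirroring how plotting libraries apply vmin/vmax.
    span = vmax - vmin
    return [min(max((v - vmin) / span, 0.0), 1.0) for v in values]

pct_vaccinated = {"UK": 6.5, "Denmark": 2.1, "Germany": 1.3}  # hypothetical
# Capping the scale at 4% pins the UK to the saturated end of the map...
capped = normalize(pct_vaccinated.values(), 0.0, 4.0)
# ...while an honest 0-100% scale shows how little of the range is covered.
honest = normalize(pct_vaccinated.values(), 0.0, 100.0)
```

On the capped scale the UK lands at 1.0, visually indistinguishable from full coverage; on the honest scale it sits near the bottom of the colormap, which is the point of the criticism above.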

Really enjoyed this post on Stein's paradox! The writing is impressively clear.

I wrote a blog post that provides some intuition behind one of the weirder results in statistics: Stein's paradox. joe-antognini.github.io/machine-learni…
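The paradox from the linked post can be checked by simulation: for estimating a Gaussian mean vector in three or more dimensions, the James-Stein estimator, which shrinks the raw observation toward the origin, has lower total squared error than the observation itself, no matter what the true means are. A minimal sketch (the true means below are arbitrary illustrative values):

```python
import random

def james_stein(x):
    # Shrink the raw observation toward the origin by a data-dependent
    # factor; requires dimension d >= 3 for the paradox to hold.
    d = len(x)
    s = sum(xi * xi for xi in x)
    shrink = 1.0 - (d - 2) / s
    return [shrink * xi for xi in x]

rng = random.Random(0)
theta = [1.0, 0.5, -1.0, 2.0, 0.0]   # true means, d = 5
trials = 5000
risk_mle, risk_js = 0.0, 0.0
for _ in range(trials):
    # One noisy observation of each mean (unit-variance Gaussian noise).
    x = [t + rng.gauss(0.0, 1.0) for t in theta]
    js = james_stein(x)
    risk_mle += sum((xi - t) ** 2 for xi, t in zip(x, theta)) / trials
    risk_js += sum((ji - t) ** 2 for ji, t in zip(js, theta)) / trials
```

The average risk of the raw observations comes out near the dimension (5 here), while the shrunken estimates do strictly better, even though the five quantities being estimated have nothing to do with each other.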



Awesome work! Congrats @basile_cfx et al!

OK, finally our tweeprint for the NeurIPS paper. Here we go. Synaptic plasticity, it's the holy grail of learning and memory. This is work by @basile_cfx, @hisspikeness, @ejagnes, @countzerozz & myself, on how to find the grail, maybe biorxiv.org/content/10.110…



If you need an escape from politics right now, I'm giving a talk at the DeepMath conference (@deepmath1) this afternoon (2:20pm PST) on understanding the dynamics of learned optimizers. There is a livestream: youtube.com/watch?v=x-VPsH… - the talks so far have all been fantastic!


