Gerstner Lab

@compneuro_epfl

The Laboratory of Computational Neuroscience @EPFL_en studies models of #neurons, #networks of #neurons, #synapticplasticity, and #learning in the brain.

Pinned

Our latest results (with @nickyclayton22) are now out in @NatureComms: doi.org/10.1038/s41467… 🥳 We propose a model of *28* behavioral experiments with food-caching jays using a *single* neural network equipped with episodic-like memory and 3-factor RL plasticity rules. 1/6
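
For readers unfamiliar with the term: a 3-factor rule combines two local factors (pre- and postsynaptic activity, accumulated in an eligibility trace) with a third, global factor such as reward. Below is a minimal sketch of that generic scheme; all sizes, constants, and the reward schedule are assumptions for illustration, not the paper's actual model.

import numpy as np

# Minimal sketch of a generic three-factor plasticity rule (illustrative only,
# not the model from the paper). Coincident pre- and postsynaptic activity
# builds a decaying eligibility trace; the trace is converted into a weight
# change only when a third factor (here, a scalar reward) arrives.
rng = np.random.default_rng(0)
n_pre, n_post = 20, 5
W = rng.normal(0.0, 0.1, size=(n_post, n_pre))      # synaptic weights
elig = np.zeros_like(W)                             # eligibility traces
eta, tau_e = 0.01, 0.9                              # learning rate, trace decay (assumed)

for t in range(100):
    pre = (rng.random(n_pre) < 0.2).astype(float)   # presynaptic spikes
    post = (W @ pre > 0.5).astype(float)            # crude postsynaptic spikes
    elig = tau_e * elig + np.outer(post, pre)       # factors 1 and 2: Hebbian trace
    reward = 1.0 if t % 10 == 9 else 0.0            # factor 3: sparse reward signal
    W += eta * reward * elig                        # update gated by the third factor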


The LCN is gradually moving away from X. You can follow our most recent news on our new account on BlueSky at gerstnerlab.bsky.social


Gerstner Lab reposted

If you're at #foragingconference2024 , come check out our poster (#60) with @modirshanechi and @compneuro_epfl today! Using a unified computational framework and two open-access datasets, we show how novelty and novelty-guided behaviors are influenced by stimulus similarities😊🤩


Gerstner Lab reposted

I'm thrilled to share that I was recently awarded the @EPFL_en Dimitris N. Chorafas Foundation Award for my Ph.D. thesis, "Seeking the new, learning from the unexpected: Computational models of #surprise and #novelty in the #brain." Award news: actu.epfl.ch/news/dimitris-…


Gerstner Lab reposted

Episode #22 in #TheoreticalNeurosciencePodcast: On 40 years with the Hopfield network model - with Wulfram Gerstner @compneuro_epfl theoreticalneuroscience.no/thn22 John Hopfield received the 2024 Nobel Prize in Physics for his model published in 1982. What is the model all about?
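
For readers new to the model: a Hopfield network stores binary patterns in symmetric Hebbian weights and recalls them by iterated threshold updates that descend an energy function. A self-contained toy version (network size, pattern count, and corruption level are arbitrary choices):

import numpy as np

# Toy Hopfield network (Hopfield, 1982): patterns are stored with a Hebbian
# outer-product rule and recalled by sign updates that lower the energy
# E(s) = -0.5 * s @ W @ s.
rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))        # three random +/-1 patterns

W = sum(np.outer(p, p) for p in patterns) / N      # Hebbian storage
np.fill_diagonal(W, 0)                             # no self-connections

s = patterns[0].copy()
s[:20] *= -1                                       # corrupt 20% of the cue
for _ in range(10):                                # synchronous recall updates
    s = np.where(W @ s >= 0, 1, -1)
print((s == patterns[0]).mean())                   # fraction recovered, typically 1.0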


Gerstner Lab reposted

📢 I'm on the faculty job market this year! My research explores the foundations of deep learning and analyzes learning and feature geometry for Gaussian inputs. I detail my major contributions 👇 Retweet if you find it interesting and help me spread the word! My DMs are open. 1/n


Gerstner Lab reposted

🚨Preprint alert🚨 In an amazing collaboration with @GruazL53069, @sobeckerneuro, & J Brea, we explored a major puzzle in neuroscience & psychology: *What are the merits of curiosity⁉️* osf.io/preprints/psya… 1/7


Gerstner Lab reposted

Headed to the @BernsteinNeuro Conference this weekend and interested in how biological computation is performed across different scales, from single neurons to populations and whole-brain, and even astrocytes and the whole body? Drop by our workshop, co-organized w/ @neuroprinciples


Gerstner Lab reposted

1. Synaptic weight scaling in O(1/N) self-induces a form of (implicit) spatial structure in networks of spiking neurons, as the number of neurons N tends to infinity. This is what D.T. Zhou, P.-E. Jabin and I prove in arxiv.org/abs/2409.06325.
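
For context, the O(1/N) convention referenced here is the standard mean-field scaling (the notation below is assumed, not taken from the paper): each synaptic weight shrinks as 1/N, so the summed input to a neuron stays of order one as N grows.

% Standard 1/N mean-field scaling (notation assumed, not from the paper):
w_{ij} = \frac{J_{ij}}{N},
\qquad
I_i(t) = \sum_{j=1}^{N} w_{ij}\, s_j(t)
       = \frac{1}{N} \sum_{j=1}^{N} J_{ij}\, s_j(t) = O(1)
\quad \text{as } N \to \infty .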


Gerstner Lab reposted

Next Monday, I'll present to the EfficientML reading group 📒 how we exploit symmetries to identify the weights of a black-box network. Have a look if you're interested in Expand-and-Cluster: sites.google.com/view/efficient… Thanks @osaukh for the invite!

📕Recovering network weights from a set of input-output neural activations 👀 Ever wondered if this is even possible? 🤔 Check out Expand-and-Cluster, our latest paper at #ICML2024! Thu. 11:30 #2713 proceedings.mlr.press/v235/martinell… A thread 🧵 ⚠️ Loss landscape and symmetries ahead ⚠️
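
The symmetries in question include, at minimum, the two standard ones of ReLU layers: permuting hidden units, and rescaling a unit's incoming weights by a > 0 while dividing its outgoing weights by a, both leave the input-output function unchanged, so weights can only be identified up to such transformations. A numerical check of both (illustrative; this is not the Expand-and-Cluster algorithm itself):

import numpy as np

# Verify two standard ReLU-network symmetries that make black-box weight
# identification ambiguous (illustrative; not the paper's algorithm).
rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)
W1 = rng.normal(size=(8, 4))                       # hidden-layer weights (8 units)
W2 = rng.normal(size=(3, 8))                       # readout weights

def f(A, B, x):
    return B @ relu(A @ x)

x = rng.normal(size=4)
perm = rng.permutation(8)
a = rng.uniform(0.5, 2.0, size=8)                  # positive per-unit rescalings

W1b = (a[:, None] * W1)[perm]                      # rescale rows, then permute
W2b = (W2 / a[None, :])[:, perm]                   # compensate and permute columns
print(np.allclose(f(W1, W2, x), f(W1b, W2b, x)))   # True: same function, new weights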



Gerstner Lab reposted

And it's a book! Together with @okaysteve, we have gathered some of the leading experts in the field, who have each generously contributed a chapter to what has become the first-ever book on #engram biology! 📖🔥🧠 Come take a look! ⬇️⬇️⬇️ link.springer.com/book/10.1007/9…


Gerstner Lab reposted

Approximation-free training method for deep SNNs using time-to-first-spike coding.

Today in @NatureComms 📝 Open puzzle: training event-based spiking neurons is mysteriously impossible. @Ana__Stan 👩🏻‍🔬 shows it becomes possible using a theoretical equivalence between ReLU CNNs and event-based CNNs. Congrats! 🧵 nature.com/articles/s4146…
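
In time-to-first-spike coding, each neuron emits at most one spike and larger activations map to earlier spike times. A toy encoder in that spirit (the specific mapping below is an assumption for illustration; the paper's exact construction differs):

import numpy as np

# Toy time-to-first-spike (TTFS) encoding (mapping assumed, not the paper's
# exact construction): each neuron fires at most once, and larger activations
# produce earlier spikes, so information is carried by spike latencies.
def ttfs_encode(activations, t_max=10.0):
    """Map nonnegative activations to spike times; zero activation = no spike."""
    a = np.asarray(activations, dtype=float)
    times = np.full(a.shape, np.inf)               # inf = neuron stays silent
    fired = a > 0
    times[fired] = t_max / (1.0 + a[fired])        # bigger activation -> earlier spike
    return times

print(ttfs_encode([0.0, 0.5, 2.0, 9.0]))
# [inf  6.67  3.33  1.0] -- the strongest input spikes first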



Gerstner Lab reposted

Today in @NatureComms 📝 Open puzzle: training event-based spiking neurons is mysteriously impossible. @Ana__Stan 👩🏻‍🔬 shows it becomes possible using a theoretical equivalence between ReLU CNNs and event-based CNNs. Congrats! 🧵 nature.com/articles/s4146…


Gerstner Lab reposted

📕Recovering network weights from a set of input-output neural activations 👀 Ever wondered if this is even possible? 🤔 Check out Expand-and-Cluster, our latest paper at #ICML2024! Thu. 11:30 #2713 proceedings.mlr.press/v235/martinell… A thread 🧵 ⚠️ Loss landscape and symmetries ahead ⚠️


Gerstner Lab reposted

Excited to share a blog post on our recent work (arxiv.org/abs/2311.01644) on neural network distillation: bsimsek.com/post/copy-aver… If you liked the toy models of superposition or the pizza and clock papers, you might enjoy reading this blog post!


Gerstner Lab reposted

Normative theories show that a surprise signal is necessary to speed up learning after an abrupt change in the environment; but how can such a speed-up be implemented in the brain? 🧠 We propose a mechanism in our new paper in @PLOSCompBiol. doi.org/10.1371/journa…
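
One generic way to obtain such a speed-up (illustrative only; not necessarily the mechanism proposed in the paper) is to let a surprise signal transiently raise the learning rate of an otherwise slow online estimator:

import numpy as np

# Illustrative surprise-modulated learning (functional form and constants are
# assumptions, not taken from the paper). An online mean estimator boosts its
# learning rate when the prediction error is large relative to the expected
# noise, so it re-adapts quickly after an abrupt change in the environment.
rng = np.random.default_rng(3)
sigma = 1.0
truth = np.r_[np.zeros(200), 5.0 * np.ones(200)]   # abrupt change at t = 200
obs = truth + sigma * rng.normal(size=truth.size)

mu, base_lr = 0.0, 0.02
for x in obs:
    err = x - mu
    surprise = err**2 / sigma**2                   # squared z-score of the error
    lr = base_lr + (1 - base_lr) * surprise / (surprise + 50.0)
    mu += lr * err                                 # large surprise -> fast update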


Gerstner Lab reposted

What do we talk about when we talk about "curiosity"? 🤔 In our new paper in @TrendsNeuro (with @KacperKond, @compneuro_epfl & @sebhaesler), we address this question by reviewing the behavioral signatures, neural mechanisms, and comp. models of curiosity: doi.org/10.1016/j.tins…


Gerstner Lab reposted

Excited that our new position piece is out! In this article, @summerfieldlab and I review three recent advances in using deep RL to model cognitive flexibility, a hallmark of human cognition: sciencedirect.com/science/articl… (1/4)


Gerstner Lab reposted

Intriguing new paper from the Gerstner lab proposes a theory for sparse coding and synaptic plasticity in cortical networks to overcome spurious input correlations. doi.org/10.1371/journa…

Most methods of sparse coding or ICA assume the 'pre-whitening' of inputs. @cstein06 shows that this is not necessary with a smart local Hebbian learning rule and ReLU neurons! Paper just out in @PLOSCompBiol: doi.org/10.1371/journa…
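
For flavor, here is a toy local Hebbian update with ReLU units running directly on raw (non-whitened) inputs; this Oja-style rule is purely an illustration, and the learning rule actually derived in the paper differs in its details:

import numpy as np

# Toy Oja-style Hebbian learning with ReLU units on non-whitened inputs
# (purely illustrative; not the rule from the paper). The Hebbian term pulls
# weights toward inputs that drive a unit; the decay term keeps them bounded.
rng = np.random.default_rng(4)
n_in, n_out, eta = 16, 8, 0.01
W = rng.normal(0.0, 0.1, size=(n_out, n_in))

for _ in range(2000):
    x = np.abs(rng.normal(size=n_in))              # nonnegative random input sample
    y = np.maximum(W @ x, 0.0)                     # ReLU responses
    W += eta * (np.outer(y, x) - (y**2)[:, None] * W)   # local Hebbian + decay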


