Max David Gupta

@MaxDavidGupta1

a symbolic Bayesian in a continuously differentiable world · CS @Princeton · Math @Columbia


I was about to make fun of my parents for getting excited when trying ChatGPT for the first time, but then I realized there are tech bros my age who are like this when Cursor drops Composer V2.3.2.5


I started writing on Substack! The first piece is on how breaking the IID assumption while training neural networks leads to different learned representational structures. I'll try to post weekly with short-form updates from the experiences and experiments I run at @cocosci_lab
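A minimal sketch of the kind of manipulation described above (my own toy illustration, not code from the post): an IID ordering shuffles examples globally, while a "blocked" non-IID ordering presents one class at a time, so any difference in what gets learned comes from ordering alone. The toy data and class structure here are assumptions.

```python
# Toy contrast between an IID shuffle of training examples and a "blocked"
# non-IID ordering in which one class is presented at a time. The ordering is
# the only thing that changes between the two conditions.
import random

def make_toy_data(n_per_class=100, n_classes=5, seed=0):
    rng = random.Random(seed)
    # each example is (feature_vector, label); features are just noisy class ids
    return [([c + rng.gauss(0, 0.1) for _ in range(8)], c)
            for c in range(n_classes) for _ in range(n_per_class)]

def iid_order(data, seed=0):
    # IID-style presentation: a global shuffle, so neighboring examples
    # are (approximately) independent of each other.
    data = list(data)
    random.Random(seed).shuffle(data)
    return data

def blocked_order(data):
    # Non-IID "blocked" presentation: all of class 0, then all of class 1, ...
    # Consecutive examples are highly correlated, violating IID.
    return sorted(data, key=lambda xy: xy[1])

if __name__ == "__main__":
    data = make_toy_data()
    print("first 10 labels, IID order:    ", [y for _, y in iid_order(data)[:10]])
    print("first 10 labels, blocked order:", [y for _, y in blocked_order(data)[:10]])
```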


Max David Gupta reposted

I'm excited to share that my new postdoctoral position is going so well that I submitted a new paper at the end of my first week! A thread below

Sensory Compression as a Unifying Principle for Action Chunking and Time Coding in the Brain biorxiv.org/content/10.110… #biorxiv_neursci



Mech interp is great for people who were good at calc and interested in the brain, but too squeamish to become neurosurgeons? Sign me up.


Jung: "Never do human beings speculate more, or have more opinions, than about things which they do not understand." This rings true for me today. I'm grateful to be a part of institutions that prefer the scientific method to wanton speculation.


Love this take on RL in day-to-day life (mimesis is such a silent killer):

Becoming an RL diehard in the past year and thinking about RL for most of my waking hours inadvertently taught me an important lesson about how to live my own life. One of the big concepts in RL is that you always want to be “on-policy”: instead of mimicking other people’s…



ICML is everyone's chance to revisit the days we peaked in HS multi-variable calc


Max David Gupta reposted

I am starting to think sycophancy is going to be a bigger problem than pure hallucination as LLMs improve. Models that won't tell you directly when you are wrong (and instead justify your correctness) are ultimately more dangerous to decision-making than models that are sometimes wrong.


Max David Gupta reposted

No, your brain does not perform better after or during LLM use. Check our paper: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task": brainonllm.com


Max David Gupta reposted

🤖🧠Paper out in Nature Communications! 🧠🤖 Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths? Our answer: Use meta-learning to distill Bayesian priors into a neural network! nature.com/articles/s4146… 1/n

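A minimal sketch of the general idea (my own illustration, not the paper's code): tasks are sampled from a Gaussian prior, and a Reptile-style meta-learning loop moves the initialization toward parameters that adapt quickly to any task drawn from that prior, so the initialization itself comes to encode the prior. The 1-D linear-regression setup and all hyperparameters below are assumptions.

```python
# "Distilling a prior" via meta-learning, sketched on a toy problem:
# each task is a 1-D linear regression whose true weight is drawn from a
# Gaussian prior; Reptile-style outer updates pull the meta-initialization
# toward the task-adapted weights.
import numpy as np

rng = np.random.default_rng(0)
PRIOR_MEAN, PRIOR_STD = 2.0, 0.5       # the "Bayesian prior" over task weights

def sample_task():
    """Draw a task (here: a true regression weight) from the prior."""
    return rng.normal(PRIOR_MEAN, PRIOR_STD)

def inner_adapt(w, true_w, steps=5, lr=0.1, n=20):
    """A few SGD steps on squared-error loss for data from one task."""
    x = rng.normal(size=n)
    y = true_w * x + rng.normal(scale=0.1, size=n)
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w = w - lr * grad
    return w

# Reptile outer loop: nudge the meta-initialization toward the adapted weights.
meta_w = 0.0
for step in range(2000):
    adapted = inner_adapt(meta_w, sample_task())
    meta_w += 0.05 * (adapted - meta_w)

print(f"meta-learned initialization: {meta_w:.2f} (prior mean: {PRIOR_MEAN})")
```

In this toy version the meta-learned initialization drifts toward the prior mean, which is the sense in which the prior gets "distilled" into the network's parameters.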

can ideas from hard negative mining in contrastive learning play into generating valid counterfactual reasoning paths? or am I way off base? curious to hear what people think
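For context, a minimal sketch of what hard negative mining means in contrastive learning (my own illustration, not a counterfactual-reasoning method): among candidate negatives, keep the ones most similar to the anchor, since those dominate the gradient of an InfoNCE-style loss. The toy embeddings below are assumptions.

```python
# Hard negative mining: rank candidate negatives by similarity to the anchor
# and keep only the hardest (most similar) ones for the contrastive loss.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mine_hard_negatives(anchor, negatives, k=2):
    """Return the k negatives with the highest cosine similarity to the anchor."""
    sims = [cosine(anchor, n) for n in negatives]
    order = np.argsort(sims)[::-1]          # most similar first
    return [negatives[i] for i in order[:k]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    anchor = rng.normal(size=16)
    negatives = [rng.normal(size=16) for _ in range(10)]
    hard = mine_hard_negatives(anchor, negatives, k=2)
    print("hardest-negative similarities:",
          [round(cosine(anchor, h), 3) for h in hard])
```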

