
Gaurav Kumar

@gauravkmr__

Junior Research Fellow at IISER Pune. Complex Networks. MSc Physics, IIT Gandhinagar.

Gaurav Kumar reposted

To paraphrase something I tell my students, especially when they are starting a research project: there are certainly many people in the world who think better than we do. But the competition thins out when it comes to the people who take their thoughts and actually 'do' something with them.…


Gaurav Kumar reposted

All of this was still not enough for Air India. After the torturous treatment, I somehow landed at JFK. Surprise, guess what: they had conveniently offloaded more than 100 pieces of checked-in luggage, including mine, back in Delhi.

I was booked on AI101 from Delhi to JFK on 21 June. A few hours before departure, I got a message that my itinerary had changed due to "*unforeseen operational reasons*." I was scared, as operational issues at Air India so soon after the recent incident immediately raised safety concerns. (1/7)



Gaurav Kumar reposted

I love chaos. Made with #python #numba #numpy #matplotlib

[Image from @S_Conradi's tweet]
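
For the curious, this is the flavor of figure those libraries make easy. A minimal sketch (my own choice of map, not necessarily what @S_Conradi plotted): the logistic-map bifurcation diagram in plain numpy/matplotlib, with numba optional for speed.

```python
import numpy as np
import matplotlib.pyplot as plt

# Bifurcation diagram of the logistic map x_{n+1} = r * x_n * (1 - x_n).
r = np.linspace(2.5, 4.0, 2000)          # control parameter, one column per value
x = 0.5 * np.ones_like(r)                # one trajectory per r value

for _ in range(500):                      # discard transients
    x = r * x * (1 - x)

fig, ax = plt.subplots(figsize=(8, 5))
for _ in range(200):                      # plot points on the attractor
    x = r * x * (1 - x)
    ax.plot(r, x, ",k", alpha=0.25)

ax.set_xlabel("r")
ax.set_ylabel("x")
ax.set_title("Logistic map bifurcation diagram")
plt.show()
```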

Gaurav Kumar reposted

Major update on factoring RSA integers with Shor's algorithm from Craig Gidney at #Google: the physical #qubit count needed to break a 2048-bit RSA key is reduced from ~20 million to ~1 million, using yoked surface codes, magic state cultivation and some efficient arithmetic. arxiv.org/abs/2505.15917 #quantum
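
For context, only the order-finding step of Shor's algorithm is quantum; the reduction from factoring to order finding is classical. A toy sketch of that reduction (function names and the brute-force order finder are mine; the quantum computer's job is to replace that brute-force loop):

```python
import math
import random

def order(a, N):
    """Brute-force multiplicative order of a mod N (the step a quantum computer speeds up)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_reduction(N):
    """Factor an odd composite N via order finding, as in Shor's algorithm."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:                       # lucky: a already shares a factor with N
            return g, N // g
        r = order(a, N)
        if r % 2 == 1:
            continue                    # need an even order
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                    # trivial square root of 1, retry
        p = math.gcd(y - 1, N)
        if 1 < p < N:
            return p, N // p

print(shor_classical_reduction(15))      # e.g. (3, 5)
```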


Gaurav Kumar reposted

Many recent posts on free energy. Here is a summary from my class “Statistical mechanics of learning and computation” on the many relations between free energy, KL divergence, large deviation theory, entropy, Boltzmann distribution, cumulants, Legendre duality, saddle points,…

[Image from @SuryaGanguli's tweet]
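
One identity such summaries usually center on (my paraphrase, not the actual slide): for a trial distribution q and the Boltzmann distribution p, the variational free energy splits into the equilibrium free energy plus a KL term,

```latex
p(x) \;=\; \frac{e^{-\beta E(x)}}{Z}, \qquad \beta = 1/T,
\qquad
F[q] \;=\; \mathbb{E}_q[E] \;-\; T\,S[q]
      \;=\; -\,T\log Z \;+\; T\, D_{\mathrm{KL}}\!\left(q \,\|\, p\right).
```

So the Boltzmann distribution is the unique minimizer of F[q], with minimum value -T log Z; derivatives of log Z with respect to beta give the cumulants of the energy, and -beta F is (roughly) the Legendre transform of the entropy S(E), which is where large deviation theory enters.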

Gaurav Kumar reposted

Exactly how I like learning about this stuff. MCMC is not a difficult concept to understand if you have the right person explain it to you.

[Image from @omarsar0's tweet]
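
If a concrete toy helps: the simplest MCMC, random-walk Metropolis, fits in a few lines. This is a minimal sketch of my own (target, step size and seed are arbitrary), not anything from the linked material:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    """Unnormalized log-density of the target (here a standard normal)."""
    return -0.5 * x**2

def metropolis(n_samples, step=1.0, x0=0.0):
    """Propose a local move, accept with probability min(1, p(x') / p(x))."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + step * rng.normal()
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal                 # accept the move
        samples.append(x)                # rejections repeat the current state
    return np.array(samples)

chain = metropolis(50_000)
print(chain.mean(), chain.std())         # should come out close to 0 and 1
```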

Gaurav Kumar reposted

The mechanism behind double descent in ML (specifically in ridgeless least squares regression) is not just similar but _identical_ to that which in physics causes massless 4D phi^4 theory to go from being classically scale-free to picking up a scale/mass in the infrared.
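
A rough numerical sketch of the double descent being referred to (my own setup: ReLU random features, minimum-norm least squares via the pseudoinverse, sizes chosen so the test error typically peaks near the interpolation threshold p ≈ n):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 50, 500, 30

# Ground truth: a noisy linear signal in d = 30 input dimensions.
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.5 * rng.normal(size=n)

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Ridgeless fit on p random ReLU features; pinv returns the minimum-norm solution.
for p in [5, 10, 25, 45, 50, 55, 75, 150, 400]:
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    F_tr, F_te = np.maximum(X_tr @ W, 0), np.maximum(X_te @ W, 0)
    beta = np.linalg.pinv(F_tr) @ y_tr
    print(f"p = {p:3d}   test MSE = {np.mean((F_te @ beta - y_te) ** 2):.3f}")
```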

Does anyone have any random fun facts about a very niche subject? I'm bored and love learning random things



Gaurav Kumar reposted

The Rothschild family's mansion in Vienna was seized by the Nazis in 1938 and used as Eichmann's headquarters. After the war, the Austrian government refused to return it. They still haven't. Instead, they pressured the family into 'donating' the property. The same happened to many other Jewish families.


Gaurav Kumar reposted

David Tong provides his beautiful, insightful, original lectures on physics free of charge. His latest gift to us is on mathematical biology. damtp.cam.ac.uk/user/tong/math…


Gaurav Kumar reposted

The Unreasonable Effectiveness of Linear Algebra. If you really want to understand what’s happening in machine learning/AI, coming to grips with the basics of linear algebra is super important. I continue to be blown away by how much of ML becomes increasingly intuitive as one…

[Image from @anilananth's tweet]
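
A tiny illustration of the point (my example, not the thread's): strip away the framework code and a forward pass of a small neural network is nothing but matrix products plus an elementwise nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-layer MLP as raw linear algebra: y = W2 @ relu(W1 @ x + b1) + b2
d_in, d_hidden, d_out = 8, 16, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), np.zeros(d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), np.zeros(d_out)

x = rng.normal(size=d_in)
h = np.maximum(W1 @ x + b1, 0)   # hidden layer: matrix-vector product, then ReLU
y = W2 @ h + b2                  # output layer: another matrix-vector product
print(y)
```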

Gaurav Kumar reposted

Been working on "Magic state cultivation: growing T states as cheap as CNOT gates" all year. It's finally out: arxiv.org/abs/2409.17595 The reign of the T gate is coming to an end. It's now nearly the cost of a lattice surgery CNOT gate, and I bet there's more improvements yet.

[Image from @CraigGidney's tweet]

Gaurav Kumar reposted

10 years ago, Kelly et al. showed that bigger repetition-code circuits do better. The dream was to do the same for a full quantum code. This year we finally did it. arxiv.org/abs/2408.13687 has d=5 surface codes twice as good as d=3. And d=7 twice as good again, outliving the physical qubits.

[Image from @CraigGidney's tweet]
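
"Twice as good" here is the usual error-suppression factor: below threshold, the logical error rate per cycle falls roughly exponentially with code distance, so increasing d by 2 divides it by a constant Lambda (about 2 in the experiment described in the tweet):

```latex
\varepsilon_d \;\approx\; \varepsilon_0\,\Lambda^{-(d+1)/2},
\qquad
\Lambda \;\equiv\; \frac{\varepsilon_d}{\varepsilon_{d+2}} \;\approx\; 2 .
```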

Gaurav Kumar reposted

For anyone curious about nonlinear dynamics and chaos, including students taking a first course in the subject: my lectures are freely available here. m.youtube.com/playlist?list=…


Gaurav Kumar reposted

"Statistical Laws in Complex Systems", new pre-print monograph covering the history, traditional use, and modern debates in this topic. arxiv.org/abs/2407.19874 Comments and suggestions are welcome.

[Image from @EduardoGAltmann's tweet]

Gaurav Kumar reposted

"The Ising model celebrates a century of interdisciplinary contributions" I studied the Ising model in my phd in statistical physics. It gave me concepts to talk with complexity folks in political science, psychology and more. Any good Ising stories? nature.com/articles/s4426…


Gaurav Kumar reposted

Can we tell apart short-time & exponential-time quantum dynamics? In arxiv.org/abs/2407.07754, we found the answer to be "No" even though it seems easy. This discovery leads to new implications in quantum advantages, faster learning, hardness of recognizing phases of matter ...

[Image from @RobertHuangHY's tweet]

Gaurav Kumar reposted

The Laplacian of a graph is (up to a sign change...) a positive semi-definite operator. en.wikipedia.org/wiki/Laplacian…

[Image from @gabrielpeyre's tweet]
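
The one-line reason, for reference (with the sign convention L = D - A for a graph with nonnegative edge weights): the Laplacian quadratic form is a sum of squares over edges,

```latex
x^{\top} L\, x \;=\; \sum_{(i,j)\in E} w_{ij}\,\bigl(x_i - x_j\bigr)^2 \;\ge\; 0,
\qquad L = D - A,
```

so every eigenvalue is nonnegative, and the constant vector lies in the kernel (one zero eigenvalue per connected component).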

Gaurav Kumar reposted

Hopfield networks are a foundational idea in both neuroscience & ML 🧠💻 The latest video introduces the world of energy-based models and Hebbian learning 📹 A first step to covering Boltzmann/Helmholtz machines, Predictive coding & Free Energy later :) youtu.be/1WPJdAW-sFo?si…

[Linked video from @ArtemKRSV: "A Brain-Inspired Algorithm For Memory" (youtube.com)]
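
A minimal Hopfield-network sketch in the same spirit (my own, not taken from the video): Hebbian weights built from stored patterns, then asynchronous updates that can only keep the energy the same or lower it.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100
patterns = rng.choice([-1, 1], size=(3, N))   # three random +/-1 memories

# Hebbian learning: W_ij proportional to the correlation of units i and j.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def energy(s):
    return -0.5 * s @ W @ s

def recall(s, n_steps=5 * N):
    """Asynchronous sign updates; each accepted flip never raises the energy."""
    s = s.copy()
    for _ in range(n_steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt a stored pattern, then let the network clean it up.
noisy = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
noisy[flip] *= -1

recovered = recall(noisy)
print("overlap with stored memory:", (recovered @ patterns[0]) / N)  # typically close to 1
```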


Gaurav Kumar reposted

In our new preprint, we explore how the economy can be analysed through the lens of complexity theory, using networks and simulations. researchgate.net/publication/38… #complexity #networks #simulation #economics #complexnetworks #research #datascience

[Image from @FranciscoICMC's tweet]
