
Peter Richtarik

@peter_richtarik

Federated Learning Guru. Tweeting since 20.5.2020. Lived in 🇸🇰🇺🇸🇧🇪🇬🇧🇸🇦


Using AI to help you on your homework is like using a robot to help you lift weights at the gym.



Peter Richtarik reposted

Diana was the driving force behind all our variational inference work, and any department would be lucky to have her!

I'm on the academic job market! I design and analyze probabilistic machine-learning methods---motivated by real-world scientific constraints, and developed in collaboration with scientists in biology, chemistry, and physics. A few highlights of my research areas are:



Random photo of KAUST


This way of thinking about floats gave rise to the "natural" quantization scheme (arxiv.org/abs/1905.10988): randomized rounding to the nearest (negative or positive) integer power of 2.

The most intuitive explanation of floats I've ever come across, courtesy of @fabynou fabiensanglard.net/floating_point…

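A minimal sketch of that "natural" rounding idea, written only from the description in the tweet (not code from the paper): each nonzero value is randomly rounded to one of its two bracketing integer powers of 2, with the up/down probabilities chosen so the result is unbiased in expectation.

```python
import numpy as np

def natural_round(x, rng=None):
    """Randomized ("natural") rounding of each entry to an adjacent power of 2.

    |x| is rounded to 2^floor(log2|x|) or to the next power of 2 above it,
    with the probability of rounding up chosen so the output is unbiased
    (its expectation equals x). Sign is preserved; zeros stay zero.
    Illustration only, based on the scheme described in the tweet.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=np.float64)
    sign, a = np.sign(x), np.abs(x)
    out = np.zeros_like(a)
    nz = a > 0
    low = 2.0 ** np.floor(np.log2(a[nz]))   # lower bracketing power of 2
    high = 2.0 * low                         # upper bracketing power of 2
    p_up = (a[nz] - low) / (high - low)      # this choice makes E[output] = x
    out[nz] = np.where(rng.random(low.shape) < p_up, high, low)
    return sign * out

# e.g. 3.0 rounds to 4.0 or 2.0 with probability 1/2 each, so E[output] = 3.0
print(natural_round([3.0, -0.75, 10.0, 0.0]))
```

Because the output is always an exact power of 2, only a sign and an exponent need to be stored or communicated, which is what makes this attractive for compressed distributed training.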


Peter Richtarik reposted

We've just finished some work on improving the sensitivity of Muon to the learning rate, and exploring a lot of design choices. If you want to see how we did this, follow me... 1/x (Work led by the amazing @CrichaelMawshaw)


Random photo of KAUST. (working from this spot today)


This is literally my father's bathtub right now. Except he has 2 swimmers inside instead. It was 4 several days ago.

non-slavic people thinking this is a shitpost and not real lmao😭😭



Kaja's Google PhD Fellowship featured around KAUST...


KAUST PhD student Kaja Gruntkowska has been awarded a @Google PhD Fellowship, becoming the first-ever recipient from the GCC countries. Recognized for her work in Algorithms and Optimization, her research advances both the theory and practice of optimization for machine…



Random photo of KAUST.


Peter Richtarik reposted

I feel strongly that, while I understand the challenges they're facing in running this, this is the wrong decision. What arXiv is in principle versus what it is in practice is very different. In practice there are already moderation rules, but they're so minimally enforced (due…

The Computer Science section of @arxiv is now requiring prior peer review for Literature Surveys and Position Papers. Details in a new blog post



Bad move, indeed.

Arxiv has been such a wonderful service but I think this is a step in the wrong direction. We have other venues for peer review. To me the value of arxiv lies precisely in its lack of excessive moderation. I'd prefer it as "github for science," rather than yet another journal.



Peter Richtarik reposted

I firmly believe we are at a watershed moment in the history of mathematics. In the coming years, using LLMs for math research will become mainstream, and so will Lean formalization, made easier by LLMs. (1/4)


Peter Richtarik reposted

I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me. 1/3


Peter Richtarik reposted

100% agree on the productivity boost. One just needs patience to correct mistakes, which are more subtle than before imo. I had a nice interaction with GPT-5-pro while proving a convex analysis lemma: arxiv.org/abs/2510.26647 The model didn’t write the full proof, but the…


Totally agree with @ErnestRyu that AI helpers will become very useful for research. But in the near future the biggest help will be with *informal* math, the kind we work out with our collaborators/grad students on a whiteboard. I already use frontier models to help write/debug…



Peter Richtarik reposted

Our research group at the University of Zurich (Switzerland) is seeking a PhD candidate at the intersection of theory and practice in areas such as distributed optimization, federated learning, machine learning, privacy, or unlearning. Apply here! apply.mnf.uzh.ch/positiondetail…


Peter Richtarik reposted

Yuri Nesterov is a foundational figure in optimization, best known for Nesterov's accelerated gradient descent (1983). This "momentum" method dramatically speeds up convergence, making it a cornerstone of modern machine learning. He also co-developed the theory of interior-point…

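For readers who have not seen the "momentum" method written out, here is a minimal, untuned sketch (my own illustration, not tied to any particular library): the only difference from classical heavy-ball momentum is that the gradient is evaluated at the look-ahead point x + momentum * v.

```python
import numpy as np

def nesterov_agd(grad, x0, lr, momentum=0.9, steps=100):
    """Minimal sketch of Nesterov's accelerated gradient (momentum form).

    grad: callable returning the gradient at a point; lr: step size.
    The gradient is taken at the look-ahead point x + momentum * v,
    which is what distinguishes Nesterov's method from plain momentum.
    Illustration only; constants are not tuned.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        lookahead = x + momentum * v          # peek ahead along the momentum
        v = momentum * v - lr * grad(lookahead)
        x = x + v
    return x

# Example: minimize the quadratic f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 0.2], [0.2, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A.T @ (A @ x - b)
print(nesterov_agd(grad, x0=[0.0, 0.0], lr=0.05, steps=200))  # ≈ solve(A, b)
```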

Peter Richtarik reposted

We bridge theory & practice: prior work studies an idealized SVD update. We analyze the implemented inexact (Newton–Schulz) iteration and show how approximation quality shifts the best LR & test on nanoGPT. With @SultanAlra60920 @bremen79 @peter_richtarik arxiv.org/abs/2510.19933
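For context, the Newton–Schulz iteration referenced here orthogonalizes a matrix (pushes all of its singular values toward 1) using only matrix multiplications. Below is a generic cubic textbook variant as a hedged illustration; Muon itself uses a differently tuned odd-polynomial iteration, and the tweet's point is that the quality of this inexact approximation, rather than an idealized SVD step, is what shifts the best learning rate in practice.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=15):
    """Cubic Newton-Schulz iteration approximating the orthogonal polar
    factor of G (G with all singular values pushed toward 1).

    Generic illustration only; Muon's implemented iteration uses tuned
    polynomial coefficients, and its exactness is what the paper analyzes.
    """
    X = np.asarray(G, dtype=np.float64)
    X = X / (np.linalg.norm(X) + 1e-12)   # Frobenius scaling: singular values < 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X   # each step pushes singular values toward 1
    return X

G = np.random.randn(4, 3)
Q = newton_schulz_orthogonalize(G)
print(np.round(Q.T @ Q, 3))               # ≈ identity: columns are near-orthonormal
```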


amazing...

Meet the Krause corpuscle, the neuron responsible for sensing vibrations of sexual touch. It is most sensitive to frequencies around 40 to 80 hertz, which is precisely the range of vibrating sex toys. quantamagazine.org/touch-our-most…



Doing optimization for ML/AI? Apply!

This tweet is unavailable.

Random photo of KAUST

