
Peter Richtarik

@peter_richtarik

Federated Learning Guru. Tweeting since 20.5.2020. Lived in 🇸🇰🇺🇸🇧🇪🇬🇧🇸🇦

Peter Richtarik reposted

I feel strongly that, while I understand the challenges they face in running this, this is the wrong decision. What arXiv is in principle versus what it is in practice is very different. In practice there are already moderation rules, but they're so minimally enforced (due…

The Computer Science section of @arxiv is now requiring prior peer review for Literature Surveys and Position Papers. Details in a new blog post



Bad move, indeed.

Arxiv has been such a wonderful service but I think this is a step in the wrong direction. We have other venues for peer review. To me the value of arxiv lies precisely in its lack of excessive moderation. I'd prefer it as "github for science," rather than yet another journal.



Peter Richtarik reposted

I firmly believe we are at a watershed moment in the history of mathematics. In the coming years, using LLMs for math research will become mainstream, and so will Lean formalization, made easier by LLMs. (1/4)


Peter Richtarik reposted

I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me. 1/3


Peter Richtarik reposted

100% agree on the productivity boost. One just needs patience to correct mistakes, which are more subtle than before imo. I had a nice interaction with GPT-5-pro while proving a convex analysis lemma: arxiv.org/abs/2510.26647 The model didn’t write the full proof, but the…


Totally agree with @ErnestRyu that AI helpers will become very useful for research. But in the near future the biggest help will be with *informal* math, the kind we work out with our collaborators/grad students on a whiteboard. I already use frontier models to help write/debug…



Peter Richtarik reposted

Our research group at the University of Zurich (Switzerland) is seeking a PhD candidate at the intersection of theory and practice in areas such as distributed optimization, federated learning, machine learning, privacy, or unlearning. Apply here! apply.mnf.uzh.ch/positiondetail…


Peter Richtarik reposted

Yuri Nesterov is a foundational figure in optimization, best known for Nesterov's accelerated gradient descent (1983). This "momentum" method dramatically speeds up convergence, making it a cornerstone of modern machine learning. He also co-developed the theory of interior-point…

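For readers curious what the "momentum" method looks like in code, here is a minimal sketch of Nesterov's look-ahead gradient update; the function name, test problem, and hyperparameters are illustrative choices, not taken from the tweet:

```python
# Minimal sketch of Nesterov's accelerated gradient method.
# The key difference from plain momentum: the gradient is evaluated
# at the look-ahead point y = x + momentum * v, not at x itself.

def nesterov_agd(grad, x0, lr, momentum, steps):
    """Run Nesterov accelerated gradient from x0; returns the final iterate."""
    x = list(x0)
    v = [0.0] * len(x0)
    for _ in range(steps):
        y = [xi + momentum * vi for xi, vi in zip(x, v)]    # look-ahead point
        g = grad(y)                                         # gradient at y
        v = [momentum * vi - lr * gi for vi, gi in zip(v, g)]
        x = [xi + vi for xi, vi in zip(x, v)]
    return x

# Example: minimize the convex quadratic f(x) = 0.5*(3*x0^2 + x1^2),
# whose minimizer is (0, 0).
grad = lambda x: [3.0 * x[0], 1.0 * x[1]]
x_star = nesterov_agd(grad, [5.0, 5.0], lr=0.1, momentum=0.9, steps=200)
```

On ill-conditioned quadratics like this one, the look-ahead step is what buys the faster convergence rate compared to plain gradient descent.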

Peter Richtarik reposted

We bridge theory & practice: prior work studies an idealized SVD update. We analyze the implemented inexact (Newton–Schulz) iteration and show how approximation quality shifts the best LR & test performance on nanoGPT. With @SultanAlra60920 @bremen79 @peter_richtarik arxiv.org/abs/2510.19933
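For context on the Newton–Schulz iteration mentioned above, here is a minimal sketch of the classical cubic orthogonalization variant, which approximates the orthogonal polar factor of a matrix without computing an SVD; this is an illustrative textbook version, not the paper's exact implementation, and the function name and step count are arbitrary:

```python
import numpy as np

def newton_schulz_orthogonalize(A, steps=30):
    """Approximate the orthogonal polar factor U V^T of A (A = U S V^T)
    via the cubic Newton-Schulz iteration X <- 1.5 X - 0.5 X X^T X.
    The Frobenius normalization puts all singular values in (0, 1],
    inside the iteration's region of convergence."""
    X = A / np.linalg.norm(A)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

# Example: orthogonalize a random square matrix; Q Q^T should be near I.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = newton_schulz_orthogonalize(A)
```

Each step uses only matrix multiplications, which is why this kind of inexact iteration is attractive on GPUs compared to an exact SVD.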


amazing...

Meet the Krause corpuscle, the neuron responsible for sensing vibrations of sexual touch. It is most sensitive to frequencies around 40 to 80 hertz, which is precisely the range of vibrating sex toys. quantamagazine.org/touch-our-most…



Doing optimization for ML/AI? Apply!


Random photo of KAUST


Proud of my PhD student!!!

KAUST PhD student Kaja Gruntkowska has been awarded a @Google PhD Fellowship, becoming the first-ever recipient from the GCC countries. Recognized for her work in Algorithms and Optimization, her research advances both the theory and practice of optimization for machine…



Random photo of KAUST


Obvious

We’ve found a ton of value hiring folks with strong theory backgrounds with little to no production ML experience. One of our members of technical staff got his PhD in pure math (the geometry of black holes) and had no prior ML experience. Within days of hiring him we released our…



Explaining the same things twice (or even more times), but differently, makes new concepts and results easier to understand.

strong disagree in mathematical writing

in good company with Rudin here



Peter Richtarik reposted

This is not a particularly good take and is indicative of a fundamental misunderstanding of what a top-tier technical college education is supposed to offer. Preparing to understand modern AI as a Harvard or Stanford undergrad is not about learning "prompt engineering", vibe…

Harvard and Stanford students tell me their professors don't understand AI and the courses are outdated. If elite schools can't keep up, the credential arms race is over. Self-learning is the only way now.



Peter Richtarik reposted

Excited to speak at this workshop about our foundational research on Communication-Efficient Model-Parallel Training @PluralisHQ — translating research into practice through LLM training on commodity GPUs over the internet. Honoured to be among such a strong lineup of speakers.

Call for participation: KAUST Workshop on Distributed Training in the Era of Large Models
kaust.edu.sa/events/dtelm25/
Location: KAUST, Saudi Arabia
Dates: Nov 24-26, 2025
There will be a chance for some participants to present a poster and/or give a lightning talk.



Peter Richtarik reposted

Unpopular opinion: Finding a simple idea that actually works is way harder than publishing a fancy one that kinda works. You have to fight the urge to overcomplicate, give up many fancier ideas, fail and pivot again and again until you hit the first principle that truly holds.

