Peter Richtarik
@peter_richtarik
Federated Learning Guru. Tweeting since 20.5.2020. Lived in 🇸🇰🇺🇸🇧🇪🇬🇧🇸🇦
I feel strongly that, while I understand the challenges they face in running this, this is the wrong decision. What arXiv is in principle versus what it is in practice is very different. In principle there are already moderation rules, but they're so minimally enforced (due…
The Computer Science section of @arxiv is now requiring prior peer review for Literature Surveys and Position Papers. Details in a new blog post
Bad move, indeed.
arXiv has been such a wonderful service, but I think this is a step in the wrong direction. We have other venues for peer review. To me, the value of arXiv lies precisely in its lack of excessive moderation. I'd prefer it as "GitHub for science" rather than yet another journal.
I firmly believe we are at a watershed moment in the history of mathematics. In the coming years, using LLMs for math research will become mainstream, and so will Lean formalization, made easier by LLMs. (1/4)
I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me. 1/3
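For readers who haven't seen Lean, here is a toy illustration of what formalizing a small statement looks like (a hypothetical example, not the statement from the thread above); the proof is discharged by a Mathlib tactic:

```lean
import Mathlib

-- Toy lemma of the kind one might formalize (hypothetical, for illustration):
-- a*b ≤ (a² + b²)/2, which follows from (a - b)² ≥ 0.
example (a b : ℝ) : a * b ≤ (a ^ 2 + b ^ 2) / 2 := by
  nlinarith [sq_nonneg (a - b)]
```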
100% agree on the productivity boost. One just needs patience to correct mistakes, which are more subtle than before imo. I had a nice interaction with GPT-5-pro while proving a convex analysis lemma: arxiv.org/abs/2510.26647 The model didn’t write the full proof, but the…
Totally agree with @ErnestRyu that AI helpers will become very useful for research. But in the near future the biggest help will be with *informal* math, the kind we work out with our collaborators/grad students on a whiteboard. I already use frontier models to help write/debug…
Our research group at the University of Zurich (Switzerland) is seeking a PhD candidate at the intersection of theory and practice in areas such as distributed optimization, federated learning, machine learning, privacy, or unlearning. Apply here! apply.mnf.uzh.ch/positiondetail…
Yuri Nesterov is a foundational figure in optimization, best known for Nesterov's accelerated gradient descent (1983). This "momentum" method dramatically speeds up convergence, making it a cornerstone of modern machine learning. He also co-developed the theory of interior-point…
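As a concrete illustration of the method the tweet describes, here is a minimal sketch of Nesterov's accelerated gradient descent in Python (a generic textbook variant; the function name and the quadratic test problem are made up for the example):

```python
import numpy as np

def nesterov_agd(grad, x0, lr, steps):
    """Minimize a smooth convex f given its gradient `grad`, from x0.

    Uses the standard extrapolation sequence t_k with momentum
    weight (t_k - 1) / t_{k+1}.
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        x_next = y - lr * grad(y)                       # gradient step at the lookahead point
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Example: minimize f(x) = ||A x - b||^2 / 2, whose gradient is A^T (A x - b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
x_star = nesterov_agd(lambda x: A.T @ (A @ x - b), np.zeros(2), lr=0.05, steps=200)
```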
We bridge theory & practice: prior work studies an idealized SVD update. We analyze the implemented inexact (Newton–Schulz) iteration and show how approximation quality shifts the best learning rate & test performance on nanoGPT. With @SultanAlra60920 @bremen79 @peter_richtarik arxiv.org/abs/2510.19933
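For context, here is a minimal sketch of the kind of Newton–Schulz iteration the tweet refers to: a cubic matrix iteration that approximates the orthogonal polar factor U Vᵀ of a matrix without computing its SVD (this is the classic variant; the paper's exact coefficients and step count may differ):

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximate U V^T from the SVD G = U S V^T without computing the SVD."""
    X = G / (G.norm() + 1e-7)            # scale so all singular values are <= 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X  # cubic iteration: p(s) = 1.5 s - 0.5 s^3
    return X

# With few steps the result is only approximately orthogonal; the tweet's point
# is that this approximation quality interacts with the best learning rate.
G = torch.randn(64, 64)
Q = newton_schulz_orthogonalize(G, steps=5)
print((Q @ Q.T - torch.eye(64)).norm())  # residual shrinks as `steps` grows
```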
amazing...
Meet the Krause corpuscle, the neuron responsible for sensing vibrations of sexual touch. It is most sensitive to frequencies around 40 to 80 hertz, which is precisely the range of vibrating sex toys. quantamagazine.org/touch-our-most…
Doing optimization for ML/AI? Apply!
Proud of my PhD student!!!
KAUST PhD student Kaja Gruntkowska has been awarded a @Google PhD Fellowship, becoming the first-ever recipient from the GCC countries. Recognized for her work in Algorithms and Optimization, her research advances both the theory and practice of optimization for machine…
Obvious
We’ve found a ton of value hiring folks with strong theory backgrounds and little to no production ML experience. One of our members of technical staff got his PhD in pure math (the geometry of black holes) and had no prior ML experience. Within days of hiring him we released our…
Explaining the same things twice (or even more times), but differently, makes new concepts and results easier to understand.
Strong disagree for mathematical writing. I'm in good company with Rudin here.
This is not a particularly good take, and it is indicative of a fundamental misunderstanding of what a top-tier technical college education is supposed to offer. Preparing to understand modern AI as a Harvard or Stanford undergrad is not about learning "prompt engineering", vibe…
Harvard and Stanford students tell me their professors don't understand AI and the courses are outdated. If elite schools can't keep up, the credential arms race is over. Self-learning is the only way now.
Excited to speak at this workshop about our foundational research on Communication-Efficient Model-Parallel Training @PluralisHQ — translating research into practice through LLM training on commodity GPUs over the internet. Honoured to be among such a strong lineup of speakers.
Call for participation: KAUST Workshop on Distributed Training in the Era of Large Models. kaust.edu.sa/events/dtelm25/
Location: KAUST, Saudi Arabia. Dates: Nov 24-26, 2025. There will be a chance for some participants to present a poster and/or give a lightning talk.
Unpopular opinion: Finding a simple idea that actually works is way harder than publishing a fancy one that kinda works. You have to fight the urge to overcomplicate, give up many fancier ideas, fail and pivot again and again until you hit the first principle that truly holds.