
Zhiyong Wang

@Zhiyong16403503

Postdoc at Edinburgh, Ph.D. at CUHK. Former Visiting Scholar at Cornell. Working on reinforcement learning and multi-armed bandits.

Repost by Zhiyong Wang

Excited to announce our NeurIPS ’25 tutorial: Foundations of Imitation Learning: From Language Modeling to Continuous Control. With Adam Block & Max Simchowitz (@max_simchowitz)


Repost by Zhiyong Wang

MSR NYC is hiring spring and summer interns in AI/ML/RL!


Repost by Zhiyong Wang

New in the #DeeperLearningBlog: @GaoZhaolin and collaborators including the #KempnerInstitute's Kianté Brantley present a powerful new #RL algorithm tailored for reasoning tasks with #LLMs that updates using only one generation per prompt. bit.ly/44US1Mt @xkianteb #AI


Repost by Zhiyong Wang

Delighted to announce that the 2nd edition of our workshop has been accepted to #NeurIPS2025! We have an amazing lineup of speakers: @WenSun1, @ajwagenmaker, @yayitsamyzhang, @MengdiWang10, @nanjiang_cs, Alessandro Lazaric, and a special guest!


Repost by Zhiyong Wang

How can small LLMs match or even surpass frontier models like DeepSeek R1 and o3 Mini on math-competition reasoning (AIME & HMMT)? Prior work seems to suggest that ideas like PRMs do not really work or scale well for long-context reasoning. @kaiwenw_ai will reveal how a novel…

I’m presenting two papers on value-based RL for post-training & reasoning on Friday at @ai4mathworkshop at #ICML2025! 1️⃣ Q#: lays theoretical foundations for value-based RL for post-training LMs; 2️⃣ VGS: practical value-guided search scaled up for long CoT reasoning. 🧵👇
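(Added context, not from the thread: a minimal sketch of what value-guided search over partial generations could look like. The names `expand` (proposes candidate next steps from an LM) and `value_fn` (a learned value model scoring partial generations) are illustrative assumptions, not the VGS paper's actual interface.)

```python
# Illustrative sketch only, NOT the VGS implementation from the paper:
# `expand` and `value_fn` are hypothetical placeholders for an LM sampler
# and a learned value model.
import heapq
import random

def value_guided_search(prompt, expand, value_fn, beam_width=4, depth=8):
    """Keep the beam_width partial generations the value model scores highest."""
    beam = [prompt]
    for _ in range(depth):
        # Propose continuations for every sequence currently on the beam...
        candidates = [seq + step for seq in beam for step in expand(seq)]
        # ...then rank them with the value model and keep the top few.
        beam = heapq.nlargest(beam_width, candidates, key=value_fn)
    return max(beam, key=value_fn)

if __name__ == "__main__":
    random.seed(0)
    # Toy stand-ins: expand proposes three random tokens; the "value model"
    # simply prefers sequences containing more 'a's.
    toy_expand = lambda seq: [random.choice("abc") for _ in range(3)]
    toy_value = lambda seq: seq.count("a")
    print(value_guided_search("q: ", toy_expand, toy_value))
```

The point of guiding search with a value function rather than sampling longer chains is that compute goes to continuations the value model already believes are promising; how the value model is trained is exactly what Q# and VGS address.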



Happy to share our work "Provable Zero-Shot Generalization in Offline Reinforcement Learning" at ICML 2025! 📍 Poster | 🗓️ July 16, 11:00 AM – 1:30 PM 📌 West Exhibition Hall B2-B3 #W-1012 🤖 How can offline RL agents generalize zero-shot to unseen environments? We introduce…


Repost by Zhiyong Wang

Does RL actually learn positively under random rewards when optimizing Qwen on MATH? Is Qwen really so magical that even RL on random rewards can make it reason better? Following prior work on spurious rewards in RL, we ablated algorithms. It turns out that if you…

Recent work has seemed somewhat magical: how can RL with *random* rewards make LLMs reason? We pull back the curtain on these claims and find out this unexpected behavior hinges on the inclusion of certain *heuristics* in the RL algorithm. Our blog post: tinyurl.com/heuristics-con…

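(For context, in standard notation rather than the blog's: the PPO/GRPO-style clipped surrogate at issue here is

\[
\mathcal{L}^{\mathrm{clip}}(\theta) = \mathbb{E}_t\Big[\min\big(\rho_t A_t,\ \mathrm{clip}(\rho_t,\, 1-\varepsilon,\, 1+\varepsilon)\, A_t\big)\Big],
\qquad
\rho_t = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.
\]

If the advantage \(A_t\) is zero-mean and independent of the trajectory, the vanilla policy-gradient update \(\mathbb{E}[A_t \nabla_\theta \log \pi_\theta]\) vanishes in expectation; but the min/clip makes the surrogate nonlinear in \(A_t\), so its gradient need not vanish. That asymmetry is one concrete way such heuristics can produce systematic updates even under random rewards.)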


Repost by Zhiyong Wang

Curious how to combine federated learning and in-context learning for QA tasks, with privacy preservation, efficiency, and performance that improves round by round? 🚀 Meet Fed-ICL, our framework that collaboratively refines answers without transmitting model weights or sharing raw…


Repost by Zhiyong Wang

Tired of over-optimized generations that stray too far from the base distribution? We present SLCD: Supervised Learning based Controllable Diffusion, which (provably) solves the KL constrained reward maximization problem for diffusion through supervised learning! (1/n)

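(For reference, the KL-constrained reward maximization problem mentioned above is, in its standard form (notation mine, not the paper's):

\[
\max_{p}\ \mathbb{E}_{x \sim p}[r(x)] \;-\; \beta\, \mathrm{KL}\big(p \,\|\, p_{\mathrm{base}}\big),
\]

whose well-known closed-form optimum is the tilted base distribution \(p^{\star}(x) \propto p_{\mathrm{base}}(x)\, \exp(r(x)/\beta)\). SLCD's claim, as stated in the tweet, is that this optimum can be reached through supervised learning rather than RL fine-tuning.)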

Repost by Zhiyong Wang

By incorporating self-consistency during offline RL training, we unlock three orthogonal directions of scaling:
1. efficient training (i.e., limiting backprop through time)
2. expressive model classes (e.g., flow matching)
3. inference-time scaling (sequential and parallel)
which,…


Repost by Zhiyong Wang

Excellently written paper


Repost by Zhiyong Wang

I won't be at #ICLR2025 myself this time around but please go talk to lead authors @nico_espinosa_d, @GaoZhaolin, and @runzhe_wu about their bleeding-edge algorithms for imitation learning and RLHF!


Repost by Zhiyong Wang

Heading to #ICLR2025 🇸🇬! Excited to connect with friends and chat about RL: theory, LLM reasoning and robotics! I will present our Oral paper on LLM self-improvement 📍 4:18pm Sat. Join me if you want to learn about its scaling laws, iterative training and test-time improvement.


Repost by Zhiyong Wang

What is the place of exploration in today's AI landscape and in which settings can exploration algorithms address current open challenges? Join us to discuss this at our exciting workshop at @icmlconf 2025: EXAIT! exait-workshop.github.io #ICML2025


Repost by Zhiyong Wang

I think of misspecification (embodiment / sensory gaps) as the fundamental reason behavioral cloning isn't "all you need" for imitation as matching actions != matching outcomes. Introducing @nico_espinosa_d's #ICLR2025 paper proving that "local search" *is* all you need! [1/n]

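(The "matching actions != matching outcomes" point has a standard quantitative form: if behavioral cloning achieves per-step error \(\varepsilon\) under the expert's state distribution, the classic analysis of Ross & Bagnell (2010) only bounds the performance gap as

\[
J(\pi_E) - J(\hat{\pi}) = O(\varepsilon H^2)
\]

over horizon \(H\), because early mistakes drive the learner off the expert's distribution, where it has no training data. Interactive or local-search-style corrections are what recover a linear \(O(\varepsilon H)\) dependence.)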

Repost by Zhiyong Wang

Meet the recipients of the 2024 ACM A.M. Turing Award, Andrew G. Barto and Richard S. Sutton! They are recognized for developing the conceptual and algorithmic foundations of reinforcement learning. Please join us in congratulating the two recipients! bit.ly/4hpdsbD


Repost by Zhiyong Wang

🚀 Rising Star Workshops for Junior/Senior PhDs, and Postdocs! 🌟 Don't miss these career-boosting opportunities! notion.so/List-of-Rising… Please share with your peers, students, and anyone who might benefit! #PhD #Postdoc #Academia #RisingStars


Repost by Zhiyong Wang

There are multiple postdoc positions available as part of an exciting new AI-agent initiative at Columbia that tackles challenges at the frontier of agentic systems and sequential decision-making. I am not very active here so please help me spread the word!


Repost by Zhiyong Wang

Extremely honored to receive this award. Credit goes to my collaborators, mentors, and especially my amazing students! #SloanFellow

🎉Congrats to the 126 early-career scientists who have been awarded a Sloan Research Fellowship this year! These exceptional scholars are drawn from 51 institutions across the US and Canada, and represent the next generation of groundbreaking researchers. sloan.org/fellowships/20…

SloanFoundation's tweet image. 🎉Congrats to the 126 early-career scientists who have been awarded a Sloan Research Fellowship this year! These exceptional scholars are drawn from 51 institutions across the US and Canada, and represent the next generation of groundbreaking researchers. sloan.org/fellowships/20…


Repost by Zhiyong Wang

List of accepted papers for AISTATS 2025 is now available. aistats.org/aistats2025/ Congratulations to the authors and thanks to the reviewers, ACs, and SACs for their help. Thanks to my co-chair @ashipra & workflow chairs: Christopher Anders (RIKEN) & Tingting Ou (Columbia).

