
Sirui Chen

@eric_srchen

PhD in Stanford CS, Prev Undergrad at HKU. Interested in robotics

Pinned

How can we collect high-quality robot data without teleoperation? AR can help! Introducing ARCap, a fully open-sourced AR solution for collecting cross-embodiment robot data (gripper and dex hand) directly using human hands. 🌐:stanford-tml.github.io/ARCap/ 📜:arxiv.org/abs/2410.08464
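(The thread doesn't spell out the mapping, but the core idea of hand-to-gripper data collection can be sketched in a few lines. Everything below is a hypothetical illustration, not ARCap's actual code: an AR tracker streams wrist poses and fingertip positions, and the thumb-index pinch distance becomes the parallel-gripper width command.)

```python
import numpy as np

def pinch_to_gripper_width(thumb_tip: np.ndarray, index_tip: np.ndarray,
                           max_width: float = 0.08) -> float:
    """Map the thumb-index pinch distance (meters) to a parallel-gripper
    width command, clipped to the gripper's physical range (assumed 8 cm)."""
    return float(np.clip(np.linalg.norm(thumb_tip - index_tip), 0.0, max_width))

def record_demo(hand_pose_stream):
    """Consume (wrist_pose, fingertip_dict) samples from an AR hand tracker
    and emit (end_effector_pose, gripper_width) training pairs: the wrist
    pose doubles as the end-effector target, the pinch drives the gripper."""
    demo = []
    for wrist_pose, tips in hand_pose_stream:
        demo.append((wrist_pose,
                     pinch_to_gripper_width(tips["thumb"], tips["index"])))
    return demo
```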


Sirui Chen reposted

Spot is playing Ping Pong! Spin is a crucial part of the game, but few robots can handle it. We show Spot receiving and generating significant spin using MPC. Collaboration with David Nguyen and Zulfiqar Zaidi! Video: youtu.be/3GrnkxOeC14?si…. Paper: arxiv.org/pdf/2510.08754.

[Link card: YouTube, "Whole Body Model Predictive Control for Spin-Aware Quadrupedal Table..."]
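The controller in the paper is a whole-body MPC; what makes it spin-aware is the ball-flight model it plans against. As a hedged illustration (coefficients are rough guesses and the function names are hypothetical, not the paper's code), here is the kind of prediction model such an MPC needs: gravity, quadratic drag, and the Magnus force that spin exerts on the ball.

```python
import numpy as np

# Rough table-tennis ball constants (assumptions, not the paper's values).
M, R, RHO = 2.7e-3, 0.02, 1.2      # mass [kg], radius [m], air density [kg/m^3]
A = np.pi * R**2                   # cross-sectional area [m^2]
CD, CM = 0.5, 1.0                  # drag / Magnus coefficients (guesses)
G = np.array([0.0, 0.0, -9.81])    # gravity [m/s^2]

def ball_step(pos, vel, spin, dt=1e-3):
    """One Euler step of ball flight with gravity, quadratic drag, and a
    simplified Magnus force F = Cm * rho * A * R * (omega x v). Spin is the
    angular velocity vector [rad/s]; topspin/backspin bends the trajectory,
    which is exactly what a spin-aware predictor must capture."""
    speed = np.linalg.norm(vel)
    drag = -0.5 * RHO * CD * A * speed * vel
    magnus = CM * RHO * A * R * np.cross(spin, vel)
    acc = G + (drag + magnus) / M
    return pos + dt * vel, vel + dt * acc

def predict_trajectory(pos, vel, spin, steps=500):
    """Roll the model forward; an MPC would plan strikes against this."""
    traj = [pos]
    for _ in range(steps):
        pos, vel = ball_step(pos, vel, spin)
        traj.append(pos)
    return np.array(traj)
```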


Wild! Great work on retargeting!

Humanoid motion tracking performance is largely determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁🎯, generating high-quality, interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL: - 5 rewards, - 4 DR…
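Setting OmniRetarget's specifics aside (the tweet is truncated, and its reward and DR lists are not reproduced here), retargeting in general can be posed as keypoint-matching optimization: solve for robot joint angles whose forward kinematics best track the (scaled) human keypoints. A toy planar 2-link sketch with assumed link lengths:

```python
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.3, 0.3  # toy 2-link arm segment lengths [m] (assumed)

def fk(q):
    """Planar 2-link forward kinematics: joint angles -> elbow & wrist xy."""
    elbow = L1 * np.array([np.cos(q[0]), np.sin(q[0])])
    wrist = elbow + L2 * np.array([np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])
    return np.stack([elbow, wrist])

def retarget_frame(human_keypoints, q_init):
    """Find joint angles whose keypoints best match the scaled human ones.
    Real systems add joint limits, smoothness, and contact-preservation
    terms; this only shows the bare keypoint-matching objective."""
    cost = lambda q: np.sum((fk(q) - human_keypoints) ** 2)
    return minimize(cost, q_init, method="L-BFGS-B",
                    bounds=[(-np.pi, np.pi)] * 2).x
```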



Sirui Chen reposted

If you missed @yukez's talk at #CoRL2025, here is the link: youtube.com/watch?v=rh2oxU… 👇 The demo we at GEAR have been cranking on: fully autonomous, human-like locomanipulation via language + vision input. Uncut. The sleepless nights spent getting the humanoid to move naturally pay off🥹


Sirui Chen reposted

Excited to share our latest work MaskedManipulator (proc. @SIGGRAPHAsia 2025)! With: Yifeng Jiang, @erwincoumans, @zhengyiluo, @GalChechik, and @xbpeng4


Sirui Chen reposted

🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.


Sirui Chen reposted

ToddlerBot is accepted to CoRL, and we will bring Toddy (2.0 version) to Seoul for a trip. Come and say hi to Toddy if you're around😁! Our arxiv paper is also updated with more technical details in the appendix: arxiv.org/abs/2502.00893


Sirui Chen reposted

Getting the robot close enough to a target is an important yet often overlooked prerequisite for any meaningful robot interaction. It requires robust locomotion, navigation, and reaching all at once. HEAD is an automatic vision-based system that handles all three.

Introducing HEAD🤖, an autonomous navigation and reaching system for humanoid robots, which allows the robot to navigate around obstacles and touch an object in the environment. More details on our website and CoRL paper: stanford-tml.github.io/HEAD
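HEAD's actual architecture is in the CoRL paper; purely as a sketch of the modular structure the announcement describes (navigate around obstacles, then reach), a minimal two-stage controller with placeholder policies and thresholds might look like:

```python
import numpy as np

class NavigateThenReach:
    """Toy two-stage controller: walk toward a pre-reach pose, then hand
    control to a reaching policy. The policies and the switch radius are
    placeholders, not HEAD's actual components."""

    def __init__(self, nav_policy, reach_policy, reach_radius=0.6):
        self.nav, self.reach = nav_policy, reach_policy
        self.reach_radius = reach_radius   # switch distance [m] (assumed)
        self.stage = "navigate"

    def act(self, obs, base_xy, target_xy):
        # Switch to reaching once the base is within arm's range of the target.
        if (self.stage == "navigate"
                and np.linalg.norm(base_xy - target_xy) < self.reach_radius):
            self.stage = "reach"
        policy = self.nav if self.stage == "navigate" else self.reach
        return policy(obs)
```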



Sirui Chen reposted

Every second, eight new posts hit my screens. So many that we can't think about what it all means. What does it mean? Robot learning is speeding up. Stay alive; we are about to see wondrous things.




Sirui Chen reposted

excellent work on whole-body reaching, with an intuitive modular approach! also, great to see my former labmate @yufei_ye collecting data in the wild with Aria glasses 🙂




Sirui Chen reposted

How do we learn motor skills directly in the real world? Think about learning to ride a bike—parents might be there to give you hands-on guidance.🚲 Can we apply this same idea to robots? Introducing Robot-Trains-Robot (RTR): a new framework for real-world humanoid learning.


Sirui Chen reposted

Want to achieve extreme performance in motion tracking—and go beyond it? Our preprint tech report is now online, with open-source code available!


Sirui Chen reposted

🚀 ASAP is now FULLY open-source! 🚀
✅ Humanoid RL motion tracking & delta actions
✅ Motion retargeting to any humanoid
✅ ASAP Benchmark motions + pretrained policies
✅ Sim2Sim & Sim2Real ready — run ASAP in sim or on your G1 robot!
🔗 github.com/LeCAR-Lab/ASAP
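"Delta actions" in this context usually denotes a learned residual added to a pretrained policy's output to close the sim-to-real gap; the sketch below shows that general pattern, not ASAP's actual implementation (class name and layer sizes are illustrative).

```python
import torch
import torch.nn as nn

class DeltaActionPolicy(nn.Module):
    """Wrap a frozen base policy with a small learned correction:
    a = base(obs) + delta(obs). Only the delta network is trained on
    target-domain (e.g., real-world) data."""

    def __init__(self, base: nn.Module, obs_dim: int, act_dim: int):
        super().__init__()
        self.base = base.requires_grad_(False)  # keep pretrained policy fixed
        self.delta = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ELU(), nn.Linear(128, act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            a_base = self.base(obs)
        return a_base + self.delta(obs)
```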


Sirui Chen reposted

Excited to open-source GMR: General Motion Retargeting. Real-time human-to-humanoid retargeting on your laptop. Supports diverse motion formats & robots. Unlock whole-body humanoid teleoperation (e.g., TWIST). video with 🔊


Sirui Chen reposted

They say the best time to tweet about your research was a year ago; the second-best time is now. With RAI, formerly known as the Boston Dynamics AI Institute, we present DiffuseCloC, the first guidable physics-based diffusion model. diffusecloc.github.io/website/
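"Guidable" presumably means the sampler can be steered toward user objectives at inference time. Below is a generic (not DiffuseCloC-specific) sketch of gradient guidance in a DDPM-style reverse step, where the gradient of a differentiable cost shifts the predicted noise; `eps_model` and `guidance_cost` are hypothetical callables.

```python
import torch

def guided_reverse_step(x_t, t, eps_model, guidance_cost,
                        alpha, alpha_bar, w=1.0):
    """One DDPM reverse step nudged by the gradient of a differentiable
    cost (e.g., distance of a sampled motion to a target). alpha and
    alpha_bar are this step's noise-schedule scalars; w scales guidance."""
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)                                # predicted noise
    grad = torch.autograd.grad(guidance_cost(x_t), x_t)[0]
    eps = eps + w * (1 - alpha_bar) ** 0.5 * grad          # shift the score
    mean = (x_t - (1 - alpha) / (1 - alpha_bar) ** 0.5 * eps) / alpha ** 0.5
    return (mean + (1 - alpha) ** 0.5 * torch.randn_like(x_t)).detach()
```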

