Sirui Chen
@eric_srchen
CS PhD student at Stanford, prev. undergrad at HKU. Interested in robotics
How can we collect high-quality robot data without teleoperation? AR can help! Introducing ARCap, a fully open-sourced AR solution for collecting cross-embodiment robot data (gripper and dex hand) directly using human hands. 🌐:stanford-tml.github.io/ARCap/ 📜:arxiv.org/abs/2410.08464
Collecting data is frustrating, so let the robot collect its own data!
Zero teleoperation. Zero real-world data. ➔ Autonomous humanoid loco-manipulation in reality. Introducing VIRAL: Visual Sim-to-Real at Scale. We achieved 54 autonomous cycles (walk, stand, place, pick, turn) using a simple recipe: 1. RL 2. Simulation 3. GPUs Website:…
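The three-ingredient recipe above (RL + simulation + GPUs) can be caricatured in a few lines. Everything below is a hypothetical toy, not the VIRAL code: a fake batched simulator stands in for a GPU physics engine, and a crude evolution-style update stands in for the actual RL algorithm.

```python
# Toy caricature of the "RL + simulation + GPUs" recipe. All names here are
# illustrative stand-ins, not the VIRAL implementation.

class VecSim:
    """Fake batched simulator: a real system would run thousands of
    physics environments in parallel on a GPU."""
    def __init__(self, num_envs):
        self.states = [0.0] * num_envs

    def step(self, actions):
        # Reward each env for moving its state toward a target of 1.0.
        self.states = [s + a for s, a in zip(self.states, actions)]
        return [-(s - 1.0) ** 2 for s in self.states]

def train(num_envs=8, iters=250, step=0.005):
    """Crude evolution-style update standing in for the real RL algorithm:
    nudge the one-parameter policy toward whichever perturbation earns
    more total reward across the batch of environments."""
    theta = 0.0  # the "policy": every env takes action = theta
    for _ in range(iters):
        up = sum(VecSim(num_envs).step([theta + 0.05] * num_envs))
        down = sum(VecSim(num_envs).step([theta - 0.05] * num_envs))
        theta += step if up > down else -step
    return theta  # drifts toward the optimal action, 1.0
```

Here `train()` walks the toy policy toward the reward-maximizing action; the real pipeline swaps in a GPU simulator, a neural network policy, and a full-scale RL algorithm, then transfers the result to hardware.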
Introducing Gallant: Voxel Grid-based Humanoid Locomotion and Local-navigation across 3D Constrained Terrains 🤖 Project page: gallantloco.github.io Arxiv: arxiv.org/abs/2511.14625 Gallant is, to our knowledge, the first system to run a single policy that handles full-space…
🕸️ Introducing SPIDER — Scalable Physics-Informed Dexterous Retargeting! A dynamically feasible, cross-embodiment retargeting framework for BOTH humanoids 🤖 and dexterous hands ✋. From human motion → sim → real robots, at scale. 🔗 Website: jc-bao.github.io/spider-project/ 🧵 1/n
Check out our latest work: supersizing motion tracking for natural humanoid control. A single unified control policy with many functionalities: kinematic planner, full-body teleop (VR, video), keypoint teleop (VR), Text2Robot, Music2Robot, and VLA. Project: nvlabs.github.io/SONIC/
Excited to present GentleHumanoid: a whole-body control policy with upper-body compliance and tunable force limits for safe, natural human & object interaction. ⚡ONE policy for diverse tasks and compliance levels. 👉Website: gentle-humanoid.axell.top
How do you give a humanoid the general motion capability? Not just single motions, but all motion? Introducing SONIC, our new work on supersizing motion tracking for natural humanoid control. We argue that motion tracking is the scalable foundation task for humanoids. So we…
Check out our SONIC, one policy to rule them all!
Humanoids need a single, generalist control policy for all of their physical tasks, not a new one for every new chore or demo. A policy for walking can't dance. A policy for dancing can't mow the lawn. We need to scale up humanoid control for diverse behaviors, just…
Unlock precise whole-body teleop with a portable, affordable device!
Excited to introduce TWIST2, our next-generation humanoid data collection system. TWIST2 is portable (use anywhere, no MoCap), scalable (100+ demos in 15 mins), and holistic (unlock major whole-body human skills). Fully open-sourced: yanjieze.com/TWIST2
It was a great pleasure to host @yukez for a @CMU_Robotics seminar talk! Link (including a very insightful 25-min Q&A session): youtu.be/49LnlfM9DBU?si… Definitely check it out if you are interested in building generalist humanoids, robot learning, and the data pyramid!
YouTube: RI Seminar: Yuke Zhu: Toward Generalist Humanoid Robots
What if robots could improve themselves by learning from their own failures in the real-world? Introducing 𝗣𝗟𝗗 (𝗣𝗿𝗼𝗯𝗲, 𝗟𝗲𝗮𝗿𝗻, 𝗗𝗶𝘀𝘁𝗶𝗹𝗹) — a recipe that enables Vision-Language-Action (VLA) models to self-improve for high-precision manipulation tasks. PLD…
Spot is playing Ping Pong! Spin is a crucial part of the game, but few robots can handle it. We show receiving and generating significant spin using MPC. Collaboration with David Nguyen and Zulfiqar Zaidi! Video: youtu.be/3GrnkxOeC14?si…. Paper: arxiv.org/pdf/2510.08754.
YouTube: Whole Body Model Predictive Control for Spin-Aware Quadrupedal Table...
Wild! Great work on retargeting!
Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL: - 5 rewards, - 4 DR…
If you missed @yukez's talk at #CoRL2025, here is the link: youtube.com/watch?v=rh2oxU… 👇 A demo we at GEAR have been cranking on: fully autonomous, human-like loco-manipulation via language + vision input. Uncut. The sleepless nights getting the humanoid to move naturally paid off🥹
Excited to share our latest work MaskedManipulator (proc. @SIGGRAPHAsia 2025)! With: Yifeng Jiang, @erwincoumans, @zhengyiluo, @GalChechik, and @xbpeng4
🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.
ToddlerBot has been accepted to CoRL, and we will bring Toddy (version 2.0) to Seoul for a trip. Come say hi to Toddy if you're around😁! Our arXiv paper is also updated with more technical details in the appendix: arxiv.org/abs/2502.00893
Delivering the robot close enough to a target is an important yet often overlooked prerequisite for any meaningful robot interaction. It requires robust locomotion, navigation, and reaching all at once. HEAD is an automatic vision-based system that handles all of them.
Introducing HEAD🤖, an autonomous navigation and reaching system for humanoid robots, which allows the robot to navigate around obstacles and touch an object in the environment. More details on our website and CoRL paper: stanford-tml.github.io/HEAD
Every second, eight new posts hit my screen. So many that we can't think about what it all means. What does it mean? Robot learning is speeding up. Stay alive; we are about to see wondrous things.
excellent work on whole-body reaching, with an intuitive modular approach! also, great to see my former labmate @yufei_ye collecting data in-the-wild with Aria glasses 🙂
How do we learn motor skills directly in the real world? Think about learning to ride a bike—parents might be there to give you hands-on guidance.🚲 Can we apply this same idea to robots? Introducing Robot-Trains-Robot (RTR): a new framework for real-world humanoid learning.