
Zhongyu Li

@ZhongyuLi4

PhD student doing robotics@UC Berkeley. Randomly post robot & cat things here.

Pinned

Interested in making your bipedal robots athletes? We summarized our RL work on creating robust & adaptive controllers for general bipedal skills. 400m dash, running over terrains/against perturbations, targeted jumping, compliant walking: not a problem for bipeds now.🧵👇


Zhongyu Li reposted

It was a joy bringing Jason’s signature spin-kick to life on the @UnitreeRobotics G1. We trained it in mjlab with the BeyondMimic recipe but had issues on hardware last night (the IMU gyro was saturating). One more sim-tuning pass and we nailed it today. With @qiayuanliao and…


Zhongyu Li reposted

Training RL agents often requires tedious reward engineering. ADD can help! ADD uses a differential discriminator to automatically turn raw errors into effective training rewards for a wide variety of tasks! 🚀 Excited to share our latest work: Physics-Based Motion Imitation…
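
The tweet compresses the method quite a bit. As a rough illustrative sketch of the general discriminator-shaped-reward idea (the architecture, names, loss, and reward mapping below are assumptions for illustration, not ADD's actual formulation): train a small network to tell "perfect tracking" apart from the policy's current errors, then use its confidence as the reward.

```python
import torch
import torch.nn as nn

class ErrorDiscriminator(nn.Module):
    """Scores a tracking-error vector (toy version; the real ADD
    architecture and objective may differ)."""
    def __init__(self, err_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(err_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, err: torch.Tensor) -> torch.Tensor:
        return self.net(err)  # raw logit

def discriminator_loss(disc: ErrorDiscriminator,
                       policy_err: torch.Tensor) -> torch.Tensor:
    # Train D to separate the zero-error vector ("perfect tracking")
    # from the errors the current policy actually produces.
    bce = nn.functional.binary_cross_entropy_with_logits
    zero_err = torch.zeros_like(policy_err)
    real = bce(disc(zero_err), torch.ones(len(policy_err), 1))
    fake = bce(disc(policy_err), torch.zeros(len(policy_err), 1))
    return real + fake

@torch.no_grad()
def shaped_reward(disc: ErrorDiscriminator,
                  policy_err: torch.Tensor) -> torch.Tensor:
    # Errors that D cannot distinguish from perfect tracking get
    # rewards near 1; easily detected errors get rewards near 0.
    return torch.sigmoid(disc(policy_err)).squeeze(-1)
```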


Zhongyu Li reposted

Implementing motion imitation methods involves lots of nuances. Not many codebases get all the details right. So, we're excited to release MimicKit! github.com/xbpeng/MimicKit A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!


Zhongyu Li reposted

Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL:
- 5 rewards,
- 4 DR…


Zhongyu Li reposted

This is how the generated terrains were laid out for training the motion tracker in PARC with Isaac Gym 😱. It was good enough for the scope of the paper but it could definitely be much more compact with a bit of engineering effort!


Zhongyu Li reposted

Mood


Zhongyu Li reposted

@kevin_zakka dropping some high quality software as usual! I've been trying to pick a framework recently for some upcoming projects and this just made my decision a lot harder - so much new activity in this space! Here is a (simplified) overview of the options:


Zhongyu Li reposted

I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.


Love this feature!!!

Of course mjlab supports the native MuJoCo viewer. Makes it a breeze to pause, slow down, inspect contacts, perturb the robot, etc. There's also a brand new pane for reward visualization :)
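
For context, the native viewer mentioned here is the standard MuJoCo Python viewer, not anything mjlab-specific. A minimal stand-alone loop (plain MuJoCo, with a placeholder model path) looks roughly like this:

```python
import time
import mujoco
import mujoco.viewer

# Load any MJCF model; "humanoid.xml" is just a placeholder path.
model = mujoco.MjModel.from_xml_path("humanoid.xml")
data = mujoco.MjData(model)

# launch_passive leaves physics stepping to us, while the viewer UI
# still offers pause, slow motion, contact visualization, and
# interactive perturbation of the robot.
with mujoco.viewer.launch_passive(model, data) as viewer:
    while viewer.is_running():
        step_start = time.time()
        mujoco.mj_step(model, data)
        viewer.sync()
        # Crude real-time pacing to roughly match the physics timestep.
        time.sleep(max(0.0, model.opt.timestep - (time.time() - step_start)))
```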



Zhongyu Li reposted

We just did the world's first on-stage autonomous demo of long-horizon dexterous VLA 🚨 No training. No setup. Performance out of the box. Live demos are hard and unpredictable, but we felt great about our model's generalization, and it went pretty well! 💯 Zero-shot. 100% success.


Zhongyu Li reposted

Meet mjlab. Powered by MuJoCo Warp. Drops Monday.


Zhongyu Li reposted

Angry 😡


Zhongyu Li reposted

COW


Zhongyu Li reposted

Yesterday marked @UWaterloo's first robot learning reading group for fall 2025, and it was a great success! This week focused on robot foundation models, covering Pi0 by @physical_int and LBM by @ToyotaResearch. Shoutout to @djkesu1 for helping cohost, and @palatialXR for…


Zhongyu Li reposted

Join us TODAY for the return of the GRASP SFI Seminar series for the Fall 2025 semester! Please welcome Tairan He, who will be presenting "Scalable Sim-to-Real Learning for General-Purpose Humanoid Skills" from 3PM-4PM. More info: grasp.upenn.edu/events/fall-20… #GRASP #GRASPLab #GRASPSFI


Amazing to see how fast the open-source humanoid (and Berkeley Humanoid Lite) community is expanding🤩!!!

✨ Berkeley Humanoid Lite v1.1.0 ✨ Six months since our first release, we’re excited to share an update — but more importantly, to celebrate the community that has grown around this project. From thoughtful Q&As to sharing build progress, contributions from the community…



🤯🤯🤯🤯🤯

🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.



Zhongyu Li reposted

This is just the beginning! 🌟 At @DexmateAI , we're not just building robots - we're creating intelligent partners that work alongside humans to solve real-world challenges. Proud to be part of the @NVIDIARobotics ecosystem driving this transformation.

Have you seen moves like that? 👀 @DexmateAI is developing a general-purpose humanoid robot with incredible agility. Leveraging #NVIDIARobotics tech, it’s helping in manufacturing, retail, and logistics. 🤖 Learn more 👉 nvda.ws/3HCDcFx



🤯

Want to achieve extreme performance in motion tracking—and go beyond it? Our preprint tech report is now online, with open-source code available!



Zhongyu Li reposted

They say the best time to tweet about your research was 1 year ago; the second best time is now. With RAI, formerly known as the Boston Dynamics AI Institute, we present DiffuseCloC, the first guidable physics-based diffusion model. diffusecloc.github.io/website/

