
Zhongyu Li

@ZhongyuLi4

Assist. Prof@CUHK, PhD@UC Berkeley. Doing dynamic robotics + AI. Randomly post robot & cat things here.

Pinned

Excited to share that I’ve recently joined the Chinese University of Hong Kong (CUHK) as an Assistant Professor in Mechanical and Automation Engineering! My research will continue to focus on embodied AI & humanoid robotics — legged locomotion, whole-body and dexterous…


Zhongyu Li reposted

Cotton ball


Zhongyu Li reposted

I've been working on deformable object manipulation since my PhD. It was a total nightmare years ago, and my PhD advisor told me not to work on it for my own good. Today, at ByteDance Seed, we are dropping GR-RL, a new VLA+RL system that manages long-horizon precise…


Zhongyu Li reposted

MimicKit now supports #IsaacLab! After many years with IsaacGym, it's time to upgrade. MimicKit has a simple Engine API that allows you to easily swap between different simulator backends. Which simulator would you like to see next?
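For illustration, here is a minimal sketch of what a backend-swappable engine abstraction can look like; the class and method names below are placeholders for exposition, not MimicKit's actual Engine API:

```python
from abc import ABC, abstractmethod

class Engine(ABC):
    """Minimal simulator-backend interface (illustrative names only)."""

    @abstractmethod
    def reset(self) -> None: ...

    @abstractmethod
    def step(self, actions) -> None: ...

class IsaacGymEngine(Engine):
    def reset(self) -> None:
        print("IsaacGym: reset")

    def step(self, actions) -> None:
        print("IsaacGym: step")

class IsaacLabEngine(Engine):
    def reset(self) -> None:
        print("IsaacLab: reset")

    def step(self, actions) -> None:
        print("IsaacLab: step")

def make_engine(backend: str) -> Engine:
    # With the training loop written against Engine, swapping
    # simulators becomes a one-line config change.
    return {"isaacgym": IsaacGymEngine, "isaaclab": IsaacLabEngine}[backend]()
```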


Zhongyu Li reposted

November 19


Zhongyu Li reposted

mjlab now supports explicit actuators with custom torque computation in Python/PyTorch. This includes DC motor models with realistic torque-speed curves and learned actuator networks: github.com/mujocolab/mjla…
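As a rough sketch of the kind of model this enables, here is a DC-motor torque-speed envelope in PyTorch; the function name, default parameters, and clamping scheme are illustrative assumptions, not mjlab's actual actuator interface:

```python
import torch

def dc_motor_torque(tau_cmd: torch.Tensor,
                    omega: torch.Tensor,
                    tau_stall: float = 30.0,     # stall torque [N*m] (assumed)
                    omega_noload: float = 25.0,  # no-load speed [rad/s] (assumed)
                    ) -> torch.Tensor:
    # Available torque falls off linearly with joint speed, the classic
    # DC-motor torque-speed curve; commanded torque is clamped to that envelope.
    tau_limit = tau_stall * (1.0 - omega.abs() / omega_noload).clamp(min=0.0)
    return torch.clamp(tau_cmd, -tau_limit, tau_limit)
```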


Zhongyu Li reposted

Pixels in, contacts out... Perception, interaction, autonomy - next agenda for humanoids. We learn a multi-task humanoid world model from offline datasets and use MPC to plan contact-aware behaviors from ego-vision in the real world. Project and Code: ego-vcp.github.io
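To make the planning step concrete, here is a generic sampling-based MPC loop (random shooting) over a learned dynamics model; the model and cost interfaces are assumptions for exposition and may differ from the actual system:

```python
import torch

@torch.no_grad()
def plan(world_model, cost_fn, state, horizon=16, n_samples=512, act_dim=12):
    # Sample candidate action sequences, roll each out through the learned
    # model, and execute the first action of the cheapest rollout.
    actions = torch.randn(n_samples, horizon, act_dim)
    s = state.expand(n_samples, -1)
    total_cost = torch.zeros(n_samples)
    for t in range(horizon):
        s = world_model(s, actions[:, t])        # predicted next (latent) state
        total_cost += cost_fn(s, actions[:, t])  # e.g., a contact-aware cost
    return actions[total_cost.argmin(), 0]
```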


Zhongyu Li reposted

MimicKit now has support for motion retargeting with GMR. We also released a bunch of parkour motions recorded from a professional athlete, used in ADD and PARC. Anyone brave enough to deploy a double kong on a G1? 😉


Zhongyu Li reposted

Ever wanted to simulate an entire house in MuJoCo or a very cluttered kitchen? Well now you can with the newly introduced sleeping islands: groups of stationary bodies that drop out of the physics pipeline until disturbed. Check out Yuval's amazing video and documentation 👇
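The idea in a toy Python sketch (bookkeeping only; the thresholds and data structures are assumptions, not MuJoCo's implementation): an island that stays quiet long enough is flagged asleep and skipped until something disturbs it.

```python
from dataclasses import dataclass

SLEEP_VEL = 1e-3   # speed below which a body counts as stationary (assumed)
SLEEP_STEPS = 60   # consecutive quiet steps before an island sleeps (assumed)

@dataclass
class Body:
    vel: float = 0.0

@dataclass
class Island:
    bodies: list
    asleep: bool = False
    quiet_steps: int = 0

def step_islands(islands):
    for isl in islands:
        if isl.asleep:
            continue  # sleeping islands drop out of the physics pipeline
        if all(abs(b.vel) < SLEEP_VEL for b in isl.bodies):
            isl.quiet_steps += 1
            isl.asleep = isl.quiet_steps >= SLEEP_STEPS
        else:
            isl.quiet_steps = 0
        # ...integrate the still-awake island's bodies here...

def wake(isl: Island):
    # A new contact or applied force re-activates the whole island.
    isl.asleep = False
    isl.quiet_steps = 0
```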


Unlimited scenarios for dexterous manipulation, for unlimited data 🤩

😮‍💨🤖💥 Tired of building dexterous tasks by hand, collecting data forever, and still fighting to build the simulator environment? Meet GenDexHand — a generative pipeline that creates dex-hand tasks, refines scenes, and learns to solve them automatically. No hand-crafted…



Zhongyu Li reposted

Excited to share our new work on making VLAs omnimodal — condition on multiple different modalities (one at a time or all at once)! It allows us to train on more data than any single-modality model, and outperforms any such model: more modalities = more data = better models! 🚀…

We trained OmniVLA, a robotic foundation model for navigation conditioned on language, goal poses, and images. Initialized with OpenVLA, it leverages Internet-scale knowledge for strong OOD performance. Great collaboration with @CatGlossop, @shahdhruv_, and @svlevine.
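One plausible way to realize "one at a time or all at once" conditioning is random modality dropout during training; this sketch is an assumption for illustration, not necessarily OmniVLA's actual mechanism:

```python
import torch
import torch.nn as nn

class MultiModalCondition(nn.Module):
    """Embed each modality and fuse a random subset at train time."""

    def __init__(self, dims: dict[str, int], d_model: int = 512):
        super().__init__()
        self.proj = nn.ModuleDict({k: nn.Linear(d, d_model) for k, d in dims.items()})

    def forward(self, inputs: dict, p_drop: float = 0.5) -> torch.Tensor:
        embs = []
        for name, x in inputs.items():
            # Randomly drop modalities so the policy learns to follow any subset.
            if self.training and torch.rand(()) < p_drop:
                continue
            embs.append(self.proj[name](x))
        if not embs:  # always keep at least one modality
            name, x = next(iter(inputs.items()))
            embs.append(self.proj[name](x))
        return torch.stack(embs).mean(dim=0)
```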



Zhongyu Li reposted

We open-sourced the full pipeline! Data conversion from MimicKit, training recipe, pretrained checkpoint, and deployment instructions. Train your own spin kick with mjlab: github.com/mujocolab/g1_s…


Zhongyu Li reposted

Amazing results! Such motion tracking policies can be trivially trained using our open-source code: github.com/HybridRobotics…

Unitree G1 Kungfu Kid V6.0. A year and a half as a trainee — I'll keep working hard! Hope to earn more of your love 🥰



Zhongyu Li reposted

It was a joy bringing Jason’s signature spin-kick to life on the @UnitreeRobotics G1. We trained it in mjlab with the BeyondMimic recipe but had issues on hardware last night (the IMU gyro was saturating). One more sim-tuning pass and we nailed it today. With @qiayuanliao and…

Implementing motion imitation methods involves lots of nuances. Not many codebases get all the details right. So, we're excited to release MimicKit! github.com/xbpeng/MimicKit A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!



Zhongyu Li reposted

Training RL agents often requires tedious reward engineering. ADD can help! ADD uses a differential discriminator to automatically turn raw errors into effective training rewards for a wide variety of tasks! 🚀 Excited to share our latest work: Physics-Based Motion Imitation…
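A minimal sketch of the idea, assuming an AMP-style reward mapping: the discriminator sees raw tracking errors (with zero error as the "real" class), and its output is turned into a dense reward. The network size and exact reward transform here are assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn as nn

class DiffDiscriminator(nn.Module):
    def __init__(self, err_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(err_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, err: torch.Tensor) -> torch.Tensor:
        return self.net(err)  # logit: does this error look like "perfect" (zero)?

def error_to_reward(disc: DiffDiscriminator, err: torch.Tensor) -> torch.Tensor:
    # Higher reward the more the error resembles the zero-error class, pushing
    # the agent toward accurate tracking without hand-tuned reward weights.
    return -torch.log(1.0 - torch.sigmoid(disc(err)) + 1e-6)
```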


Zhongyu Li reposted

Implementing motion imitation methods involves lots of nuances. Not many codebases get all the details right. So, we're excited to release MimicKit! github.com/xbpeng/MimicKit A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!


Zhongyu Li reposted

Humanoid motion tracking performance is largely determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL: - 5 rewards, - 4 DR…


Zhongyu Li reposted

This is how the generated terrains were laid out for training the motion tracker in PARC with Isaac Gym 😱. It was good enough for the scope of the paper but it could definitely be much more compact with a bit of engineering effort!


Zhongyu Li reposted

Mood


Zhongyu Li reposted

@kevin_zakka dropping some high-quality software as usual! I've been trying to pick a framework recently for some upcoming projects, and this just made my decision a lot harder - so much new activity in this space! Here is a (simplified) overview of the options:


I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.


