
Mingtong Zhang

@alexzhang_robo

Pinned

🚀 Introducing KUDA: Utilizing keypoints as an intermediate representation to enable open-vocabulary robotic manipulation! 🤖✨ Our latest research, accepted to #ICRA2025, unifies dynamics learning and visual prompting through keypoints, enabling robots to handle complex tasks with…


Imitation learning is not merely about collecting large-scale demonstration data; it requires effective data collection and curation. FSC is a great example of this! Join Lihan’s session and chat with him to learn how to make your policy more generalizable from a data-centric perspective!

Join us at two workshops #RSS2025 on 6/21!
📍 Resource Constrained Robotics (RTH109)
🗣️ Oral talk: 11:00–11:15

📍 Continual Robot Learning from Humans (OHE132)
🖼️ Spotlight poster: 10:30–11:00

Come by and chat—we’re excited to share our work!



The robot neck is COOL! Active perception could be the next big step—by learning where to see, the robot can then learn how to act, unlocking even more impressive capabilities! Congrats!

Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations. ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…



Will be presenting KUDA at #ICRA2025 today! Looking forward to chatting with old and new friends! 📍 Room 404 (Regular Session WeET16) 📷 May 21 (Wednesday) 5:00 pm–5:05 pm




Thank you @janusch_patas for highlighting our work! We are advancing visual representations such as Gaussian Splatting to empower robotics! By building structured world models for deformable objects, our approach creates a neural-based real-to-sim digital twin from…


Learning from videos of humans performing tasks provides valuable semantic and motion data for scaling robot generalists. Translating human actions into robotic capabilities remains an exciting challenge—Humanoid-X and UH-1 demonstrate impressive advancements!

Introducing Humanoid-X and UH-1! Hopefully we can scale up humanoid learning with Internet data as soon as possible!



What a day! The community has successfully reproduced this highly accessible tactile sensor developed by @binghao_huang. Step into a new era of multi-modal sensing!

Reproduction has long been a key challenge in hardware-related robotics research. In just **a month** since its release, our scalable tactile sensor has been reproduced and adopted worldwide—from academia to industry—thanks to @binghao_huang and the team's commitment to making…



Huge congratulations to @JiaweiYang118 for winning the NVIDIA Fellowship! Jiawei has a long-term vision and deep, thoughtful insight in his research. Truly well-deserved! 🙌

Huge congratulations to my student @JiaweiYang118 on receiving this @NVIDIAAI fellowship! This is the first time a @CSatUSC @USCViterbi PhD student has received such a prestigious award. Also, huge congrats to the other recipients—you’re amazing! blogs.nvidia.com/blog/graduate-…



Congratulations to @gan_chuang and the team on this phenomenal project! I am very lucky to have witnessed its journey and to have had insightful discussions and received invaluable mentorship from so many of you!

We’re excited to announce the official release of our Genesis Simulator! github.com/Genesis-Embodi… In 2018, I decided to shift my research focus from vision to embodied AI, driven by a fascination with creating general-purpose agents capable of interacting with the physical…



Congratulations to the team on this outstanding achievement! Thinking back to my sweet early days in Boston as a newcomer to the domain, I am incredibly grateful to many people in this team for shaping my research journey and teaching me how to approach meaningful questions, make…

Everything you love about generative models — now powered by real physics! Announcing the Genesis project — after a 24-month large-scale research collaboration involving over 20 research labs — a generative physics engine able to generate 4D dynamical worlds powered by a physics…



Congratulations to @binghao_huang for making tactile sensing more accessible! Build a high-res tactile sensor in just 30 mins! Give it a try and make your robot capable of multimodal perception!

Want to use tactile sensing but not familiar with hardware? No worries! Just follow the steps, and you’ll have a high-resolution tactile sensor ready in 30 mins! It’s as simple as making a sandwich! 🥪 🎥 YouTube Tutorial: youtube.com/watch?v=8eTpFY… 🛠️ Open Source & Hardware…


