🚀 Introducing KUDA: using keypoints as an intermediate representation to enable open-vocabulary robotic manipulation! 🤖✨ Our latest research, accepted to #ICRA2025, unifies dynamics learning and visual prompting through keypoints, enabling robots to handle complex tasks with…
Imitation learning is not merely about collecting large-scale demonstration data; it requires effective data collection and curation. FSC is a great example of this! Join Lihan’s session and chat with him to learn how to make your policy more general from a data-centric perspective!
The robot neck is COOL! Active perception could be the next big step—by learning where to see, the robot can then learn how to act, unlocking even more impressive capabilities! Congrats!
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
Will be presenting KUDA at #ICRA2025 today! Looking forward to chatting with old and new friends! 📍 Room 404 (Regular Session WeET16) 📅 May 21 (Wednesday), 5:00 pm–5:05 pm
Thank you @janusch_patas for highlighting our work! We are advancing visual representations such as Gaussian Splatting to empower robotics! Through building structured world models for deformable objects, our approach creates a neural-based real-to-sim digital twin from…
Learning from videos of humans performing tasks provides valuable semantic and motion data for scaling robot generalists. Translating human actions into robotic capabilities remains an exciting challenge—Humanoid-X and UH1 demonstrate impressive advancements!
Introducing Humanoid-X and UH-1! Hopefully we can scale up humanoid learning with Internet data as soon as possible!
What a day! The community has successfully reproduced this highly accessible tactile sensor developed by @binghao_huang. Step into a new era of multi-modal sensing!
Reproducibility has long been a key challenge in hardware-related robotics research. In just a month since its release, our scalable tactile sensor has been reproduced and adopted worldwide—from academia to industry—thanks to @binghao_huang and the team's commitment to making…
Huge congratulations to @JiaweiYang118 for winning the NVIDIA Fellowship! Jiawei has a long-term vision and deep, thoughtful insight in his research. Truly well-deserved! 🙌
Huge congratulations to my student @JiaweiYang118 on receiving this @NVIDIAAI fellowship! It's the first time a @CSatUSC @USCViterbi PhD student has received such a prestigious award. Also, huge congrats to the other recipients—you’re amazing! blogs.nvidia.com/blog/graduate-…
Congratulations to @gan_chuang and the team on this phenomenal project! I am very lucky to have witnessed its journey and to have had insightful discussions and received invaluable mentorship from so many of you!
We’re excited to announce the official release of our Genesis Simulator! github.com/Genesis-Embodi… In 2018, I decided to shift my research focus from vision to embodied AI, driven by a fascination with creating general-purpose agents capable of interacting with the physical…
Congratulations to the team on this outstanding achievement! Thinking back to my sweet early days in Boston as a newcomer to the domain, I am incredibly grateful to many people in this team for shaping my research journey and teaching me how to approach meaningful questions, make…
Everything you love about generative models—now powered by real physics! Announcing the Genesis project—after a 24-month large-scale research collaboration involving over 20 research labs—a generative physics engine able to generate 4D dynamical worlds powered by a physics…
Congratulations to @binghao_huang for making tactile sensing more accessible! Build a high-res tactile sensor in just 30 minutes! Give it a try and make your robot capable of multimodal perception!
Want to use tactile sensing but not familiar with hardware? No worries! Just follow the steps, and you’ll have a high-resolution tactile sensor ready in 30 mins! It’s as simple as making a sandwich! 🥪 🎥 YouTube Tutorial: youtube.com/watch?v=8eTpFY… 🛠️ Open Source & Hardware…