Simulation is easy. The real world is hard. We finally have a true benchmark called @RoboChallengeAI for embodied intelligence: real robots.
1/ 🚨How well does embodied intelligence perform in the real physical world? 👉 Try it now: robochallenge.ai We are thrilled to introduce @RoboChallengeAI, co-initiated by @Dexmal_AI and @huggingface: the first large-scale, real-robot-based benchmark platform for embodied intelligence.
2/ 🤖The first benchmark hosted on the platform, Table30, consists of 30 different tasks and 24,250 episodes of training data in total. Results for the first 6 baseline VLA algorithms can now be viewed on our website. Website: robochallenge.ai
3/ 📡 Advantages • Reproducibility & fairness: a carefully engineered testing protocol for stable results; • Openness: all test procedures are video-recorded and released; • Diverse tasks. See the tech report for details: robochallenge.ai/robochallenge_…
4/ ❓Why This Matters Robots are stepping into the real world, but there's still no common benchmark that's open, unified and reproducible. How can we measure progress, or even fairly compare methods, without a shared standard? We're fixing that.
5/ 💻Real-Robot Benchmark We selected a fleet of robots for their popularity: • UR5: a single 6-dof UR5 arm with a Robotiq gripper • Franka Panda: a 7-dof Franka arm • An Aloha-like bimanual machine • A 6-dof ARX-5 arm, mounted on a table
6/ 🛡Real-Robot Testing Online → A set of low-level APIs is formalized to provide the exact timestamp of each observation and the state of the action queue, enabling fine-grained control. → No Docker images or model checkpoints need to be exchanged.
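To illustrate the idea, here is a minimal sketch of the kind of low-level interface the thread describes: observations carry exact capture timestamps, and the client can inspect the pending action queue for latency-aware control. All class and method names here are hypothetical, not the actual RoboChallenge API.

```python
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    timestamp: float               # exact capture time of the observation
    joint_positions: List[float]   # robot state at that time

@dataclass
class RobotClient:
    """Illustrative client; a real one would talk to the robot server."""
    queue: List[List[float]] = field(default_factory=list)

    def get_observation(self) -> Observation:
        # Stub: returns a timestamped snapshot of a 6-dof arm.
        return Observation(timestamp=time.time(), joint_positions=[0.0] * 6)

    def action_queue_depth(self) -> int:
        # How many commanded actions are still pending execution.
        return len(self.queue)

    def send_actions(self, actions: List[List[float]]) -> None:
        self.queue.extend(actions)

client = RobotClient()
obs = client.get_observation()
# A policy can compensate for inference latency by checking how stale
# the observation is and how many actions remain queued.
staleness = time.time() - obs.timestamp
client.send_actions([[0.0] * 6, [0.1] * 6])
print(client.action_queue_depth())  # 2
```

Because timestamps and queue state cross the wire instead of model weights, no Docker images or checkpoints need to be exchanged, as the thread notes.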
7/ ⏱Large-Scale Task List: Table30 → We selected 30 tasks for our initial benchmark release to benchmark VLAs, whereas common real-robot benchmarks consist of only 3-5 tasks. → Demonstration data was collected for each task (around 1,000 episodes per task).
8/ Controlled Tester We control task preparation by matching visual inputs; we call this method the controlled tester. This way, the initial state of the scene and objects is largely fixed across evaluations of different models, making tests scalable.
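As a rough illustration of matching visual inputs (not the actual RoboChallenge tooling), a controlled tester could verify that a scene was reset correctly by comparing the current camera frame against a stored reference frame and flagging any mismatch above a pixel-difference threshold. The function and threshold below are illustrative assumptions.

```python
def scene_matches(reference, current, threshold=10.0):
    """Compare two grayscale frames (flat lists of equal length) by
    mean absolute per-pixel difference; True means the scene is reset
    closely enough to the reference to start an evaluation run."""
    diff = sum(abs(r - c) for r, c in zip(reference, current)) / len(reference)
    return diff < threshold

reference = [0] * 16                          # stored reference frame
print(scene_matches(reference, [1] * 16))     # True  (only sensor noise)
print(scene_matches(reference, [200] * 16))   # False (scene changed)
```

Fixing the initial state this way is what lets different models be scored against effectively the same scene.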
9/ Scientific Distribution of Tasks Tasks are categorized (1) by the difficulties they pose for a VLA solution, (2) by the type of robot, (3) by the intended location of the task scenario, and (4) by the properties of the main target object. 👆This shows good diversity and coverage.
10/ 🤖Base Models and Results → We tested popular open-source VLA algorithms. As the results show, even the strongest SOTA base models fail to achieve a high overall success rate, so we argue that our benchmark is a “necessity test” in the pursuit of general robotics.
11/ Evaluation & Submission → We’re scaling real-robot evaluation to accelerate embodied AI. → Ready to test your algorithms in the real world? ⬇️ Here’s how:
13/ RoboChallenge is more than a benchmark: it’s a public good for the robotics community. GitHub: github.com/RoboChallenge/… 😍 Whether you’re a robotics veteran or just entering the field, we’re here to support you!