Simulation is easy. The real world is hard. We finally have a true benchmark called @RoboChallengeAI for embodied intelligence: real robots.
1/ 🚨How well does embodied intelligence perform in the real physical world? 👉 Try it now: robochallenge.ai We are thrilled to introduce @RoboChallengeAI, co-initiated by @Dexmal_AI and @huggingface: the first large-scale real-robot benchmark platform for embodied…
2/ 🤖The first benchmark hosted on the platform, Table30, consists of 30 different tasks and 24,250 episodes of training data in total. Results for the first 6 baseline VLA algorithms can now be viewed on our website: robochallenge.ai
3/ 📡 Advantages
• Reproducibility & fairness: a carefully engineered testing protocol for stable results;
• Openness: all test procedures are video-recorded and released;
• Diversity: a broad range of tasks. See the tech report for details: robochallenge.ai/robochallenge_…
4/ ❓Why this matters: Robots are stepping into the real world, but there's still no common benchmark that's open, unified, and reproducible. How can we measure progress, or even fairly compare methods, without a shared standard? We're fixing that.
5/ 💻Real-Robot Benchmark We selected a fleet of robots for their popularity:
• UR5: a single 6-DoF UR5 arm with a Robotiq gripper
• Franka Panda: a 7-DoF Franka arm
• An ALOHA-like bimanual machine
• A 6-DoF ARX-5 arm, mounted on a table
6/ 🛡Real-Robot Testing Online → A set of low-level APIs is formalized to expose the exact timestamp of each observation and the state of the action queue, enabling fine-grained control (sketch below). → No Docker images or model checkpoints need to be exchanged.
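For intuition, here is a minimal Python sketch of what a timestamped, queue-aware remote control loop could look like. All names (RobotClient, get_observation, queue_state, push_actions, policy.act) are hypothetical illustrations, not the actual RoboChallenge API:

```python
import time

class RobotClient:
    """Hypothetical stand-in for a remote connection to a benchmark robot."""

    def get_observation(self):
        # Would return (obs, capture_timestamp): images/joint states plus the
        # exact time the observation was captured on the robot side.
        raise NotImplementedError

    def queue_state(self):
        # Would return how many queued actions remain unexecuted, so the
        # policy can decide when to send the next action chunk.
        raise NotImplementedError

    def push_actions(self, actions):
        # Would append a chunk of low-level actions to the robot's queue.
        raise NotImplementedError

def run_episode(client, policy, horizon_s=60.0, refill_below=4):
    """Stream actions to a remote robot; the model stays on the submitter's
    machine, so no Docker image or checkpoint ever changes hands."""
    deadline = time.monotonic() + horizon_s
    while time.monotonic() < deadline:
        obs, captured_at = client.get_observation()
        if client.queue_state() < refill_below:
            # The policy sees the capture timestamp, so it can account for
            # network latency before committing to the next action chunk.
            client.push_actions(policy.act(obs, captured_at))
```

The design point this illustrates: because only observations and action chunks cross the wire, submitters keep their weights private while the robot side keeps timing exact.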
7/ ⏱Large-Scale Task List: Table30 → We selected 30 tasks for our initial benchmark release to benchmark VLAs; common real-robot benchmarks consist of only 3-5 tasks. → Demonstration data was collected for each task (around 1,000 episodes per task).
8/ Controlled Tester We control task preparation by matching visual inputs, a method we call the controlled tester. This way, the initial state of the scene and objects is largely fixed across evaluations of different models, making tests scalable.
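As a rough sketch of the visual-matching idea, assuming a stored reference image of the desired initial scene: the metric (mean absolute pixel difference) and the threshold below are illustrative choices, not the protocol RoboChallenge actually uses.

```python
import cv2
import numpy as np

def scene_matches(reference_path: str, current_frame: np.ndarray,
                  max_mean_diff: float = 8.0) -> bool:
    """Return True when the live camera frame is close enough to the
    reference image for the initial state to count as properly reset."""
    reference = cv2.imread(reference_path)
    if reference is None:
        raise FileNotFoundError(reference_path)
    if reference.shape != current_frame.shape:
        current_frame = cv2.resize(
            current_frame, (reference.shape[1], reference.shape[0]))
    # Mean absolute pixel difference: small when the tester has restored the
    # objects to (nearly) the same poses as in the reference scene.
    diff = cv2.absdiff(reference, current_frame)
    return float(diff.mean()) <= max_mean_diff
```

Gating each episode on a check like this is what keeps initial conditions comparable across models without requiring robotic scene resets.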
9/ Scientific Distribution of Tasks Tasks are distributed along four axes: (1) the difficulty they pose to a VLA solution; (2) the type of robot; (3) the intended location of the task scenario; (4) the properties of the main target object. 👆This gives good diversity and coverage.
10/ 🤖Base Models and Results → We tested popular open-source VLA algorithms. As the results show, even the strongest SOTA base models fail to achieve a high overall success rate, so we argue that our benchmark is a "necessity test" in the pursuit of general robotics.
11/ Evaluation & Submission → We’re scaling real-robot evaluation to accelerate embodied AI. → Ready to test your algorithms in the real world? ⬇️ Here’s how:
13/ RoboChallenge is more than a benchmark: it's a public good for the robotics community. GitHub: github.com/RoboChallenge/… 😍 Whether you're a robotics veteran or just entering the field, we're here to support you!