This is the ImageNet moment for real robotics. But instead of classifying images, it's about robots acting in our world.
1/ 🚨 How well does embodied intelligence perform in the real physical world? 👉 Try it now: [robochallenge.ai] We are thrilled to introduce [@RoboChallengeAI], co-initiated by [@Dexmal_AI] and [@huggingface]: the first large-scale real-robot benchmark platform of…
2/ 🤖 The first benchmark hosted on the platform, Table30, consists of 30 different tasks and 24,250 training episodes in total. The results of the first 6 baseline VLA algorithms can now be viewed on our website. Website: [robochallenge.ai]
3/ 📡 Advantages
- Reproducibility & fairness: a carefully engineered testing protocol for stable results;
- Openness: all test procedures are video-recorded and released;
- Diverse tasks.
See the tech report for details: [robochallenge.ai/robochallenge_…]
4/ ❓ Why this matters: Robots are stepping into the real world, but there's still no common benchmark that's open, unified, and reproducible. How can we measure progress, or even fairly compare methods, without a shared standard? We're fixing that.
5/ 💻 Real-Robot Benchmark
We selected a fleet of robots for their popularity:
• UR5: a single 6-DoF UR5 arm with a Robotiq gripper
• Franka Panda: a 7-DoF Franka arm
• An Aloha-like bimanual machine
• A 6-DoF ARX-5 arm, mounted on a table
6/ 🛡 Real-Robot Testing Online
→ A set of low-level APIs is formalized to expose the exact timestamp of each observation and the state of the action queue, enabling fine-grained control.
→ No Docker images or model checkpoints need to be exchanged.
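To make that interaction model concrete, here is a minimal client-side sketch. Every name in it (`RemoteRobotClient`, `get_observation`, `push_actions`, the `"pending"` field) is a hypothetical placeholder, not the actual RoboChallenge API; the point is only the pattern the tweet describes: poll timestamped observations, run the policy on your own machine, and append action chunks to a robot-side queue.

```python
import time

class RemoteRobotClient:
    """Hypothetical wrapper around a low-level real-robot testing API."""

    def __init__(self, session):
        # `session` is whatever transport (HTTP/gRPC) the platform exposes; assumed here.
        self.session = session

    def get_observation(self):
        # Returns images and joint states, each tagged with its capture timestamp.
        return self.session.request("observation")

    def push_actions(self, action_chunk):
        # Appends low-level actions to the robot-side queue and reports
        # how many queued actions are still waiting to execute.
        return self.session.request("enqueue", payload=action_chunk)


def control_loop(client, policy, horizon_s=60.0):
    """Run a policy remotely: the model stays on the submitter's machine,
    so no Docker image or checkpoint is ever shipped to the robot host."""
    start = time.time()
    while time.time() - start < horizon_s:
        obs = client.get_observation()
        action_chunk = policy(obs["images"], obs["state"], obs["timestamp"])
        queue_state = client.push_actions(action_chunk)
        # Use the reported queue depth to pace inference against execution.
        if queue_state["pending"] > 10:
            time.sleep(0.05)
```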
7/ ⏱ Large-Scale Task List: Table30
→ We selected 30 tasks for our initial benchmark release to benchmark VLAs, whereas common real-robot benchmarks consist of only 3-5 tasks.
→ Demonstration data was collected for each task (around 1,000 episodes per task).
8/ Controlled Tester
We control task preparation by matching visual inputs, and we call this method the controlled tester. In this way, the initial state of the scene and objects is largely fixed across evaluations of different models, making the tests scalable.
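One concrete way to read "matching visual inputs": before each episode, the tester compares the live camera view against a stored reference image of the task's initial layout and rearranges objects until the two agree. The metric and threshold below are assumptions for illustration, not the platform's actual matching criterion.

```python
import numpy as np

def scene_matches_reference(live_frame: np.ndarray,
                            reference_frame: np.ndarray,
                            threshold: float = 12.0) -> bool:
    """Illustrative check that the live camera view matches the reference layout.

    Both frames are HxWx3 uint8 images from the same fixed camera. The tester
    rearranges objects until the mean absolute pixel difference drops below
    the threshold, so every model starts the task from (nearly) the same state.
    """
    diff = np.abs(live_frame.astype(np.float32) - reference_frame.astype(np.float32))
    return float(diff.mean()) < threshold
```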
9/ Scientific Distribution of Tasks
Tasks are distributed (1) by the difficulties they pose for a VLA solution, (2) by the type of robot, (3) by the intended location of the task scenario, and (4) by the properties of the main target object. 👆 This shows good diversity and coverage.
10/ 🤖 Base Models and Results
→ We tested popular open-source VLA algorithms. As the results show, even the strongest state-of-the-art base model fails to achieve a high overall success rate, so we argue that our benchmark is a "necessity test" in the pursuit of general robotics.
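For readers wondering how an "overall success rate" is aggregated across 30 tasks, here is a plain macro-average sketch. The task names and data are made up, and the official leaderboard metric may weight tasks or award partial credit differently.

```python
from statistics import mean

def overall_success_rate(episode_results: dict[str, list[bool]]) -> float:
    """Average per-task success rates into one benchmark number.

    `episode_results` maps each task name to its episode outcomes
    (True = success). Averaging per task first keeps tasks with more
    evaluation episodes from dominating the overall score.
    """
    per_task = [mean(outcomes) for outcomes in episode_results.values() if outcomes]
    return mean(per_task) if per_task else 0.0

# Example with two hypothetical tasks: (2/3 + 1/3) / 2 = 0.5
print(overall_success_rate({
    "fold_towel": [True, False, True],
    "stack_cups": [False, False, True],
}))
```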
11/ Evaluation & Submission
→ We're scaling real-robot evaluation to accelerate embodied AI.
→ Ready to test your algorithms in the real world? ⬇️ Here's how:
13/ RoboChallenge is more than a benchmark: it's a public good for the robotics community. GitHub: github.com/RoboChallenge/… 😍 Whether you're a robotics veteran or just entering the field, we're here to support you!
I hope you've found this thread helpful. Follow me @jackcoder0 for more. Like/Repost the quote below if you can:
This is the ImageNet moment for real robotics. But instead of classifying images, it's about robots acting in our world.