Joonkyu Min
@joonkyu_min
robot learning, robot safety | Undergrad @ SNU, Incoming MS student @ KAIST CLVR Lab
Ever want to enjoy all the privileged information in sim while seamlessly transferring to the real world? How can we correct policy mistakes after deployment? 👉Introducing GSWorld, a real2sim2real photo-realistic simulator with interaction physics and fully open-sourced code.
To push self-driving into situations wilder than reality, we built a neural network world simulator that can create entirely synthetic worlds for the Tesla to drive in. Video below is fully generated & not a real video
Punchline: World models == VQA (about the future)! Planning with world models can be powerful for robotics/control. But most world models are video generators trained to predict everything, including irrelevant pixels and distractions. We ask: what if a world model only…
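Not from the post itself, but a minimal sketch of the "world model as VQA about the future" framing: instead of generating every future pixel, the model rolls forward a plan and answers a question about the outcome. The `FutureVQA` class, its toy grid dynamics, and the question format below are my own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class State:
    x: int
    y: int

class FutureVQA:
    """Toy 'world model' that answers questions about future outcomes
    instead of generating full future observations (pixels)."""

    def __init__(self, goal: tuple[int, int]):
        self.goal = goal

    def rollout(self, state: State, actions: list[str]) -> State:
        # Simple deterministic grid dynamics; stands in for a learned model.
        step = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
        for a in actions:
            dx, dy = step[a]
            state = State(state.x + dx, state.y + dy)
        return state

    def answer(self, state: State, actions: list[str], question: str) -> str:
        """Answer a question about the future rather than rendering it."""
        future = self.rollout(state, actions)
        if question == "Does the agent reach the goal?":
            return "yes" if (future.x, future.y) == self.goal else "no"
        return "unknown"

model = FutureVQA(goal=(2, 0))
plan = ["right", "right"]
print(model.answer(State(0, 0), plan, "Does the agent reach the goal?"))  # -> "yes"
```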
🚀 Introducing SoftMimic: Compliant humanoids for an interactive future — bringing humanoids into the real world 🤝 🔗gmargo11.github.io/softmimic/ Current humanoids collapse when they touch the world — they can’t handle contact or deviation from their reference motion 😭 😎SoftMimic…
Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵
I’ll be presenting our work “CF3: Compact and Fast 3D Feature Fields” on the last day of @ICCVConference, together with Hyunjoon Lee! If you’re attending ICCV, feel free to drop by on Thursday, Oct 23 — I’d love to chat about anything 3D or robotics as well 👋 🖥️ Demo Session 5…
For those wondering, the Unitree H2 neck has 2-DoF actuation #IROS2025
Should robots have eyeballs? Human eyes move constantly and use variable resolution to actively gather visual details. In EyeRobot (eyerobot.net) we train a robot eyeball entirely with RL: eye movements emerge from experience driven by task-driven rewards.
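Not EyeRobot's code, just a sketch of the shape of the idea: gaze control is the action, the reward is task-driven rather than a supervised gaze label, and the gaze behavior is found by a simple random-search stand-in for an RL update. The `FovealCamera` environment, its reward, and the linear gaze policy are illustrative assumptions.

```python
import numpy as np

class FovealCamera:
    """Toy env: the action moves gaze (pan, tilt); the observation is a coarse
    'peripheral' cue of where the task-relevant target is relative to gaze;
    reward is higher the closer the fovea lands to the target."""

    def __init__(self, seed: int = 0):
        self.rng = np.random.default_rng(seed)

    def reset(self) -> np.ndarray:
        self.gaze = np.zeros(2)
        self.target = self.rng.uniform(-1, 1, size=2)
        return self.target - self.gaze

    def step(self, action: np.ndarray):
        self.gaze = np.clip(self.gaze + 0.2 * np.clip(action, -1, 1), -1, 1)
        obs = self.target - self.gaze
        reward = -np.linalg.norm(obs)  # task-driven, not a hand-labelled gaze target
        return obs, reward

def episode_return(env: FovealCamera, W: np.ndarray, steps: int = 15) -> float:
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        obs, r = env.step(W @ obs)  # linear gaze policy: saccade toward the cue
        total += r
    return total

# Random-search stand-in for an RL update: keep parameter perturbations that
# increase the task-driven return.
env, W = FovealCamera(), np.zeros((2, 2))
best = np.mean([episode_return(env, W) for _ in range(5)])
for _ in range(300):
    cand = W + 0.1 * np.random.randn(2, 2)
    ret = np.mean([episode_return(env, cand) for _ in range(5)])
    if ret > best:
        best, W = ret, cand
print("learned gaze gain matrix:\n", W)
```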
I'd even go as far as to say that quality doesn't matter either, because we don't yet know what is "good" data in the long run. The reason that Scale etc. make sense for CV/NLP is that better data -> better performance on some tasks of interest (better AP for 3D detection on…
Every week, another batch of robot data startups appears, selling payloads of dirt in a gold rush. But the market dynamics that created Scale and Surge don't exist yet in robotics. Most naive robot trajectories are literally useless; quality is far more important than quantity.
PSA for the robotics community: Stop labeling affordances or distilling them from VLMs. Extract affordances from bimanual human videos instead! Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025! 🎉 🧵1/5
Introducing DEAS, a scalable offline RL framework utilizing action sequences with stable value learning. 💪🏼 SOTA performance in complex tasks in OGBench. 😳 DEAS can be used to improve VLA in both simulation and real-world tasks. 🤗 Code and datasets are all open-sourced!
What's the right architecture for a VLA? VLM + custom action heads (π₀)? VLM with special discrete action tokens (OpenVLA)? Custom design on top of the VLM (OpenVLA-OFT)? Or... VLM with ZERO modifications? Just predict action as text. The results will surprise you. VLA-0:…
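The truncated post doesn't show how VLA-0 actually formats actions, so the binning below is purely my assumption: a minimal sketch of the "predict action as text" idea, where each continuous action dimension is discretized to an integer and emitted as ordinary text tokens an unmodified VLM can produce.

```python
import numpy as np

LOW, HIGH, BINS = -1.0, 1.0, 256  # assumed normalized action range and bin count

def action_to_text(action: np.ndarray) -> str:
    """Discretize each action dimension into an integer bin and emit plain text."""
    bins = np.clip(((action - LOW) / (HIGH - LOW) * (BINS - 1)).round(), 0, BINS - 1)
    return " ".join(str(int(b)) for b in bins)

def text_to_action(text: str) -> np.ndarray:
    """Parse the VLM's text output back into a continuous action."""
    bins = np.array([int(tok) for tok in text.split()], dtype=np.float64)
    return bins / (BINS - 1) * (HIGH - LOW) + LOW

a = np.array([0.12, -0.7, 0.0, 1.0])
txt = action_to_text(a)
print(txt, text_to_action(txt))  # round-trips within one bin of quantization error
```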
Introducing RL-100: Performant Robotic Manipulation with Real-World Reinforcement Learning. lei-kun.github.io/RL-100/ 7 real robot tasks, 900/900 successes. Up to 250 consecutive trials in one task, running 2 hours nonstop without failure. High success rate against physical…
How can we help *any* image-input policy generalize better? 👉 Meet PEEK 🤖 — a framework that uses VLMs to decide *where* to look and *what* to do, so downstream policies — from ACT, 3D-DA, or even π₀ — generalize more effectively! 🧵
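A hedged sketch of the "where to look / what to do" interface described in the post: a VLM picks a region and a short subtask, and the downstream policy only sees the cropped, task-relevant observation. The `fake_vlm` call and its output format are hypothetical stand-ins, not PEEK's actual API.

```python
import numpy as np

def fake_vlm(image: np.ndarray, task: str) -> dict:
    """Stand-in for a VLM that returns a region to attend to and a short subtask."""
    h, w = image.shape[:2]
    return {"point": (h // 2, w // 2), "subtask": "grasp the mug handle"}

def crop_around(image: np.ndarray, point: tuple, size: int = 64) -> np.ndarray:
    """Crop the observation around the VLM-selected point so the downstream
    policy (ACT, 3D-DA, pi0, ...) sees mostly task-relevant pixels."""
    r, c = point
    half = size // 2
    r0, c0 = max(r - half, 0), max(c - half, 0)
    return image[r0:r0 + size, c0:c0 + size]

image = np.zeros((224, 224, 3), dtype=np.uint8)
hint = fake_vlm(image, "put the mug on the shelf")
policy_input = {"rgb": crop_around(image, hint["point"]), "instruction": hint["subtask"]}
print(policy_input["rgb"].shape, policy_input["instruction"])
```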
🚀 Introducing N2M! In mobile manipulation, the performance of a manipulation policy is very sensitive to the robot’s initial pose. N2M guides the robot to a suitable pose for executing the manipulation policy. N2M comes with 5 key features - Check them out in the posts below!
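Not N2M's code: a minimal sketch of the underlying idea, i.e. score candidate initial base poses with a learned success predictor and navigate to the best one before running the manipulation policy. The `pose_success_score` heuristic and sampling scheme below are illustrative assumptions.

```python
import numpy as np

def pose_success_score(base_pose: np.ndarray, object_xy: np.ndarray) -> float:
    """Stand-in for a learned model predicting the manipulation policy's success
    probability from the robot's initial (x, y, yaw) pose; here: prefer poses
    about 0.6 m from the object and facing it."""
    x, y, yaw = base_pose
    to_obj = object_xy - np.array([x, y])
    dist = np.linalg.norm(to_obj)
    facing = np.cos(yaw - np.arctan2(to_obj[1], to_obj[0]))
    return float(np.exp(-((dist - 0.6) ** 2) / 0.05) * max(facing, 0.0))

object_xy = np.array([1.0, 0.5])
rng = np.random.default_rng(0)
candidates = np.column_stack([
    rng.uniform(0, 2, 200), rng.uniform(-1, 2, 200), rng.uniform(-np.pi, np.pi, 200)
])
best = candidates[np.argmax([pose_success_score(p, object_xy) for p in candidates])]
print("navigate to base pose (x, y, yaw):", best)  # then run the manipulation policy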
I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration Built directly on MuJoCo Warp with the abstractions you love.
Come see what robot learning can do for surgical automation! We’re excited to host the first Workshop on Automating Robotic Surgery with an amazing lineup of speakers. 🗓️ Sept. 27 09:30AM - 12:30PM 📍 Floor 3F, Room E7 🌐 …ng-robotic-surgery-workshop.github.io #CoRL2025 #CoRL @corl_conf