
Joonkyu Min

@joonkyu_min

robot learning, robot safety | Undergrad @ SNU, Incoming MS student @ KAIST CLVR Lab

Joonkyu Min reposted

Ever want to enjoy all the privileged information in sim while seamlessly transferring to the real world? How can we correct policy mistakes after deployment? 👉Introducing GSWorld, a real2sim2real photo-realistic simulator with interaction physics and fully open-sourced code.


Joonkyu Min reposted

To push self-driving into situations wilder than reality, we built a neural network world simulator that can create entirely synthetic worlds for the Tesla to drive in. Video below is fully generated & not a real video


Joonkyu Min reposted

Punchline: World models == VQA (about the future)! Planning with world models can be powerful for robotics/control. But most world models are video generators trained to predict everything, including irrelevant pixels and distractions. We ask - what if a world model only…


Joonkyu Min reposted

🚀 Introducing SoftMimic: Compliant humanoids for an interactive future — bringing humanoids into the real world 🤝 🔗gmargo11.github.io/softmimic/ Current humanoids collapse when they touch the world — they can’t handle contact or deviation from their reference motion 😭 😎SoftMimic…


Joonkyu Min reposted

Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵


I’ll be presenting our work “CF3: Compact and Fast 3D Feature Fields” on the last day of @ICCVConference, together with Hyunjoon Lee! If you’re attending ICCV, feel free to drop by on Thursday, Oct 23 — I’d love to chat about anything 3D or robotics as well 👋 🖥️ Demo Session 5…


Joonkyu Min reposted

For those wondering, Unitree H2 neck has 2DOF actuation #IROS2025


Joonkyu Min reposted

Should robots have eyeballs? Human eyes move constantly and use variable resolution to actively gather visual details. In EyeRobot (eyerobot.net) we train a robot eyeball entirely with RL: eye movements emerge from experience driven by task-driven rewards.
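As a rough illustration of the task-driven-reward idea in that post (a toy sketch, not the actual EyeRobot environment, reward, or training code; the window size, target dynamics, and names below are assumptions), a gaze loop where only a small foveal window earns reward might look like:

```python
import numpy as np

# Toy sketch of a task-driven reward for gaze: the "eye" picks a gaze point in a
# unit square, only a small foveal window around it is seen sharply, and reward
# comes from the task (keeping a drifting target inside the fovea). All names and
# numbers are assumptions for illustration, not the EyeRobot setup.
rng = np.random.default_rng(0)
FOVEA = 0.1  # half-width of the high-resolution foveal window (assumption)

def step(gaze: np.ndarray, target: np.ndarray):
    """Return the task reward and the target's next position."""
    reward = float(np.all(np.abs(target - gaze) < FOVEA))               # 1 if target is foveated
    target = np.clip(target + rng.normal(0.0, 0.02, size=2), 0.0, 1.0)  # target drifts
    return reward, target

target = rng.uniform(0.0, 1.0, size=2)
total = 0.0
for _ in range(100):
    # A real agent would choose gaze with an RL policy conditioned on the foveated
    # observation; a random gaze stands in here just to make the loop runnable.
    gaze = rng.uniform(0.0, 1.0, size=2)
    reward, target = step(gaze, target)
    total += reward
print("task reward over 100 steps:", total)
```

A policy trained with RL against such a reward has to learn where to look on its own, which is the sense in which eye movements "emerge from experience."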


Joonkyu Min reposted

I'd even go so far as to say that quality doesn't matter either, because we don't yet know what is "good" data in the long run. The reason that Scale etc. make sense for CV/NLP is that better data -> better performance on some tasks of interest (better AP for 3D detection on…

Every week, another batch of robot data startups appears, selling payloads of dirt in a gold rush. But the market dynamics that created Scale and Surge don’t exist yet in robotics. Most naive robot trajectories are literally useless; quality is far more important than quantity.



Joonkyu Min reposted

PSA for the robotics community: Stop labeling affordances or distilling them from VLMs. Extract affordances from bimanual human videos instead! Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025! 🎉 🧵1/5


Joonkyu Min reposted

Introducing DEAS, a scalable offline RL framework utilizing action sequences with stable value learning. 💪🏼 SOTA performance on complex tasks in OGBench. 😳 DEAS can be used to improve VLAs in both simulation and real-world tasks. 🤗 Code and datasets are all open-sourced!


Joonkyu Min reposted

What's the right architecture for a VLA? VLM + custom action heads (π₀)? VLM with special discrete action tokens (OpenVLA)? Custom design on top of the VLM (OpenVLA-OFT)? Or... VLM with ZERO modifications? Just predict action as text. The results will surprise you. VLA-0:…
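For intuition about what "predict action as text" could mean in practice (a minimal sketch under assumptions: the bin count, action range, space-separated format, and function names below are illustrative, not VLA-0's actual tokenization), one way to round-trip a continuous action through plain text is:

```python
import numpy as np

# Hypothetical action <-> text round trip in the spirit of "predict action as text".
# BINS, the action range, and the output format are assumptions for illustration.
BINS = 1000            # discretization levels per action dimension (assumption)
LOW, HIGH = -1.0, 1.0  # assumed normalized action range

def action_to_text(action: np.ndarray) -> str:
    """Quantize each dimension to an integer in [0, BINS) and join as plain text."""
    scaled = (action - LOW) / (HIGH - LOW)                  # normalize to [0, 1]
    tokens = np.clip((scaled * BINS).astype(int), 0, BINS - 1)
    return " ".join(str(t) for t in tokens)                 # e.g. "550 350 510 ..."

def text_to_action(text: str) -> np.ndarray:
    """Parse the text back into a continuous action (bin centers)."""
    tokens = np.array([int(t) for t in text.split()], dtype=np.float64)
    return LOW + (tokens + 0.5) / BINS * (HIGH - LOW)

# Round trip a 7-DoF action (6 delta-pose terms + gripper), accurate to bin width.
action = np.array([0.10, -0.30, 0.02, 0.0, 0.0, 0.25, 1.0])
text = action_to_text(action)   # the kind of string a VLM would be trained to emit
print(text)
print(text_to_action(text))
```

The appeal of this style of interface is that the VLM needs no extra action head or special vocabulary: it emits ordinary text, and a thin wrapper parses it back into a continuous command.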


Joonkyu Min reposted

Introducing RL-100: Performant Robotic Manipulation with Real-World Reinforcement Learning. lei-kun.github.io/RL-100/ 7 real robot tasks, 900/900 successes. Up to 250 consecutive trials in one task, running 2 hours nonstop without failure. High success rate against physical…


Joonkyu Min reposted

How can we help *any* image-input policy generalize better? 👉 Meet PEEK 🤖 — a framework that uses VLMs to decide *where* to look and *what* to do, so downstream policies — from ACT, 3D-DA, or even π₀ — generalize more effectively! 🧵


Joonkyu Min reposted

🚀 Introducing N2M! In mobile manipulation, the performance of a manipulation policy is very sensitive to the robot’s initial pose. N2M guides the robot to a suitable pose for executing the manipulation policy. N2M comes with 5 key features - Check them out in the posts below!


Joonkyu Min reposted

I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.


Joonkyu Min reposted

Come see what robot learning can do for surgical automation! We’re excited to host the first Workshop on Automating Robotic Surgery with an amazing lineup of speakers. 🗓️ Sept. 27 09:30AM - 12:30PM 📍 Floor 3F, Room E7 🌐 …ng-robotic-surgery-workshop.github.io #CoRL2025 #CoRL @corl_conf
