
Joonkyu Min

@joonkyu_min

robot learning, robot safety | Undergrad @ SNU, Incoming MS student @ KAIST CLVR Lab

Joonkyu Min reposted

Meet Anny. One model. Every body. A new human model that fits everyone!
✅ Works for all ages
✅ Free & open (Apache 2.0)
✅ Privacy-friendly (no scans)
✅ Simple parameters
Blog: tinyurl.com/5fsekm9z
Paper: tinyurl.com/mtrw57ap
Code: github.com/naver/anny
Demo:…


Joonkyu Min reposted

today, we’re open-sourcing the largest egocentric dataset in history.
- 10,000 hours
- 2,153 factory workers
- 1,080,000,000 frames
the era of data scaling in robotics is here. (thread)


Joonkyu Min reposted

Meet BFM-Zero: A Promptable Humanoid Behavioral Foundation Model w/ Unsupervised RL 👉 lecar-lab.github.io/BFM-Zero/
🧩 ONE latent space for ALL tasks
⚡ Zero-shot goal reaching, tracking, and reward optimization (any reward at test time), from ONE policy
🤖 Natural recovery & transition
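How "any reward at test time" can collapse into a single latent prompt is easiest to see in forward-backward / successor-feature style models. The sketch below is only that generic recipe in numpy, not BFM-Zero's code; the embedding B and the rule z = E[r(s) B(s)] are assumptions about this family of methods, not claims about the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 512                       # latent dim, number of reward-labeled states
B = rng.normal(size=(n, d))         # stand-in for a learned backward embedding B(s)
r = rng.uniform(size=n)             # any reward, evaluated on those states at test time

z = (r[:, None] * B).mean(axis=0)   # z = E[r(s) B(s)]: one vector encodes the task
z /= np.linalg.norm(z)              # FB implementations typically rescale/normalize z
print(z.shape)                      # (8,): this single latent prompts the frozen policy
```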


Joonkyu Min reposted

What if robots could improve themselves by learning from their own failures in the real world? Introducing 𝗣𝗟𝗗 (𝗣𝗿𝗼𝗯𝗲, 𝗟𝗲𝗮𝗿𝗻, 𝗗𝗶𝘀𝘁𝗶𝗹𝗹) — a recipe that enables Vision-Language-Action (VLA) models to self-improve for high-precision manipulation tasks. PLD…
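As a rough mental model of the Probe, Learn, Distill loop (my reading of the tweet, not the authors' code): probe the base policy for failures, practice from exactly those states, and fold the fixes back into the base policy. The toy below swaps the VLA for an action table on a 1-D chain so the loop runs end to end; `succeeds` and the brute-force "Learn" step are stand-ins for real rollouts and RL.

```python
import random

random.seed(0)
N, GOAL = 10, 9
policy = {s: random.choice([-1, +1]) for s in range(N)}   # "base VLA": a lookup table

def succeeds(s, pi, horizon=20):
    for _ in range(horizon):
        if s == GOAL:
            return True
        s = min(max(s + pi[s], 0), N - 1)
    return False

for _ in range(N):                                         # a few PLD rounds
    failures = [s for s in range(N) if not succeeds(s, policy)]   # Probe
    if not failures:
        break
    for s in failures:                                     # Learn (stand-in for RL:
        for a in (+1, -1):                                 # practice from the failure)
            if succeeds(s, {**policy, s: a}):
                policy[s] = a                              # Distill the fix back in
                break

print(all(succeeds(s, policy) for s in range(N)))          # True: it self-improved
```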


Joonkyu Min reposted

Apple uses Gaussian splatting to render Apple Vision Pro Personas. No wonder they look so great.
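The core of Gaussian-splat rendering is plain alpha compositing: project each 3D Gaussian to a screen-space footprint, depth-sort, and blend front to back. A minimal sketch of that blend (generic 3DGS math, nothing Apple-specific):

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing: C = sum_i c_i * a_i * prod_{j<i} (1 - a_j).
    In 3DGS, a_i comes from the splat's projected 2-D Gaussian footprint."""
    T = 1.0                          # transmittance: light not yet absorbed
    out = np.zeros(3)
    for c, a in zip(colors, alphas):
        out += T * a * np.asarray(c, float)
        T *= 1.0 - a                 # each splat occludes everything behind it
    return out

# Three depth-sorted splats (red, green, blue) covering one pixel:
print(composite([(1, 0, 0), (0, 1, 0), (0, 0, 1)], [0.6, 0.5, 0.9]))
# -> [0.6, 0.2, 0.18]: the frontmost splat dominates
```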


Joonkyu Min reposted

Ever want to enjoy all the privileged information in sim while seamlessly transferring to the real world? How can we correct policy mistakes after deployment? 👉Introducing GSWorld, a real2sim2real photo-realistic simulator with interaction physics and fully open-sourced code.


Joonkyu Min reposted

To push self-driving into situations wilder than reality, we built a neural network world simulator that can create entirely synthetic worlds for the Tesla to drive in. Video below is fully generated & not a real video


Joonkyu Min reposted

Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵


I’ll be presenting our work “CF3: Compact and Fast 3D Feature Fields” on the last day of @ICCVConference, together with Hyunjoon Lee! If you’re attending ICCV, feel free to drop by on Thursday, Oct 23 — I’d love to chat about anything 3D or robotics as well 👋 🖥️ Demo Session 5…


Joonkyu Min reposted

Should robots have eyeballs? Human eyes move constantly and use variable resolution to actively gather visual details. In EyeRobot (eyerobot.net) we train a robot eyeball entirely with RL: eye movements emerge from experience driven by task-driven rewards.
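"Eye movements emerge from task rewards" can be shown in miniature: a 1-D gaze policy trained with REINFORCE learns to saccade onto a target simply because that is what the reward pays for. A toy sketch (my construction, not EyeRobot's training code):

```python
import numpy as np

rng = np.random.default_rng(0)
w, sigma, lr, baseline = 0.0, 0.1, 0.05, 0.0   # policy weight, noise, step size

for _ in range(2000):
    target = rng.uniform(-1, 1)       # where the task-relevant object sits
    mu = w * target                   # gaze policy: saccade proportional to offset
    gaze = rng.normal(mu, sigma)      # exploration noise
    reward = -(target - gaze) ** 2    # task reward: keep the target foveated
    baseline += 0.01 * (reward - baseline)          # variance-reducing baseline
    # REINFORCE: grad of log N(gaze; mu, sigma^2) wrt w is (gaze - mu)/sigma^2 * target
    w += lr * (reward - baseline) * (gaze - mu) / sigma**2 * target

print(f"w = {w:.2f}")                 # ~1.00: "look at the target" emerged from reward
```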


Joonkyu Min reposted

I'd even go so far as to say that quality doesn't matter either, because we don't yet know what is "good" data in the long run. The reason that Scale etc. make sense for CV/NLP is that better data -> better performance on some tasks of interest (better AP for 3D detection on…

Every week, another batch of robot data startups appears, selling payload dirt in a gold rush. But the market dynamics that created Scale and Surge don’t exist yet in robotics. Most naive robot trajectories are literally useless, quality is far more important than quantity.



Joonkyu Min reposted

How can we help *any* image-input policy generalize better? 👉 Meet PEEK 🤖 — a framework that uses VLMs to decide *where* to look and *what* to do, so downstream policies — from ACT, 3D-DA, or even π₀ — generalize more effectively! 🧵
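The interface described here (a VLM picks *where* to look and *what* to do, and the downstream policy sees a re-focused image) might look roughly like the sketch below. `query_vlm` and its output format are hypothetical placeholders, not PEEK's API.

```python
import numpy as np

def query_vlm(image, instruction):
    # Hypothetical stand-in for the real VLM call: the actual model predicts a
    # salient point ("where to look") and a coarse 2-D end-effector path
    # ("what to do"); here we just return fixed values.
    return {"look": (120, 160), "path": [(120, 160), (96, 200)]}

def peek_observation(image, instruction, half=48):
    hint = query_vlm(image, instruction)
    y, x = hint["look"]
    # Re-focus the observation on the VLM-chosen region (assumes the crop
    # stays inside the image, which holds for this demo).
    crop = image[y - half: y + half, x - half: x + half].copy()
    # Burn the "what to do" path into the pixels, so any image-only policy
    # (ACT, a 3D diffusion policy, pi_0, ...) can consume it unchanged.
    for py, px in hint["path"]:
        crop[py - (y - half), px - (x - half)] = (255, 0, 0)
    return crop

obs = peek_observation(np.zeros((240, 320, 3), np.uint8), "pick up the mug")
print(obs.shape)  # (96, 96, 3): a focused view with the action hint overlaid
```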


Joonkyu Min reposted

🚀 Introducing N2M! In mobile manipulation, the performance of a manipulation policy is very sensitive to the robot’s initial pose. N2M guides the robot to a suitable pose for executing the manipulation policy. N2M comes with 5 key features: check them out in the posts below!
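A minimal sketch of the initial-pose idea as stated, assuming a learned pose-scoring model: score candidate base poses for how likely the manipulation policy is to succeed from them, navigate to the best one, then hand over. The `success_score` heuristic below is a made-up stand-in for that learned predictor.

```python
import numpy as np

def success_score(pose, target):
    # Made-up heuristic standing in for a learned predictor: prefer base poses
    # about 0.6 m from the target that roughly face it.
    x, y, yaw = pose
    dist = np.hypot(target[0] - x, target[1] - y)
    facing = np.cos(np.arctan2(target[1] - y, target[0] - x) - yaw)
    return -(dist - 0.6) ** 2 + 0.2 * facing

rng = np.random.default_rng(0)
target = (2.0, 1.0)  # where the manipulation should happen
candidates = [(rng.uniform(0, 3), rng.uniform(0, 2), rng.uniform(-np.pi, np.pi))
              for _ in range(256)]                      # sampled base poses
best = max(candidates, key=lambda p: success_score(p, target))
print(best)  # navigate here first, then run the manipulation policy
```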


Joonkyu Min reposted

Come see what robot learning can do for surgical automation! We’re excited to host the first Workshop on Automating Robotic Surgery with an amazing lineup of speakers. 🗓️ Sept. 27 09:30AM - 12:30PM 📍 Floor 3F, Room E7 🌐 …ng-robotic-surgery-workshop.github.io #CoRL2025 #CoRL @corl_conf


Joonkyu Min reposted

Check out our #IROS2025 work on robot safety! We show low-latency teleop up to the limits of the hardware's capabilities, all while enforcing hundreds of CBF constraints at kilohertz control rates. 📄 arxiv.org/pdf/2503.06736 🌐 stanfordasl.github.io/oscbf/ With @danielpmorton
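For readers new to CBFs: a safety filter solves a small QP that keeps the commanded velocity as close as possible to the operator's while enforcing ḣ(x) ≥ −α·h(x) for every barrier h. With one constraint and single-integrator dynamics the QP has a closed form, which the sketch below implements (a textbook toy, not the OSCBF solver from the paper):

```python
import numpy as np

def cbf_filter(x, u_nom, x_obs, radius, alpha=1.0):
    """Return the control closest to u_nom satisfying h_dot >= -alpha * h,
    for h(x) = ||x - x_obs||^2 - radius^2 and single-integrator dynamics x_dot = u."""
    h = np.sum((x - x_obs) ** 2) - radius ** 2   # barrier value (> 0 means safe)
    a = 2.0 * (x - x_obs)                        # gradient of h, so h_dot = a @ u
    slack = a @ u_nom + alpha * h                # constraint residual at u_nom
    if slack >= 0.0:                             # nominal command is already safe
        return u_nom
    # One affine constraint gives the QP a closed form: shift u_nom along a.
    return u_nom - (slack / (a @ a)) * a

x = np.array([0.9, 0.0])                         # robot near an obstacle at origin
u = cbf_filter(x, u_nom=np.array([-1.0, 0.0]), x_obs=np.zeros(2), radius=0.5)
print(u)                                         # [-0.31, 0.]: unsafe motion damped
```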


Joonkyu Min reposted

Here is my talk at @MIT (after some delay😅) I made this talk last year when I was thinking about a paradigm shift. This delayed posting is timely as we just released o1, which I believe is a new paradigm. It's a good time to zoom out for high level thinking. (1/11)

