
Nathan Ratliff

@robot_trainer

Director of Robotic Systems @NVIDIA. Isaac Cortex, cobots, geometric methods; PhD CMU, research Max Planck, TTI-C, co-founder Lula Robotics, eng Google, Amazon

Nathan Ratliff reposted

super excited that we won the best student paper for videomimic! unfortunately i was packing in the hotel during the awards ceremony 😅

Congratulations to the videomimic team for winning the best student paper award at CoRL 2025 🥹🎉 Grateful to the CoRL community for the recognition!



isaac lab is an enabler! without its tiled rendering, dextrah-rgb wouldn't be possible. sim2real rl is the holy grail of robotics, and isaac lab brings together the perfect combination of technologies to legitimately start making that a reality.

Our whitepaper on Isaac Lab is out! Isaac Lab is a natural successor to Isaac Gym, which pioneered GPU-accelerated simulation for robotics. It subsumes all the features of Gym and provides the latest advances in simulation technology to robotics researchers. It also supports…



Nathan Ratliff reposted

Our latest work performs sim2real dexterous grasping using end-to-end depth RL.


our dextrah-rgb code is out! that includes our vectorized geometric fabrics library we've been using for safe control of the robot.

Happy to announce that we have finally open sourced the code for DextrAH-RGB along with Geometric Fabrics: github.com/NVlabs/DEXTRAH github.com/NVlabs/FABRICS



this is really cool. i've always thought learning-based methods were the right approach to global motion generation. nice work! (and all the demos! super robust and general system)

Ever wish a robot could just move to any goal in any environment—avoiding all collisions and reacting in real time? 🚀Excited to share our #CoRL2025 paper, Deep Reactive Policy (DRP), a learning-based motion planner that navigates complex scenes with moving obstacles—directly…



Nathan Ratliff reposted

Regardless of whether you plan to use them in applications, everyone should learn about Gaussian processes, and Bayesian methods. They provide a foundation for reasoning about model construction and all sorts of deep learning behaviour that would otherwise appear mysterious.
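For anyone who wants to see what the tweet above is pointing at, here is a minimal sketch of Gaussian process regression in NumPy. It assumes a zero-mean prior with a squared-exponential (RBF) kernel; the function names and hyperparameter values are illustrative, not from any particular library.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    # Cholesky factorization for numerically stable solves.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

x_train = np.array([-2.0, 0.0, 1.5])
y_train = np.sin(x_train)
x_test = np.linspace(-3, 3, 7)
mu, var = gp_posterior(x_train, y_train, x_test)
```

The posterior variance is what makes this a reasoning tool rather than just a curve fitter: it collapses near observed points and grows away from them, which is exactly the kind of calibrated-uncertainty behaviour the tweet argues illuminates model construction more broadly.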


hehehe

Unitree G1 had a meltdown mid-performance.



😮🫤🤪🫥🤖

MASSIVE claim in this paper. AI Architectural breakthroughs can be scaled computationally, transforming research progress from a human-limited to a computation-scalable process. So it turns architecture discovery into a compute‑bound process, opening a path to…



make sure your expert makes mistakes and has to explore. there's been a lot of work around ensuring demonstrators have the same information as the robot, but this work shows it's super useful for the demonstrator to have less! super interesting.

Want robot imitation learning to generalize to new tasks? Blindfold your human demonstrator! Best robotics paper at EXAIT Workshop #ICML2025 openreview.net/forum?id=zqfT2… Wait, why does this make sense? Read below!



andrew's explanations are always lucid and insightful. recommend taking a look. deep nets have soft (but flexible) inductive biases preferring simple explanations, and he's able to characterize that rigorously, pulling out some decades-old theory. super cool.

Excited to be presenting my paper "Deep Learning is Not So Mysterious or Different" tomorrow at ICML, 11 am - 1:30 pm, East Exhibition Hall A-B, E-500. I made a little video overview as part of the ICML process (viewable from Chrome): recorder-v3.slideslive.com/#/share?share=…



hehe

A melon-sized cherry on top :)



science

TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…



good points. openai created llms at scale well before chatgpt, but chatgpt made them accessible and that was arguably more impactful. all the recent models have been technically spectacular, but making them universally accessible may again be felt (significantly) more strongly by…

A brief overview of GPT-5 GPT-5 could disappoint some and amaze many. It's a strange contradiction, but I'll try to explain. For “hardcore” users, GPT-5 will be a bit of a disappointment, if the rumors are to be believed. Rumor has it that Sam Altman is not particularly…


