
Joe Ortiz

@joeaortiz

@GoogleDeepMind researching world models | Ex @AIatMeta (FAIR), PhD @imperialcollege

Genie 3 is a real-time, interactive, and general world model! Excited to see it used for training agents. It's also really fun to play around with.

#Genie3 is a real, interactive, playable experience. We're having so much fun with it at work, in between meetings and during breaks. Here's @RuiqiGao, @joeaortiz, @ChrisWu6080 following a pack of polar bears through a New York City street! Check out more on the webpage:…



Joe Ortiz reposted

Genie 3 feels like a watershed moment for world models 🌐: we can now generate multi-minute, real-time interactive simulations of any imaginable world. This could be the key missing piece for embodied AGI… and it can also create beautiful beaches with my dog, playable real time


Introducing our new MBRL agent for Craftax-classic. Our agent is both SOTA and the first to exceed human expert reward! 🕹️ The method combines new techniques for learning and planning with transformer world models. See details in @antoine_dedieu's 🧵 arxiv.org/abs/2502.01591

Happy to share our new preprint “Improving Transformer World Models for Data-Efficient RL”: arxiv.org/abs/2502.01591 We propose a ladder of improvements to model-based RL and achieve for the first time a superhuman reward on the challenging Craftax-classic benchmark! 1/10



Robust visual + tactile perception is key for robot dexterity. In our new @SciRobotics paper, we use neural fields for in-hand reconstruction + pose estimation of novel objects. science.org/doi/10.1126/sc… See @Suddhu's awesome thread below


For robot dexterity, a missing piece is general, robust perception. Our new @SciRobotics work combines multimodal sensing with neural representations to perceive novel objects in-hand. 🎲 Featured on the cover of the November issue! #ScienceRoboticsResearch 🧵1/9



Check out our new paper where we learn multi-step diffusion world models and use them for planning with model predictive control! We're excited to see if these ideas can be applied to dexterous robot manipulation 🤖

Excited to share our new paper on "Diffusion Model Predictive Control" (D-MPC). Key idea: leverage diffusion models to learn a trajectory-level (not just single-step) world model to mitigate compounding errors when doing rollouts. arxiv.org/abs/2410.05364
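The compounding-error point is easy to see in a toy example. The sketch below is plain NumPy for illustration only, not the D-MPC implementation; the scalar dynamics, model bias, and horizon are made up. It compares rolling out a slightly biased single-step model (errors multiply across the horizon) against a model that predicts the whole trajectory jointly (each step incurs the per-step error once):

```python
import numpy as np

# Toy illustration (not D-MPC itself): why trajectory-level models can
# mitigate compounding errors versus single-step rollouts.
# True scalar dynamics: x_{t+1} = a * x_t.
a = 0.9
x0 = 1.0
H = 20

# A single-step model with a small multiplicative bias. Rolling it out
# compounds the bias, so error grows with the horizon.
a_hat = 0.95
single_step = [x0]
for _ in range(H):
    single_step.append(a_hat * single_step[-1])

# A "trajectory-level" model predicts every step of the horizon jointly,
# so each predicted state carries one application of the bias rather
# than t applications of it.
traj_level = [x0] + [(a ** t) * x0 * (a_hat / a) for t in range(1, H + 1)]

truth = [(a ** t) * x0 for t in range(H + 1)]
err_single = abs(single_step[-1] - truth[-1])
err_traj = abs(traj_level[-1] - truth[-1])
assert err_traj < err_single  # trajectory-level error does not compound
```

In the rollout, the per-step bias multiplies over the horizon; the trajectory-level predictor pays it once per predicted step, which is the intuition behind learning a trajectory-level world model.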



Joe Ortiz reposted

Can vision and language models be extended to include touch? Yes! We will present a new touch-vision-language dataset collected in the wild and Touch-Vision-Language Models (TVLMs) trained on this dataset at #ICML2024. 🙌 1/6 tactile-vlm.github.io


Humans heavily rely on estimating object-environment (extrinsic) contacts for many insertion tasks 🖐️ We show that estimating extrinsic contacts from gripper-object contact using tactile sensors results in more successful and efficient robot insertion policies!

🤔Are extrinsic contacts useful for manipulation policies? Neural Contact Fields estimate extrinsic contacts from touch. However, its utility in real-world tasks remains unknown. We improve NCF to enable sim-to-real transfer and use it to train policies for insertion tasks.



Joe Ortiz reposted

🔥Theseus 0.2.0 release is out! github.com/facebookresear… 🚀Brings it to PyTorch 2.0 🚀Introduces torchlie and torchkin - efficient standalone libs for differentiable Lie groups and kinematics. Plus more updates and a round-up of a few community research projects enabled by Theseus 🧵👇

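For anyone new to the idea, the core primitive a Lie-group library exposes is the exponential map from the tangent space to the group. Here is a minimal NumPy sketch of SO(3) exp via Rodrigues' formula, for illustration only; this is generic textbook math, not the torchlie API:

```python
import numpy as np

# Minimal sketch of the SO(3) exponential map (Rodrigues' formula),
# the kind of primitive a differentiable Lie-group library provides.
def hat(w):
    """Map a 3-vector to its skew-symmetric so(3) matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:            # small-angle fallback: first-order term
        return np.eye(3) + hat(w)
    W = hat(w)
    return (np.eye(3)
            + np.sin(theta) / theta * W
            + (1 - np.cos(theta)) / theta ** 2 * W @ W)

# Rotating the x-axis by 90 degrees about z sends it to the y-axis.
R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])
```

A library like torchlie implements this (and its derivatives) on batched tensors so the map can sit inside an autodiff graph.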

Joe Ortiz reposted

See you tomorrow at the Distributed Graph Algorithms for Robotics Workshop at #ICRA2023.

Our workshop on Distributed Graph Algorithms for Robotics at #ICRA2023 in London, May 29th, is accepting demo/poster submissions until May 2nd. Speakers: @fdellaert, @mhmukadam, @angelaschoellig, @MargaritaChli, @zzznah, @risi1979, @joeaortiz, @lazarox8 and just confirmed @rapideRobot.



Joe Ortiz reposted

Our workshop on Distributed Graph Algorithms for Robotics at #ICRA2023 in London, May 29th, is accepting demo/poster submissions until May 2nd. Speakers: @fdellaert, @mhmukadam, @angelaschoellig, @MargaritaChli, @zzznah, @risi1979, @joeaortiz, @lazarox8 and just confirmed @rapideRobot.

Scalable and resilient computation in robotics should be distributed, whether over many-robot graphs or within single chips. We present the new Workshop on Distributed Graph Algorithms for Robotics at #ICRA2023 in London sites.google.com/view/distribut…; please submit paper and demos!



Very happy to be co-organising this workshop and excited for the fantastic speakers! Should be some great discussions on learning in graphs, factor graphs, cellular automata and more. Please consider submitting a paper or demo!

Scalable and resilient computation in robotics should be distributed, whether over many-robot graphs or within single chips. We present the new Workshop on Distributed Graph Algorithms for Robotics at #ICRA2023 in London sites.google.com/view/distribut…; please submit paper and demos!



Thanks Andy, it's been great working with you on such exciting stuff throughout the PhD! Also thanks to @fdellaert @markvanderwilk for examining


Congrats to Joe on passing his PhD viva; thanks to examiners @fdellaert @markvanderwilk. Here's his awesome work on bundle adjustment via GBP for fast, distributed optimisation on a graph processor. One day everyone will do estimation like this, they just don't realise it yet :)
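For readers unfamiliar with GBP (Gaussian belief propagation): working in information form, each factor sends a variable a message by combining the other variables' incoming beliefs with the factor and marginalizing them out; on tree-structured problems this recovers the exact marginals. A toy two-variable sketch in plain NumPy, with a made-up precision value; this is the general idea, not the thesis code:

```python
import numpy as np

# Toy GBP sketch on a two-variable chain in information form (eta, Lambda):
# a prior on x1 and a relative measurement x2 - x1 = 1 with precision w.
w = 4.0                                 # measurement precision (assumed)

# Prior factor x1 ~ N(0, 1): information form eta = 0, Lambda = 1.
eta1_prior, lam1_prior = 0.0, 1.0

# Measurement factor: residual r = x2 - x1 - 1, Jacobian J = [-1, 1].
J = np.array([-1.0, 1.0])
Lam_f = w * np.outer(J, J)
eta_f = w * J * 1.0

# Message x1 -> factor is x1's belief excluding this factor (the prior).
Lam = Lam_f + np.diag([lam1_prior, 0.0])
eta = eta_f + np.array([eta1_prior, 0.0])

# Factor -> x2 message: marginalize x1 out of the combined information.
lam_msg = Lam[1, 1] - Lam[1, 0] / Lam[0, 0] * Lam[0, 1]
eta_msg = eta[1] - Lam[1, 0] / Lam[0, 0] * eta[0]
x2_mean_gbp = eta_msg / lam_msg

# Reference: solve the full 2x2 information system directly.
mu = np.linalg.solve(Lam, eta)
assert abs(x2_mean_gbp - mu[1]) < 1e-9
assert abs(x2_mean_gbp - 1.0) < 1e-9    # prior mean 0 shifted by offset 1
```

Because every message touches only one factor and its neighbours, the updates are local, which is what makes the scheme attractive for distributed hardware like graph processors.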


