Dinesh Jayaraman
@dineshjayaraman
Assistant Professor at University of Pennsylvania. Robot Learning. http://dineshjayaraman.bsky.social

Excited to share our recent progress on adapting pretrained VLAs to imbue them with in-context learning capabilities!

Robot AI brains, aka Vision-Language-Action models, cannot adapt to new tasks as easily as LLMs like Gemini, ChatGPT, or Grok. LLMs can adapt quickly with their in-context learning (ICL) capabilities. But can we inject ICL abilities into a pre-trained VLA like pi0? Yes!…
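For intuition, here is a minimal toy sketch of the in-context learning idea (every name below is my stand-in, not the paper's actual pipeline): demonstrations for the new task enter through the model's context at inference time, rather than through weight updates.

```python
# Hypothetical sketch, not the paper's pipeline: demos adapt a pretrained
# VLA through its context at inference time, with no weight updates.
import numpy as np

class StubVLA:
    """Stand-in for a pretrained VLA; a real one (e.g. pi0) is a large
    vision-language-action transformer."""
    def predict(self, context, obs):
        # Toy behavior: copy the action of the most similar context obs.
        dists = [np.linalg.norm(c["obs"] - obs) for c in context]
        return context[int(np.argmin(dists))]["action"]

def icl_act(model, demos, obs, max_demos=5):
    """Pack a few (obs, action) demo pairs into the context, then act."""
    context = [{"obs": o, "action": a} for o, a in demos[:max_demos]]
    return model.predict(context, obs)

# Usage: two demos for a new task, then act on a fresh observation.
demos = [(np.array([0.0, 0.0]), "push"), (np.array([1.0, 1.0]), "pull")]
print(icl_act(StubVLA(), demos, np.array([0.9, 1.1])))  # -> pull
```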



Vibe testing for robots. My students had a lot of fun helping with this effort!

We’re releasing the RoboArena today!🤖🦾 Fair & scalable evaluation is a major bottleneck for research on generalist policies. We’re hoping that RoboArena can help! We provide data, model code & sim evals for debugging! Submit your policies today and join the leaderboard! :) 🧵



A new demonstration of autonomous iterative design guided by a VLM (à la Eureka, DrEureka, Eurekaverse), this time for actual physical design of tools for manipulation. My personal tagline for this paper is: if you can't perform a task well, blame (and improve) your tools! ;)

💡Can robots autonomously design their own tools and figure out how to use them? We present VLMgineer 🛠️, a framework that leverages Vision Language Models with Evolutionary Search to automatically generate and refine physical tool designs alongside corresponding robot action…
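For readers curious about the recipe, here is a hedged toy sketch of the general VLM-plus-evolutionary-search loop the thread describes; the function names and the toy fitness below are invented for illustration, not VLMgineer's code.

```python
# Toy VLM-guided evolutionary search over tool geometries; all stand-ins.
import random

def vlm_propose_mutations(design, k=4):
    """Stand-in for querying a VLM to mutate a tool design.
    Here a design is just a dict of geometric parameters."""
    return [
        {p: v * random.uniform(0.8, 1.2) for p, v in design.items()}
        for _ in range(k)
    ]

def simulate_fitness(design):
    """Stand-in for a physics-sim rollout score (higher is better).
    Toy objective: hook length near 0.3 m, handle near 0.1 m."""
    return -abs(design["hook_len"] - 0.3) - abs(design["handle_len"] - 0.1)

def evolve_tool(generations=10, pop_size=8):
    population = [
        {"hook_len": random.uniform(0.1, 0.5),
         "handle_len": random.uniform(0.05, 0.3)}
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        population.sort(key=simulate_fitness, reverse=True)
        elites = population[: pop_size // 2]          # keep the best designs
        children = [c for e in elites for c in vlm_propose_mutations(e, k=2)]
        population = (elites + children)[:pop_size]
    return max(population, key=simulate_fitness)

print(evolve_tool())
```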



To Singapore for ICLR'25! A demonstration of the awesome power of simple retrieval-style biases (RAG-style) for generalizing across many wildly different robotics / game-playing tasks, with a single "generalist" agent.

Is scaling current agent architectures the most effective way to build generalist agents that can rapidly adapt? Introducing 👑REGENT👑, a generalist agent that can generalize to unseen robotics tasks and games via retrieval-augmentation and in-context learning.🧵
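A toy version of the retrieval-augmented idea (my own simplification, not REGENT's architecture): look up the stored states nearest the current one, and let the retrieved actions steer the decision.

```python
# Toy stand-ins for the retrieval-augmented idea; not REGENT's code.
import numpy as np

def retrieve(states, actions, query, k=3):
    """Return the k stored (state, action) pairs nearest the query state."""
    order = np.argsort(np.linalg.norm(states - query, axis=1))[:k]
    return states[order], actions[order]

def act(states, actions, query, k=3):
    """Crude 'in-context' readout: a distance-weighted vote over retrieved
    actions. A real agent would place the pairs in a transformer context."""
    near_s, near_a = retrieve(states, actions, query, k)
    weights = 1.0 / (np.linalg.norm(near_s - query, axis=1) + 1e-6)
    return int(np.argmax(np.bincount(near_a, weights=weights)))

# Usage: a buffer of 100 random 4-D states with discrete actions 0..3.
rng = np.random.default_rng(0)
S, A = rng.normal(size=(100, 4)), rng.integers(0, 4, size=100)
print(act(S, A, rng.normal(size=4)))
```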



Dinesh Jayaraman reposted

@ieeeras conferences like @ieee_ras_icra are not always fun to attend, and I often wonder whether to go despite seeing many friends there. This year, Tamim Asfour + @serena_ivaldi showed at both #ICRA@40 + @HumanoidsConf 2024 that we can do much better! Here are some key lessons:


Dinesh Jayaraman reposted

Introducing Eurekaverse 🌎, a path toward training robots in infinite simulated worlds! Eurekaverse is a framework for automatic environment and curriculum design using LLMs. This iterative method creates useful environments designed to progressively challenge the policy during…
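The iterative loop sketched below is a toy rendering of that idea, with invented stand-ins for the LLM call and the training step: the LLM sees how the policy fared and proposes a slightly harder environment, so difficulty ratchets up with skill.

```python
# Toy rendering of LLM-driven curriculum design; both functions are
# invented stand-ins, not Eurekaverse's implementation.

def llm_generate_env(prev_env, feedback):
    """Stand-in for prompting an LLM with the last environment and the
    policy's result, asking for a slightly harder (or easier) variant."""
    step = 0.1 if feedback == "solved" else -0.05
    return {"difficulty": max(0.0, prev_env["difficulty"] + step)}

def train_and_eval(policy_skill, env):
    """Stand-in for RL training plus evaluation in the generated env."""
    return "solved" if policy_skill >= env["difficulty"] else "failed"

env, skill = {"difficulty": 0.1}, 0.0
for _ in range(20):
    feedback = train_and_eval(skill, env)
    if feedback == "solved":
        skill += 0.08          # training on solvable envs grows skill
    env = llm_generate_env(env, feedback)
print(f"final difficulty reached: {env['difficulty']:.2f}")
```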


Dinesh Jayaraman reposted

Congratulations to my colleague @RajeevAlur on winning the 2024 Knuth Prize for foundational contributions to CS! sigact.org/prizes/knuth/c…


Dinesh Jayaraman reposted

How can large language models be used in robotics? @dineshjayaraman joins our “Babbage” podcast to explain how AI is helping make robots more capable econ.st/3RscW2l 🎧


Dinesh Jayaraman reposted

We are organizing a workshop on task specification at #RSS2024! Consider submitting your latest work to our workshop and attending!

Submit to our #RSS2024 workshop on “Robotic Tasks and How to Specify Them? Task Specification for General-Purpose Intelligent Robots” by June 12th. Join our discussion on what constitutes various task specifications for robots, in what scenarios they are most effective and more!



Had fun speaking with Alok and showing off our group's latest and greatest work for this podcast episode on the rise of robot learning. Have a listen! Work led by @JasonMa2020, @jianing_qian, @JunyaoShi.

Why are robots suddenly getting cleverer? This week on “Babbage” @alokjha explores how advances in AI are bringing about a renaissance in robotics: econ.st/3KNARoV 🎧



Dinesh Jayaraman reposted

Congratulations to the ROBO class of 2024! #GRASP #GRASPLab #ROBOGrad #ROBO24


Dinesh Jayaraman reposted

You enjoyed #ICRA2024 as much as we did? I may have another opportunity for you to meet and exchange ideas: the Conference on Robot Learning (#CoRL) will take place in Munich in November this year! corl.org/home **Deadline for papers: June 6, 2024**

What an amazing #ICRA2024 from the LSY team! We saw some incredible research from around the world and are excited to use what we learned in our future research 🤖 For a summary of our talks, with links to papers, videos, and code visit this doc: tiny.cc/icra2024talks



We need to carefully sense & process lots of info when we're novices at a skill, e.g. driving or tying shoelaces. Having learned it, we can do it "with our eyes closed." Ed has built an integrative framework for robots to also benefit from privileged training-time sensing!

Does the process of learning change the sensory requirements of a robot learner? If so, how? In our ICLR'24 spotlight (poster #208, Tuesday 4:30-6:30pm), we investigate the sensory needs of RL agents, and find that beginner policies benefit from more sensing during training!

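One way to picture the privileged-sensing idea (my framing, not the paper's exact method): train with a rich sensor set, then distill into a student that must act from the cheaper deployment sensors.

```python
# Toy illustration of privileged training-time sensing; my framing only.
import numpy as np

def full_obs(state):
    """Training-time observation: proprioception + privileged contact
    forces (this is what the teacher policy would consume)."""
    return np.concatenate([state["joints"], state["contact_forces"]])

def deployed_obs(state):
    """Deployment observation: proprioception only."""
    return state["joints"]

def distill_step(student, teacher_action, state, lr=1e-2):
    """One behavior-cloning step: the linear student, seeing only
    deployed_obs, regresses onto the privileged teacher's action."""
    x = deployed_obs(state)
    pred = student["W"] @ x
    student["W"] -= lr * np.outer(pred - teacher_action, x)
    return float(np.sum((pred - teacher_action) ** 2))  # squared error

# Usage: a 3-joint arm; the teacher's action came from full_obs upstream.
state = {"joints": np.array([0.1, -0.2, 0.3]),
         "contact_forces": np.array([1.5, 0.0])}
student = {"W": np.zeros((2, 3))}
print(distill_step(student, np.array([0.5, -0.1]), state))
```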


I was skeptical that we'd get our quadruped to turn circus performer and walk on a yoga ball. Turns out it can (mostly)! Check out Jason's thread for how he, together with our MS students Hungju Wang and undergrads Will Liang and Sam Wang, managed to train this skill, and others!

Introducing DrEureka🎓, our latest effort pushing the frontier of robot learning using LLMs! DrEureka uses LLMs to automatically design reward functions and tune physics parameters to enable sim-to-real robot learning. DrEureka can propose effective sim-to-real configurations…
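A hedged sketch of the overall LLM-in-the-loop recipe: the LLM proposes reward weightings and domain-randomization ranges, simulation scores them, and the history feeds the next proposal. Every function below is a made-up stand-in, not DrEureka's implementation.

```python
# Made-up stand-ins for LLM-proposed reward + physics configs.
import random

def llm_propose_config(history):
    """Stand-in for prompting an LLM with past (config, score) pairs;
    a real system would condition on `history` instead of sampling."""
    return {
        "upright_weight": random.uniform(0.5, 2.0),
        "velocity_weight": random.uniform(0.0, 1.0),
        "friction_range": sorted(random.uniform(0.2, 1.2) for _ in range(2)),
    }

def train_in_sim(config):
    """Stand-in for RL training; returns a sim-to-real proxy score that
    rewards wide randomization and a sensible reward weighting."""
    spread = config["friction_range"][1] - config["friction_range"][0]
    return (config["upright_weight"] + 0.5 * spread
            - abs(config["velocity_weight"] - 0.3))

history = []
for _ in range(10):
    cfg = llm_propose_config(history)
    history.append((cfg, train_in_sim(cfg)))
best_cfg, best_score = max(history, key=lambda h: h[1])
print(best_cfg, round(best_score, 2))
```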



Dinesh Jayaraman reposted

After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset” DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices


Dinesh Jayaraman reposted

will it nerf? yep ✅ congrats to @_tim_brooks @billpeeb and colleagues, absolutely incredible results!!

Sora is our first video generation model - it can create HD videos up to 1 min long. AGI will be able to simulate the physical world, and Sora is a key step in that direction. thrilled to have worked on this with @billpeeb at @openai for the past year openai.com/sora



Our lab's work was featured in local news coverage of Penn's new BSE degree in AI: fox29.com/news/universit… Including cool live demos of @JasonMa2020's latest trick, a quadruped dog walking on a yoga ball!


Dinesh Jayaraman reposted

Are you a #computervision PhD student close to graduating? Consider participating in the #CVPR2024 Doctoral Consortium event. See link for eligibility details and application: cvpr.thecvf.com/Conferences/20…

