
Dinesh Jayaraman
@dineshjayaraman
Assistant Professor at University of Pennsylvania. Robot Learning. http://dineshjayaraman.bsky.social
What you might like
Excited to share our recent progress on adapting pretrained VLAs to imbue them with in-context learning capabilities!
Robot AI brains, aka Vision-Language-Action models, cannot adapt to new tasks as easily as LLMs like Gemini, ChatGPT, or Grok. LLMs can adapt quickly with their in-context learning (ICL) capabilities. But can we inject ICL abilities into a pre-trained VLA like pi0? Yes!…
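To make the ICL idea concrete: the rough pattern is to condition a frozen policy on a handful of demonstration (observation, action) pairs at inference time, with no weight updates. Below is a minimal, hedged sketch of that pattern; the name `icl_policy` and the soft-attention readout are illustrative stand-ins, not the adapted VLA described in the post.

```python
# Minimal, illustrative sketch (not the paper's method): an "in-context"
# policy that conditions on a few demonstration (observation, action) pairs
# at inference time, without any weight updates.
import numpy as np

def icl_policy(context_obs, context_act, query_obs, temperature=0.1):
    """Predict an action for `query_obs` by soft-attending over the demos."""
    # Cosine similarity between the query and each context observation.
    sims = context_obs @ query_obs / (
        np.linalg.norm(context_obs, axis=1) * np.linalg.norm(query_obs) + 1e-8
    )
    # Softmax attention weights over the in-context demonstrations.
    w = np.exp(sims / temperature)
    w /= w.sum()
    # Weighted combination of the demonstrated actions.
    return w @ context_act

# Toy usage: 5 demo pairs with 8-D observations and 3-D actions.
rng = np.random.default_rng(0)
demo_obs, demo_act = rng.normal(size=(5, 8)), rng.normal(size=(5, 3))
print(icl_policy(demo_obs, demo_act, rng.normal(size=8)))
```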
Vibe testing for robots. My students had a lot of fun helping with this effort!
We’re releasing the RoboArena today!🤖🦾 Fair & scalable evaluation is a major bottleneck for research on generalist policies. We’re hoping that RoboArena can help! We provide data, model code & sim evals for debugging! Submit your policies today and join the leaderboard! :) 🧵
A new demonstration of autonomous iterative design guided by a VLM (a la Eureka, DrEureka, Eurekaverse), this time for actual physical design of tools for manipulation. My personal tagline for this paper is: if you can't perform a task well, blame (and improve) your tools! ;)
💡Can robots autonomously design their own tools and figure out how to use them? We present VLMgineer 🛠️, a framework that leverages Vision Language Models with Evolutionary Search to automatically generate and refine physical tool designs alongside corresponding robot action…
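The general pattern here, as I read the post, is an evolutionary loop in which a VLM proposes edits to a tool design and a simulator scores each candidate on the manipulation task. The sketch below illustrates only that control flow; `propose_variants` stands in for the VLM call and `simulate_task` for a physics rollout, and neither reflects VLMgineer's actual code.

```python
# Rough sketch of a VLM-in-the-loop evolutionary search over tool designs.
import random

def propose_variants(parent_design, n=4):
    """Stand-in for a VLM proposing edited tool designs."""
    return [{k: v + random.gauss(0, 0.05) for k, v in parent_design.items()}
            for _ in range(n)]

def simulate_task(design):
    """Stand-in fitness: how well the tool performs the manipulation task."""
    return -abs(design["hook_length"] - 0.3) - abs(design["handle_angle"] - 0.6)

def evolve(initial_design, generations=10):
    best, best_fit = initial_design, simulate_task(initial_design)
    for _ in range(generations):
        for cand in propose_variants(best):
            fit = simulate_task(cand)
            if fit > best_fit:  # keep the best design found so far
                best, best_fit = cand, fit
    return best

print(evolve({"hook_length": 0.1, "handle_angle": 0.2}))
```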
To Singapore for ICLR'25! A demonstration of the awesome power of simple retrieval-style biases (RAG-style) for generalizing across many wildly different robotics / game-playing tasks, with a single "generalist" agent.
Is scaling current agent architectures the most effective way to build generalist agents that can rapidly adapt? Introducing 👑REGENT👑, a generalist agent that can generalize to unseen robotics tasks and games via retrieval-augmentation and in-context learning.🧵
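The retrieval-augmentation idea is simple to sketch: given the current observation, fetch the nearest (state, action) pairs from a pooled multi-task demonstration buffer and hand them to the agent as in-context examples. The snippet below is a toy version of that retrieval step with hypothetical names; it is not REGENT's actual interface.

```python
# Hedged sketch of retrieval-augmented acting: fetch nearby demonstrations
# from a multi-task buffer and use them as context for the agent.
import numpy as np

def retrieve_context(buffer_states, buffer_actions, obs, k=4):
    """Return the k demonstrations whose states are closest to `obs`."""
    dists = np.linalg.norm(buffer_states - obs, axis=1)
    idx = np.argsort(dists)[:k]
    return buffer_states[idx], buffer_actions[idx]

# Toy buffer pooled across many tasks: 1000 states (16-D), discrete actions.
rng = np.random.default_rng(1)
states = rng.normal(size=(1000, 16))
actions = rng.integers(0, 6, size=1000)

ctx_states, ctx_actions = retrieve_context(states, actions, rng.normal(size=16))
# Simplest retrieval-based baseline: act like the single nearest neighbor.
print("nearest-neighbor action:", ctx_actions[0])
```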
@ieeeras conferences like @ieee_ras_icra are not always fun to attend, and I often wonder whether to go despite seeing many friends there. This year, Tamim Asfour + @serena_ivaldi showed in both #ICRA@40 + @HumanoidsConf 2024 that we can do much better! Here are some key lessons:
Introducing Eurekaverse 🌎, a path toward training robots in infinite simulated worlds! Eurekaverse is a framework for automatic environment and curriculum design using LLMs. This iterative method creates useful environments designed to progressively challenge the policy during…
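As a loose illustration of the iterative loop described above: an LLM proposes new environment configurations, rollouts estimate how well the current policy does on each, and training continues on environments that are challenging but still solvable. The stub below captures only that control flow; `llm_propose_envs` and `evaluate_policy` are hypothetical placeholders, not Eurekaverse code.

```python
# Sketch of LLM-driven curriculum generation: propose envs, score the policy,
# keep training on environments of moderate (challenging but solvable) success.
import random

def llm_propose_envs(history, n=3):
    """Stand-in for an LLM proposing new terrain difficulty parameters."""
    base = history[-1]["difficulty"] if history else 0.1
    return [{"difficulty": base + random.uniform(0.0, 0.3)} for _ in range(n)]

def evaluate_policy(env):
    """Stand-in rollout: success rate drops as difficulty grows."""
    return max(0.0, 1.0 - env["difficulty"]) + random.uniform(-0.05, 0.05)

history = []
for round_idx in range(5):
    candidates = llm_propose_envs(history)
    scored = [(abs(evaluate_policy(e) - 0.6), e) for e in candidates]
    chosen = min(scored, key=lambda t: t[0])[1]
    history.append(chosen)
    print(f"round {round_idx}: train on difficulty {chosen['difficulty']:.2f}")
```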
Congratulations to my colleague @RajeevAlur on winning the 2024 Knuth Prize for foundational contributions to CS! sigact.org/prizes/knuth/c…
How can large language models be used in robotics? @dineshjayaraman joins our “Babbage” podcast to explain how AI is helping make robots more capable econ.st/3RscW2l 🎧
We are organizing a workshop on task specification at #RSS2024! Consider submitting your latest work to our workshop and attending!
Submit to our #RSS2024 workshop on “Robotic Tasks and How to Specify Them? Task Specification for General-Purpose Intelligent Robots” by June 12th. Join our discussion on what constitutes various task specifications for robots, in which scenarios they are most effective, and more!

Had fun speaking with Alok and showing off our group's latest and greatest work for this podcast episode on the rise of robot learning. Have a listen! Work led by @JasonMa2020, @jianing_qian, @JunyaoShi.
Why are robots suddenly getting cleverer? This week on “Babbage” @alokjha explores how advances in AI are bringing about a renaissance in robotics: econ.st/3KNARoV 🎧
Did you enjoy #ICRA2024 as much as we did? I may have another opportunity for you to meet and exchange ideas: the Conference on Robot Learning (#CoRL) will take place in Munich in November this year! corl.org/home ** Deadline for papers: June 6, 2024**
What an amazing #ICRA2024 from the LSY team! We saw some incredible research from around the world and are excited to use what we learned in our future research 🤖 For a summary of our talks, with links to papers, videos, and code visit this doc: tiny.cc/icra2024talks

We need to carefully sense & process lots of info when we're novices at a skill, e.g. driving or tying shoelaces. Having learned it, we can do it "with our eyes closed." Ed has built an integrative framework for robots to also benefit from privileged training-time sensing!
Does the process of learning change the sensory requirements of a robot learner? If so, how? In our ICLR'24 spotlight, (poster #208, Tuesday 4:30-6:30pm), we investigate the sensory needs of RL agents, and find that beginner policies benefit from more sensing during training!
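One common way to exploit privileged training-time sensing, shown here purely as an illustration of the general idea rather than the paper's specific framework, is teacher-student distillation: train a teacher with full sensing, then fit a student that imitates it using only deployment-time sensors. A minimal numpy sketch with made-up dimensions:

```python
# Illustrative teacher-student distillation with privileged training sensing.
import numpy as np

rng = np.random.default_rng(2)

# Privileged observations (e.g., ground-truth state) and deployment sensors.
priv_obs = rng.normal(size=(500, 12))        # available only during training
deploy_obs = priv_obs[:, :4] + 0.1 * rng.normal(size=(500, 4))  # noisy subset

# "Teacher": a linear policy acting on privileged observations.
W_teacher = rng.normal(size=(12, 2))
teacher_actions = priv_obs @ W_teacher

# "Student": fit a policy on deployment-time sensors to imitate the teacher,
# so no privileged sensing is needed at test time.
W_student, *_ = np.linalg.lstsq(deploy_obs, teacher_actions, rcond=None)
student_actions = deploy_obs @ W_student
print("imitation error:", np.mean((student_actions - teacher_actions) ** 2))
```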

I was skeptical that we'd get our quadruped to turn circus performer and walk on a yoga ball. Turns out it can (mostly)! Check out Jason's thread for how he, together with our MS student Hungju Wang and undergrads Will Liang and Sam Wang, managed to train this skill, and others!
Introducing DrEureka🎓, our latest effort pushing the frontier of robot learning using LLMs! DrEureka uses LLMs to automatically design reward functions and tune physics parameters to enable sim-to-real robot learning. DrEureka can propose effective sim-to-real configurations…
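To give a flavor of the two artifacts the post says the LLM produces, here is a hypothetical example of a candidate reward function plus a set of domain-randomization ranges for sim-to-real training. Both bodies below are made-up stand-ins, not actual DrEureka outputs.

```python
# Illustrative (hypothetical) LLM-proposed reward function and
# domain-randomization ranges for sim-to-real robot learning.
import random

def candidate_reward(forward_velocity, torque_penalty, upright_bonus):
    """Example of what an LLM-proposed reward function might look like."""
    return 1.5 * forward_velocity - 0.01 * torque_penalty + 0.5 * upright_bonus

# Example LLM-proposed physics parameter ranges for randomized training.
dr_ranges = {
    "ground_friction": (0.4, 1.2),
    "motor_strength_scale": (0.8, 1.1),
    "payload_mass_kg": (0.0, 1.0),
}

def sample_physics(ranges):
    """Sample one randomized physics configuration for a training episode."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

print(sample_physics(dr_ranges))
print(candidate_reward(forward_velocity=0.9, torque_penalty=12.0, upright_bonus=1.0))
```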
After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset” DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices
will it nerf? yep ✅ congrats to @_tim_brooks @billpeeb and colleagues, absolutely incredible results!!
Sora is our first video generation model - it can create HD videos up to 1 min long. AGI will be able to simulate the physical world, and Sora is a key step in that direction. thrilled to have worked on this with @billpeeb at @openai for the past year openai.com/sora
Our lab's work was featured in local news coverage of Penn's new BSE degree in AI: fox29.com/news/universit… Including cool live demos of @JasonMa2020's latest trick, a quadruped dog walking on a yoga ball!
Are you a #computervision PhD student close to graduating? Consider participating in the #CVPR2024 Doctoral Consortium event. See link for eligibility details and application: cvpr.thecvf.com/Conferences/20…