Jinghuan Shang

@jsfiredrice

Research Scientist @ The AI Institute. Making better brains for robots. #Robotics.

Pinned

#CoRL2024 accepted!! Theia: Distilling Diverse Vision Foundation Models for Robot Learning. Theia is smaller but more powerful than off-the-shelf vision models on robotic tasks, and it can generate the features of SAM and DINOv2! Code and demo: theia.theaiinstitute.com Thanks to my co-authors!

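To make the distillation idea concrete: a rough, unofficial sketch of multi-teacher feature distillation in PyTorch. This is not Theia's actual code; the class names, feature dimensions, backbone, and loss below are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTeacherStudent(nn.Module):
    """Compact student that regresses the features of several frozen teachers."""
    def __init__(self, backbone: nn.Module, embed_dim: int, teacher_dims: dict):
        super().__init__()
        self.backbone = backbone  # small encoder returning (B, embed_dim)
        # One linear "translation head" per teacher feature space.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(embed_dim, dim) for name, dim in teacher_dims.items()}
        )

    def forward(self, images):
        shared = self.backbone(images)
        return {name: head(shared) for name, head in self.heads.items()}

def distill_loss(student_out, teacher_feats):
    # Sum of per-teacher regression losses against frozen teacher features.
    return sum(
        nn.functional.smooth_l1_loss(student_out[n], teacher_feats[n])
        for n in teacher_feats
    )

# Toy usage with stand-in shapes (the SAM/DINOv2 dims here are made up).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 384))
student = MultiTeacherStudent(backbone, 384, {"sam": 256, "dinov2": 768})
imgs = torch.randn(2, 3, 224, 224)
fake_teachers = {"sam": torch.randn(2, 256), "dinov2": torch.randn(2, 768)}
loss = distill_loss(student(imgs), fake_teachers)
```

After training, the shared backbone serves as the compact representation for policy learning, while the per-teacher heads let the student reproduce each teacher's feature space on demand.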

Jinghuan Shang reposted

Loving the energy at @corl_conf 2025! The largest CoRL ever, with more than 2,400 in-person participants!


Honored to be part of the great #CoRL2025 organizing team, and I hope everyone enjoys @corl_conf!


Jinghuan Shang reposted

#CoRL2025 Sponsor exhibitions corl.org/program/exhibi… We are immensely grateful to our sponsors for being the driving force behind #CoRL2025.


Jinghuan Shang reposted

✈ Two days to go for #CoRL2025! Some tips for participants:
Venue information: corl.org/attending/venu…
Local tips: corl.org/attending/loca…
Map for the main program: corl.org/program/main-c…
Map for workshops: corl.org/program/worksh…
The registration kiosk opens at 7 AM. Safe travels!


Jinghuan Shang reposted

🎤 Free K-Pop Concert Tickets for #CoRL2025 Attendees! The Yeongdong-daero K-POP Concert will be right next to our venue on Sep 27, from 7-9 PM. We have 300 complimentary tickets available at the registration desk starting at 12 PM on Sep 27, on a first-come, first-served basis.


Jinghuan Shang reposted

New tricks loading ...


Jinghuan Shang reposted

Reinforcement learning is used to speed up the development of behaviors for the @BostonDynamics Atlas humanoid robot. At the heart of the learning process is a physics-based simulator that generates training data for a variety of maneuvers.
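As a hedged illustration of the general pattern described above (not Boston Dynamics' actual pipeline), here is a minimal rollout loop that collects transitions from a physics-based humanoid simulator; real systems use custom simulators and RL algorithms such as PPO rather than the random policy below.

```python
import gymnasium as gym

# Requires gymnasium with the MuJoCo extra; "Humanoid-v4" is a standard
# physics-simulated humanoid, standing in for a production simulator.
env = gym.make("Humanoid-v4")
obs, _ = env.reset(seed=0)
transitions = []
for _ in range(1000):
    action = env.action_space.sample()  # a trained policy would act here
    next_obs, reward, terminated, truncated, _ = env.step(action)
    transitions.append((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```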


Jinghuan Shang reposted

Unitree H1: Humanoid Robot Makes Its Debut at the Spring Festival Gala 🥰 Hello everyone, let me introduce myself again. I am Unitree H1 "Fuxi". I am now a comedian at the Spring Festival Gala, hoping to bring joy to everyone. Let’s push boundaries every day and shape the future…


Jinghuan Shang reposted

Introducing Theia, a vision foundation model for robotics developed by our team at the Institute. By using off-the-shelf vision foundation models as a basis, Theia generates rich visual representations for robot policy learning at a lower computation cost. theaiinstitute.com/news/theia


Jinghuan Shang reposted

I am extremely pleased to announce that CoRL 2025 will be in Seoul, Korea! The organizing team includes myself and @gupta_abhinav_ as general chairs, and @JosephLim_AI, @songshuran, and Hae-Won Park (KAIST) as program chairs.


Jinghuan Shang reposted

Our team is presenting work at the Conference on Robot Learning, @corl_conf, in Munich, Germany this week! Learn more about our accepted research — theaiinstitute.com/news/corl-roun…


Jinghuan Shang reposted

Our team has arrived in Munich, and we're thrilled to present this work at the LangRob Workshop @ #CoRL2024 as a spotlight presentation on the morning of Nov. 9. Stay tuned!

🚀 Excited to share our latest project: LLaRA - Supercharging Robot Learning Data for Vision-Language Policy! 🤖✨ We create a framework to turn robot expert trajectories into conversation-style data and other auxiliary data for instruction tuning. More details to come! (1/N)

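To sketch what "conversation-style data" from robot trajectories might look like: a hypothetical converter in the spirit of the tweet's description. The schema, field names, and helper below are invented for illustration, not LLaRA's actual format.

```python
# Hypothetical sketch: turn one expert trajectory into instruction-tuning
# conversation turns (user shows an image and asks; assistant answers with
# the expert action).

def trajectory_to_conversation(instruction: str, steps: list) -> list:
    """steps: [{"image": path, "action": [x, y, ...]}, ...]"""
    conversation = []
    for t, step in enumerate(steps):
        conversation.append({
            "role": "user",
            "content": f"<image> Step {t}: {instruction} What action should the robot take?",
            "image": step["image"],
        })
        conversation.append({
            "role": "assistant",
            "content": f"Action: {step['action']}",
        })
    return conversation

example = trajectory_to_conversation(
    "pick up the red block",
    [{"image": "frame_000.png", "action": [0.12, -0.30]}],
)
```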


Jinghuan Shang reposted

Introducing AdaCache, a training-free inference acceleration method for video DiTs. It allocates compute tailored to each video generation, maximizing the quality-latency trade-off. Project page: adacache-dit.github.io Code: github.com/AdaCache-DiT/A… arXiv: arxiv.org/pdf/2411.02397
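A rough, unofficial illustration of the step-to-step caching idea behind this kind of acceleration: reuse a transformer block's output across denoising steps when its input has barely changed. AdaCache's actual criteria and per-video schedule differ; the wrapper and tolerance below are my assumptions.

```python
import torch

class CachedBlock(torch.nn.Module):
    """Wraps a block and reuses its cached output for near-identical inputs."""
    def __init__(self, block: torch.nn.Module, tol: float = 0.05):
        super().__init__()
        self.block, self.tol = block, tol
        self.cached_in, self.cached_out = None, None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.cached_in is not None:
            # Relative change of the input since the last computed step.
            change = (x - self.cached_in).norm() / self.cached_in.norm()
            if change < self.tol:
                return self.cached_out  # skip the block entirely
        out = self.block(x)
        self.cached_in, self.cached_out = x.detach(), out.detach()
        return out

blk = CachedBlock(torch.nn.Linear(16, 16))
x = torch.randn(4, 16)
y1 = blk(x)          # computed
y2 = blk(x + 1e-4)   # nearly identical input: cached output is reused
```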



Jinghuan Shang reposted

We will present our #COLM2024 paper, Does RoBERTa Perform Better than BERT in Continual Learning: An Attention Sink Perspective, on Monday, 11:00 AM-1:00 PM, at Poster Area #20. Please stop by if you are interested! Paper: openreview.net/pdf?id=VHhwhmt… Code: github.com/StonyBrookNLP/…

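One common way to probe the "attention sink" phenomenon the title refers to is to measure how much attention mass all positions place on the first token. A minimal sketch using Hugging Face transformers follows; this is my illustration, not the authors' code, and the model choice is arbitrary.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

inputs = tokenizer("Robots learn continually.", return_tensors="pt")
with torch.no_grad():
    # Tuple with one (batch, heads, seq, seq) attention map per layer.
    attentions = model(**inputs).attentions

# Average attention that all positions pay to token 0 ([CLS]), per layer.
sink_mass = [layer[0, :, :, 0].mean().item() for layer in attentions]
print(sink_mass)
```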

Jinghuan Shang reposted

Atlas doing a quick warm-up before work.


Want to know about visual instruction tuning for robotics? 🤖 Check out our latest work, LLaRA.

🚀 Excited to share our latest project: LLaRA - Supercharging Robot Learning Data for Vision-Language Policy! 🤖✨ We create a framework to turn robot expert trajectories into conversation-style data and other auxiliary data for instruction tuning. More details to come! (1/N)



#NeurIPS2023 🔔 Wondering whether an agent can learn how to see 👀 to help it act 🦾? Come see our #ActiveVision #RL work with great potential!
Time: Thursday, Dec 14, 10:45-12:45 CST
Venue: Great Hall & Hall B1+B2 (Level 1), #1501
Everything: elicassion.github.io/sugarl/sugarl.…
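To gesture at the active-vision idea in this poster: a policy that jointly outputs a motor action (what to do) and a gaze action (where to look next). The architecture, names, and dimensions below are generic illustrations, not the paper's exact model.

```python
import torch
import torch.nn as nn

class ActiveVisionPolicy(nn.Module):
    """Two-head policy: one head acts, the other chooses where to look."""
    def __init__(self, obs_dim: int, n_motor: int, n_gaze: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.motor_head = nn.Linear(128, n_motor)  # what to do
        self.gaze_head = nn.Linear(128, n_gaze)    # where to look next

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.motor_head(h), self.gaze_head(h)

policy = ActiveVisionPolicy(obs_dim=64, n_motor=4, n_gaze=9)
motor_logits, gaze_logits = policy(torch.randn(1, 64))
```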


Jinghuan Shang reposted

What if you had four innocent-looking images that, when combined, made a new secret image? Or two images that, when rotated to different angles, produce entirely different scenes? We use diffusion models to do this, with physical images! Read the thread for more details!
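A rough sketch of the multi-view diffusion trick this thread describes: at each denoising step, predict noise in every transformed "view" of the image, map the predictions back, and average them, so the final image reads differently under each transform. The denoiser below is a stub and the update rule is simplified; a real pipeline would use a pretrained diffusion model and a proper sampler.

```python
import torch

def denoise(x: torch.Tensor, t: int) -> torch.Tensor:
    # Placeholder noise prediction; a real model would go here.
    return torch.zeros_like(x)

views = [
    (lambda x: x, lambda x: x),                  # identity view
    (lambda x: torch.rot90(x, 1, (-2, -1)),      # rotate 90 degrees...
     lambda x: torch.rot90(x, -1, (-2, -1))),    # ...and its inverse
]

x = torch.randn(1, 3, 64, 64)
for t in reversed(range(50)):
    # Predict noise in each view, map back to the base frame, and average.
    eps = torch.stack([inv(denoise(fwd(x), t)) for fwd, inv in views]).mean(0)
    x = x - 0.02 * eps  # simplified update; real samplers follow a schedule
```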

