MITRobotics's profile picture.

MITRobotics

@MITRobotics

We are the Interactive Robotics Group at MIT, a part of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Aero/Astro Department.

Pinned Tweet

.@julie_a_shah on the #futureofwork in @NYTmag: how can robots work with (not replace) people to make lives better? mobile.nytimes.com/2017/02/23/mag…


MITRobotics reposted

.@MIT_CSAIL PhD candidate Felix Yanwei Wang is in the final year of his program working with the lab’s Interactive Robotics Group, researching robot learning, specifically inference-time policy alignment through human interactions. Read more about Felix: bit.ly/445dEtA


MITRobotics reposted

Want your robot to clean the kitchen your way? 🧹✨ 🔗yanweiw.github.io/itps/ Introducing Inference-Time Policy Steering: a training-free method that lets you specify where and how to manipulate objects, so you can guide non-interactive policies to align with your preferences!


MITRobotics reposted

Excited to present our #NeurIPS2024 Oral talk! 🚀 Enhancing Preference-based Linear Bandits via Human Response Time Coffee or tea? If you choose instantly, you likely have a strong preference. How can AI leverage this psychological insight to better learn human preferences?…


Excited to share our new work: Enhancing Preference-based Linear Bandits via Human Response Time ⏱️🤖 @edgeyyzhang, Zhaolin Ren, Prof. Na Li, @ClaireYLiang, Prof. @julie_a_shah 👉 arxiv.org/abs/2409.05798 We show that human response times provide information about human…



MITRobotics reposted

Announcing Versatile Demonstration Interface (VDI) – a tool for collaborative robots that makes it easier to collect task demonstrations using three common Learning from Demonstration approaches.


Talk to Serena at AAAI about how standard reward tuning via trial-and-error has many lurking dangers!

Excited to share our new work: Enhancing Preference-based Linear Bandits via Human Response Time ⏱️🤖 @edgeyyzhang, Zhaolin Ren, Prof. Na Li, @ClaireYLiang, Prof. @julie_a_shah 👉 arxiv.org/abs/2409.05798 We show that human response times provide information about human…



Neat work combining ideas from formal methods, stable control policies, and imitation learning!

How to guarantee successful imitation of multi-step tasks despite arbitrary perturbations? 1-2 demos + a logic formula of task specification. See our #CoRL2022 oral talk today at 4:30p! Paper: yanweiw.github.io/tli (with @robo_kween @shenli_robotics @ankitjs @julie_a_shah)



MITRobotics reposted

Before everyone flees twitter... new paper coming out at NeurIPS! Humans compress meanings into complexity limited discrete representations (words). Can neural nets learn similar communication? Yes! (1/7)


MITRobotics reposted

Super grateful for this chance to continue exciting *interdisciplinary* research. Thanks to my advisor, @julie_a_shah, but also so many collaborators from other departments (@roger_p_levy and @NogaZaslavsky) and inspiring labmates and researchers.

Amazon and @MIT_SCC announced their first set of Amazon Fellows as part of their Science Hub, which aims to expand participation in AI, robotics, and other fields. They will receive funding to conduct independent research projects at MIT. Meet the fellows. #MachineLearning



MITRobotics reposted

New journal paper on Latent Space Alignment! Neural agents learn latent representation spaces, but often each agent learns its own idiosyncratic space. How can we align those spaces among agents, or even with humans? tandfonline.com/doi/full/10.10…

tandfonline.com

Latent Space Alignment Using Adversarially Guided Self-Play

We envision a world in which robots serve as capable partners in heterogeneous teams composed of other robots or humans. A crucial step towards such a world is enabling robots to learn to use the s...


MITRobotics reposted

Happy to announce... Well, this paper didn't get in, but I still think it's neat. Using the same probe-based method for testing if language models use representations of syntax, we can "fix" RL agent perception (e.g., notice an oncoming car): arxiv.org/abs/2201.12938


MITRobotics reposted

It's Monday morning, which means it's the perfect time to start thinking about information theory, brains, and neural nets! Submit to #InfoCog2022 @NeurIPSConf Organized by @NogaZaslavsky @gershbrain @sepalmerNeuro @C4COMPUTATION sites.google.com/view/infocog-n…


MITRobotics reposted

📣 Very excited to announce our in-person #NeurIPS2022 workshop on Information-Theoretic Principles in Cognitive Systems! Check out our lineup of invited speakers and CFP, submit short papers by September 19 sites.google.com/view/infocog-n… #InfoCog2022 @NeurIPSConf


MITRobotics reposted

New paper (to appear in ICML)! Using a new prototype-based classifier, we show how notions of fair and hierarchical classification are tightly related, and how we can directly control "concept relationships" to switch between modes.


MITRobotics reposted

What a great start to @ieee_ras_icra. The #icra2022 workshop on #cobots and @workofthefuture organised by @robo_kween, @julie_a_shah, Chris Fourie, Ben Armstrong & co-organisers was wonderful! sites.google.com/view/icra22ws-…

