
Michigan SLED Lab

@SLED_AI

Situated Language and Embodied Dialogue (SLED) research lab at @michigan_AI, led by Joyce Chai.

Michigan SLED Lab reposted

Here's how to babysit a language model from scratch! Research by @ziqiao_ma, Zekun Wang & Joyce Chai shows that interactive language learning with teacher demonstrations and student trials can facilitate efficient word learning in language models: youtube.com/watch?v=uBrXEo…

[LLMCog@ICML2024] Babysit A Language Model From Scratch (youtube.com)
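The demonstration-and-trial loop described above lends itself to a compact illustration. Below is a minimal, hypothetical sketch of such an interactive word-learning loop; the `student`/`teacher` objects and all their method names are assumptions for illustration, not the paper's actual API.

```python
# Hypothetical sketch of interactive word learning: alternate teacher
# demonstrations (supervised updates) with student trials that receive
# corrective feedback. All names are illustrative, not the paper's API.
import random

def train_interactively(student, teacher, words, n_rounds=1000):
    for _ in range(n_rounds):
        word = random.choice(words)
        # Teacher demonstration: the student learns from a grounded usage example.
        context, utterance = teacher.demonstrate(word)
        student.supervised_update(context, utterance)
        # Student trial: the student attempts a usage; the teacher evaluates it.
        trial_context = teacher.new_context(word)
        attempt = student.generate(trial_context)
        feedback = teacher.evaluate(word, trial_context, attempt)
        student.feedback_update(trial_context, attempt, feedback)
    return student
```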


Michigan SLED Lab reposted

🚨 Excited to share SketchVerify — a framework that scales trajectory planning for video generation. ➡️ Sketch-level motion previews let us search dozens of trajectory candidates instantly — without paying the cost of the time-consuming diffusion process. ➡️ A multimodal…
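To make the search-then-render idea concrete, here is a hedged sketch: rank many cheap sketch-level motion previews with a verifier, then pay the diffusion cost only once, for the winning candidate. Every function name here (`sample_candidate`, `render_sketch`, `verifier`, `render_video`) is a hypothetical placeholder, not SketchVerify's interface.

```python
# Illustrative sketch of search-then-render trajectory planning: score many
# cheap sketch-level previews, run the expensive diffusion pass only once.
def plan_trajectory(scene, sample_candidate, render_sketch, verifier,
                    render_video, n_candidates=32):
    best_traj, best_score = None, float("-inf")
    for _ in range(n_candidates):
        traj = sample_candidate(scene)        # cheap trajectory proposal
        preview = render_sketch(scene, traj)  # fast sketch-level motion preview
        score = verifier(scene, preview)      # e.g., a multimodal judge
        if score > best_score:
            best_traj, best_score = traj, score
    return render_video(scene, best_traj)     # costly diffusion, run once
```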


Michigan SLED Lab reposted

Will be at #NeurIPS2025 (San Diego) Dec 1-9, then in the Bay Area until the 14th. Hmu if you wanna grab coffee and talk about totally random stuff. Thread with a few things I’m excited about. P.S. 4 NeurIPS papers all started pre-May 2024 and took ~1 year of polishing...so…


Michigan SLED Lab reposted

Still wrapping up a few reality-check experiments and polishing the tutorial structure ... but we're excited! P.S. Sadly the ARC-AGI team can't join the tutorial panel this time due to a scheduling conflict, but they’ll be with us at the @LAW2025_NeurIPS later in the NeurIPS…

Trying to decide what to do on the first day of #NeurIPS2025? Check out my, @ziqiao_ma's, and @xiangyue96's tutorial, "The Science of Benchmarking: What's Measured, What's Missing, What's Next" on December 2 from 1:30 to 4:00pm. What will we cover? 1/3



👀

I’ve always wanted to write an open-notebook research blog to (i) show the chain of thought behind how we formed hypotheses, designed experiments, and articulated findings, and (ii) lay out all the intermediate results that did not make it into the final paper, including negative…



Michigan SLED Lab reposted

Thrilled to share that our paper “Towards Bidirectional Human-AI Alignment” has been accepted to #NeurIPS2025 (Position Track)! 🎉 👫<>🤖 We argue for explicit reflection on what we mean by “alignment” and for taking into account the bidirectional, dynamic interactions between…


📢 Is current “human-AI alignment” research clearly framed and comprehensive? 🤔 We systematically reviewed 400+ papers across HCI, NLP, and ML to develop a framework for 👫<>🤖 "Bidirectional Human-AI Alignment", encompassing the dual paths of “Aligning AI to Human” and “Aligning Human…



Michigan SLED Lab reposted

Over the past few months, I’ve heard the same complaint from nearly every collaborator working on computational cogsci + behavioral and mechanistic interpretability: “Open-source VLMs are a pain to run, let alone analyze.” We finally decided to do something about it (thanks…


Michigan SLED Lab reposted

Thanks @_akhaliq for sharing our work! Aim and Grasp! AimBot introduces a new design that leverages visual cues for robots, similar to scope reticles in shooting games. Let's equip your VLA models with low-cost visual augmentation for better manipulation! aimbot-reticle.github.io

AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies
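As a rough illustration of the reticle idea, the OpenCV sketch below overlays a crosshair-and-circle cue on a camera frame before it reaches the policy. The geometry, colors, and the `vla_policy` call are assumptions for illustration, not AimBot's actual rendering or interface.

```python
# Minimal sketch of a reticle-style overlay on a camera frame, in the spirit
# of AimBot's scope-reticle cue. Geometry and colors here are assumptions.
import cv2
import numpy as np

def draw_reticle(frame: np.ndarray, center_xy, radius: int = 20) -> np.ndarray:
    """Draw a crosshair + circle at a projected point (e.g., the gripper)."""
    x, y = int(center_xy[0]), int(center_xy[1])
    out = frame.copy()
    cv2.circle(out, (x, y), radius, (0, 255, 0), 2)
    cv2.line(out, (x - radius - 5, y), (x + radius + 5, y), (0, 255, 0), 1)
    cv2.line(out, (x, y - radius - 5), (x, y + radius + 5), (0, 255, 0), 1)
    return out

# The augmented frame, not the raw one, would then be fed to the policy:
# action = vla_policy(draw_reticle(rgb, projected_gripper_px), instruction)
```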



Michigan SLED Lab reposted

Thanks @_akhaliq for posting our work! And I'm happy to share that AimBot 🎯 is accepted to CoRL 2025 @corl_conf! See you in Seoul! Project webpage: aimbot-reticle.github.io Thanks to my amazing co-lead @YinpeiD, co-authors, and our advisors @NimaFazeli7, @SLED_AI

AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies



Michigan SLED Lab reposted

Excited to announce the #NeurIPS2025 Workshop on Bridging Language, Agent, and World Models for Reasoning and Planning (LAW) sites.google.com/view/law-2025 The LAW 2025 workshop brings together Language models, Agent models, and World models (L-A-W). It aims to spark bold…

📢 Thrilled to announce LAW 2025 workshop, Bridging Language, Agent, and World Models, at #NeurIPS2025 this December in San Diego! 🌴🏖️ 🎉 Join us in exploring the exciting intersection of #LLMs, #Agents, #WorldModels! 🧠🤖🌍 🔗 sites.google.com/view/law-2025 #ML #AI #GenerativeAI 1/



Michigan SLED Lab reposted

Unfortunately, I’ll be missing #ACL2025NLP this year — but here are a few things I’m excited about! 👇 Feel free to DM me if you’d like to chat.


Michigan SLED Lab reposted

Excited to be in Vienna for #ACL2025! We will present 1 poster and 1 oral. Come say hi if you're around! 👋 📌Poster (Tutoring Agents) 🗓️Monday, July 28 18:00–19:30 | 📍Hall 4/5 (Session 5) 📌Oral (Safety Mechanisms) 🗓️Wednesday, July 30 09:00–10:30 |📍Room 1.85 (Session 11)


Michigan SLED Lab reposted

📣 Excited to announce SpaVLE: #NeurIPS2025 Workshop on Space in Vision, Language, and Embodied AI! 👉 …vision-language-embodied-ai.github.io 🦾Co-organized with an incredible team → @fredahshi · @maojiayuan · @DJiafei · @ManlingLi_ · David Hsu · @Kordjamshidi 🌌 Why Space & SpaVLE? We…


Michigan SLED Lab reposted

Thrilled to share that VEGGIE is accepted to #ICCV2025! 🎉 Check out the full thread by @shoubin621 for details. Funny enough — it’s been 6 years since I came to the US, and this might be my first time setting foot in Hawaii. 🌴

Meet VEGGIE🥦@AdobeResearch VEGGIE is a video generative model trained solely with diffusion loss, designed for both video concept grounding and instruction-based editing. It effectively handles diverse video concept editing tasks by leveraging pixel-level grounded training in a…



Michigan SLED Lab reposted

Our study on pragmatic generation is accepted to #COLM2025! I missed the first COLM last year (no suitable ongoing project at the time 😅). I’ve heard it’s a great place to connect with LM folks; excited to finally join for round two.

Vision-Language Models (VLMs) can describe the environment, but can they refer within it? Our findings reveal a critical gap: VLMs fall short of pragmatic optimality. We identify 3 key failures of pragmatic competence in referring expression generation with VLMs: (1) cannot…
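For background on what "pragmatic optimality" means in a reference game, the standard formalization is the Rational Speech Acts (RSA) pragmatic speaker. The sketch below is that general textbook model, not the paper's evaluation code; it assumes a well-formed lexicon in which every utterance is true of at least one target.

```python
# Rational Speech Acts (RSA) pragmatic speaker: a speaker that prefers
# utterances a literal listener would resolve to the intended target.
import numpy as np

def pragmatic_speaker(lexicon: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """lexicon[u, t] = 1.0 if utterance u is literally true of target t.
    Returns S1[t, u]: how likely a pragmatic speaker is to choose u for t."""
    # Literal listener L0(t|u): normalize truth values over targets.
    l0 = lexicon / lexicon.sum(axis=1, keepdims=True)
    # Pragmatic speaker S1(u|t) proportional to L0(t|u)^alpha.
    with np.errstate(divide="ignore"):
        s1 = np.exp(alpha * np.log(l0.T))  # exp(log(0)) -> 0 for false utterances
    return s1 / s1.sum(axis=1, keepdims=True)

# Example: "hat" is true of both targets, "red hat" only of target 0, so
# the pragmatic speaker prefers the more informative "red hat" for target 0.
lex = np.array([[1.0, 1.0],   # "hat"
                [1.0, 0.0]])  # "red hat"
print(pragmatic_speaker(lex))
```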



Michigan SLED Lab reposted

Can we scale 4D pretraining to learn general space-time representations that reconstruct an object from a few views at any time to any view at any other time? Introducing 4D-LRM: a Large Space-Time Reconstruction Model that ... 🔹 Predicts 4D Gaussian primitives directly from…
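To give a concrete feel for what a "4D Gaussian primitive" might carry, here is a hypothetical parameterization as a dataclass: a spatial Gaussian plus a temporal center and extent. The field names and temporal-falloff form are illustrative assumptions, not 4D-LRM's actual representation.

```python
# Hypothetical space-time Gaussian primitive; see the 4D-LRM paper for the
# model's actual parameterization.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian4D:
    mean_xyz: np.ndarray   # (3,) spatial center
    t_center: float        # temporal center
    scale_xyz: np.ndarray  # (3,) spatial extent
    t_scale: float         # temporal extent (lifetime)
    rotation: np.ndarray   # (4,) quaternion orientation
    opacity: float
    rgb: np.ndarray        # (3,) color

    def weight(self, t: float) -> float:
        """Temporal falloff: this primitive's contribution at time t."""
        z = (t - self.t_center) / self.t_scale
        return float(self.opacity * np.exp(-0.5 * z * z))
```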


❤️

We had great discussions today hosting Joyce Chai at @mbzuai! Starting new collaborations and broadening our research perspectives.


