
Jeremy Irvin

@jeremy_irvin16

CS PhD candidate @Stanford working on AI for climate change and medicine. Previously @ClimateChangeAI, @Microsoft.

Excited to present TEOChat — the first vision-language assistant for temporal Earth observation data — at #ICLR2025 today! 🌍🛰️ 📄 arxiv.org/abs/2410.06234 🔗 iclr.cc/virtual/2025/p… Come by Hall 3 + Hall 2B #109 from 3–5:30pm! @iclr_conf


🎉 Thrilled to announce that our work on TEOChat was accepted to #ICLR2025! We present the first vision-language assistant for temporal Earth observation data ⏰🌍, capable of tasks like building damage assessment and identifying urban changes over time. More details below 👇

Vision-language models (VLMs) are revolutionizing how we use Earth observation (EO) data, but none could reason over time—a critical need for applications like disaster relief—until now. Introducing TEOChat 🌍🤖, the first VLM for temporal EO data! arxiv.org/abs/2410.06234 1/8



The TEOChatlas dataset is now available to download! 🥳

🤗 Hugging Face dataset: huggingface.co/datasets/jirvi…
🤖 Code and instructions to train TEOChat on TEOChatlas: github.com/ermongroup/TEO…

GitHub - ermongroup/TEOChat: Official code for TEOChat, the first vision-language assistant for temporal Earth observation data (ICLR 2025).
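
For anyone who wants to poke at the data, here is a minimal sketch of pulling TEOChatlas with the Hugging Face `datasets` library. The dataset id below is a placeholder (the Hugging Face link in the tweet is truncated), and the split name is an assumption; check the dataset page and the GitHub README for the exact identifiers.

```python
from datasets import load_dataset

# Placeholder repo id: the Hugging Face link above is truncated
# (huggingface.co/datasets/jirvi…), so substitute the exact id from that link.
DATASET_ID = "<username>/TEOChatlas"

# Assumes a standard "train" split; the actual split names are documented
# on the dataset page and in the GitHub README.
ds = load_dataset(DATASET_ID, split="train")

print(ds)      # feature schema and number of examples
print(ds[0])   # a single temporal-EO instruction-following example
```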


