Victoria X Lin
@VictoriaLinML
MTS @thinkymachines | MoMa/MoT🖼 • RA-DIT🔍 • Llama4🦙 Ex: @AIatMeta, @SFResearch • PhD @uwcse 📜 http://threads.net/@v.linspiration 🌴 Bay Area
1/n Introducing MoMa 🖼, our new sparse early-fusion architecture for mixed-modal language modeling that significantly boosts pre-training efficiency 🚀 (arxiv.org/pdf/2407.21770). MoMa employs a mixture-of-experts (MoE) framework with modality-specific expert groups. Given any…
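To make the core idea concrete, here is a minimal sketch of modality-aware MoE routing, where text and image tokens are dispatched to separate expert groups. The class, expert counts, and top-1 routing below are illustrative assumptions, not the MoMa implementation.

```python
# Illustrative sketch of modality-aware mixture-of-experts routing with
# separate expert groups per modality. Sizes, expert counts, and top-1
# routing are made-up assumptions, not the MoMa implementation.
import torch
import torch.nn as nn

def make_expert(d_model):
    # A standard FFN expert.
    return nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                         nn.Linear(4 * d_model, d_model))

class ModalityAwareMoE(nn.Module):
    def __init__(self, d_model=512, n_text_experts=4, n_image_experts=4):
        super().__init__()
        self.text_experts = nn.ModuleList([make_expert(d_model) for _ in range(n_text_experts)])
        self.image_experts = nn.ModuleList([make_expert(d_model) for _ in range(n_image_experts)])
        # One router per modality group.
        self.text_router = nn.Linear(d_model, n_text_experts)
        self.image_router = nn.Linear(d_model, n_image_experts)

    def _route(self, x, router, experts):
        # x: (n_tokens, d_model) for a single modality; top-1 routing.
        scores = router(x).softmax(dim=-1)   # (n_tokens, n_experts)
        top_idx = scores.argmax(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(experts):
            mask = top_idx == i
            if mask.any():
                # Scale by the routing probability so the router receives gradients.
                out[mask] = expert(x[mask]) * scores[mask, i].unsqueeze(-1)
        return out

    def forward(self, tokens, is_image):
        # tokens: (n_tokens, d_model); is_image: (n_tokens,) bool mask.
        out = torch.empty_like(tokens)
        out[~is_image] = self._route(tokens[~is_image], self.text_router, self.text_experts)
        out[is_image] = self._route(tokens[is_image], self.image_router, self.image_experts)
        return out
```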
I think about this talk a lot. There was a time when people were bullish on "feed all the modalities to the LLM," but it didn't really pan out as I would have expected. The discrete / continuous divide remains an interesting challenge in deep learning.
COLM Keynotes: Luke Zettlemoyer Mixed-modal Language Modeling youtu.be/PdsKNtEofFY
🤞🤞
Congrats on the move. The "kind, world-class team" part is often underestimated in these announcements. Technical ambition is common enough in AI right now, but building something genuinely novel requires a team culture that can sustain deep collaboration without burning out.…
Very interesting read ☕ When poking different frontier models (e.g., GPT-5 vs Gemini), I’ve often noticed surprising similarity on non-STEM questions. This paper carefully quantifies this “inter-model homogeneity” as part of its study — both in terms of embedding similarity and…
⚠️Different models. Same thoughts.⚠️ Today’s AI models converge into an 𝐀𝐫𝐭𝐢𝐟𝐢𝐜𝐢𝐚𝐥 𝐇𝐢𝐯𝐞𝐦𝐢𝐧𝐝 🐝, a striking case of mode collapse that persists even across heterogeneous ensembles. Our #neurips2025 𝐃&𝐁 𝐎𝐫𝐚𝐥 𝐩𝐚𝐩𝐞𝐫 (✨𝐭𝐨𝐩 𝟎.𝟑𝟓%✨) dives deep into…
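As a toy illustration of the embedding-similarity side of this kind of analysis, one could embed two models' answers to the same prompts and compare mean cosine similarity. The encoder choice and model names below are placeholders; the paper's actual protocol and metrics are more involved.

```python
# Toy sketch: quantify how similar two models' answers are via embedding
# cosine similarity. The encoder and model names are placeholders; this is
# not the paper's exact protocol.
import numpy as np
from sentence_transformers import SentenceTransformer

def mean_answer_similarity(answers_a, answers_b, encoder_name="all-MiniLM-L6-v2"):
    """answers_a[i] and answers_b[i] are two models' answers to prompt i."""
    encoder = SentenceTransformer(encoder_name)
    emb_a = encoder.encode(answers_a, normalize_embeddings=True)
    emb_b = encoder.encode(answers_b, normalize_embeddings=True)
    # Cosine similarity of matched answers (embeddings are unit-normalized).
    sims = (emb_a * emb_b).sum(axis=1)
    return float(np.mean(sims))

# Hypothetical usage: answers from two different frontier models on the
# same non-STEM prompts.
answers_model_a = ["Answer from model A to prompt 1", "Answer from model A to prompt 2"]
answers_model_b = ["Answer from model B to prompt 1", "Answer from model B to prompt 2"]
print(mean_answer_similarity(answers_model_a, answers_model_b))
```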
Today we’re announcing research and teaching grants for Tinker: credits for scholars and students to fine-tune and experiment with open-weight LLMs. Read more and apply at: thinkingmachines.ai/blog/tinker-re…
I'm recruiting PhD students! I'm interested in: 1. Understanding how LLMs 'see' the world (ex: LMs can't see conspicuous omissions, see AbsenceBench) 2. How can we make things with LLMs that have never been made before? (ex: Communication Games, see 📌) 3. See my other posts :)
Several of my team members, myself included, are impacted by today's layoff. Feel free to connect :)
This is an excellent history of LLMs; it doesn't miss any seminal paper I know of. A reminder that we're standing on the shoulders of giants, and that giants are still being born today. gregorygundersen.com/blog/2025/10/0…
Luke Zettlemoyer (@LukeZettlemoyer) plenary talk on scalable architectures for multimodal language modeling #COLM2025 Chameleon: autoregressive multimodal language models -- treats images as tokens -- works, but harder to scale -- the modality gap seems to be a big problem…
Tinker provides an abstraction layer that is the right one for post-training R&D -- it's the infrastructure I've always wanted. I'm excited to see what people build with it. "Civilization advances by extending the number of important operations which we can perform without…
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!…
Tinker brings tools similar to the ones we use internally to the community. It provides a clean, transparent abstraction that lets researchers write expressive experiments and training pipelines, while we manage the complexities of distributed training and sampling. We hope…
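To illustrate the shape of a "training loop on your laptop, compute on remote GPUs" workflow, here is a hypothetical sketch. The `FineTuneClient` class and its methods are invented stubs for illustration only; they are not Tinker's actual API.

```python
# Hypothetical sketch of a local-loop / remote-compute fine-tuning workflow.
# FineTuneClient and its methods are invented stubs, NOT Tinker's real API.
from dataclasses import dataclass

@dataclass
class Batch:
    prompts: list[str]
    targets: list[str]

class FineTuneClient:
    """Placeholder for a remote fine-tuning service client."""
    def __init__(self, base_model: str):
        self.base_model = base_model
    def forward_backward(self, batch: Batch) -> float:
        # A real service would run forward/backward on distributed GPUs
        # and return the loss; here it is a stub.
        return 0.0
    def optimizer_step(self, lr: float) -> None:
        pass  # remote parameter update
    def sample(self, prompt: str) -> str:
        return "..."  # remote generation for quick evaluation

# The training logic stays in plain local Python, while heavy compute is
# delegated to the service behind the client.
client = FineTuneClient(base_model="an-open-weight-llm")
data = [Batch(prompts=["q1"], targets=["a1"]), Batch(prompts=["q2"], targets=["a2"])]
for step, batch in enumerate(data):
    loss = client.forward_backward(batch)
    client.optimizer_step(lr=1e-5)
    print(f"step {step}: loss={loss:.4f}")
print(client.sample("Sanity-check prompt"))
```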
LoRA makes fine-tuning more accessible, but it's unclear how it compares to full fine-tuning. We find that the performance often matches closely---more often than you might expect. In our latest Connectionism post, we share our experimental results and recommendations for LoRA.…
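For readers new to the method being compared, here is a minimal LoRA layer in PyTorch: the pretrained weight is frozen and a trainable low-rank update is added on top. The rank and scaling values are illustrative defaults, not the post's recommended settings.

```python
# Minimal LoRA layer: freeze the pretrained weight and learn a low-rank
# update. Rank/alpha here are illustrative, not recommended settings.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # start as a no-op update
        self.scaling = alpha / rank

    def forward(self, x):
        # y = base(x) + scaling * B(A(x))
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Usage: wrap an existing projection layer; only lora_a / lora_b are trained.
layer = LoRALinear(nn.Linear(1024, 1024))
out = layer(torch.randn(2, 1024))
```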
...is today a good day for new paper posts? 🤖Learning to Reason for Factuality 🤖 📝: arxiv.org/abs/2508.05618 - New reward func for GRPO training of long CoTs for *factuality* - Design stops reward hacking by favoring precision, detail AND quality - Improves base model across…
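As a rough, hypothetical sketch of the general shape such a reward could take (not the paper's formulation): pay off only when precision is high, so adding unsupported claims never helps, while more supported detail does. All names and thresholds below are assumptions.

```python
# Hypothetical sketch of a factuality-style reward that favors precision and
# detail together, so the policy can't hack it by emitting fewer or vaguer
# claims. An illustration of the idea only, not the paper's reward.
def factuality_reward(num_supported: int, num_claims: int,
                      quality_score: float, precision_floor: float = 0.8) -> float:
    """num_supported / num_claims come from an external fact-checking judge;
    quality_score in [0, 1] rates overall response quality."""
    if num_claims == 0:
        return 0.0                    # no claims, no reward
    precision = num_supported / num_claims
    if precision < precision_floor:
        return 0.0                    # unsupported claims are never worth it
    detail = num_supported            # reward more supported detail...
    return precision * detail * quality_score  # ...only at high precision

print(factuality_reward(num_supported=9, num_claims=10, quality_score=0.9))
```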
Happy to share that ReasonIR is accepted by @COLM_conf! Synthetic data & test-time scaling are powerful tools to enable new capabilities for challenging tasks. I’m impressed by how quickly smaller retrievers and better rerankers have been developed with ReasonIR data! #COLM2025
Meet ReasonIR-8B✨the first retriever specifically trained for reasoning tasks! Our challenging synthetic training data unlocks SOTA scores on reasoning IR and RAG benchmarks. ReasonIR-8B ranks 1st on BRIGHT and outperforms search engine and retriever baselines on MMLU and GPQA🔥
Some updates 🚨 I finished my Ph.D. at @uwcse in June 2025! After a year at AI2 as a Research Scientist, I am joining CMU @LTIatCMU & @mldcmu (courtesy) as an Assistant Professor in Fall 2026. The journey, acknowledgments & recruiting in 🧵
Gorgeous building! Just learned that both the CDIS building at UW–Madison and the Bill & Melinda Gates Center at U Washington are by the same architects — @LMNArchitects. 🏨 UW-Madison: lmnarchitects.com/project/comput… 🏨 U Washington: lmnarchitects.com/project/bill-m…
My students called the new CDIS building “state-of-the-art”. I thought they were exaggerating. Today I moved in and saw it for myself. Wow. Photos cannot capture the beauty of the design.
OK, @sarawiltberger and I are experimenting with a small, project-based mentorship program designed for the age of AI. We’re looking for resourceful self-starters—from early high school to early-career professionals—who want to prove their abilities through hard work. You don’t…
I've been reflecting deeply on how the rapid AI revolution is reshaping education, employment, and entrepreneurship. I want to help ambitious, talented individuals—whether high schoolers, PhDs, skilled professionals, or entrepreneurs outside AI—to thrive during this transition.…