
Lorenzo Basile

@lorebasile

Postdoc @AreaSciencePark

Lorenzo Basile reposted

🤖Seminar Series Next week, Giorgos Nikolaou and Tommaso Mencattini (@EPFL) will present "Language Models are Injective and Hence Invertible". @GiorgosNik02 @tommaso_mncttn 📅Nov 27, 11 CET (Online) 👉Register to attend: bit.ly/4ieKRYq


Lorenzo Basile reposted

Excited to share that 2/2 papers from our Lab (LADE) were accepted to #NeurIPS2025 (one spotlight 🎉) Great work from all the students and collaborators involved! @AreaSciencePark @aleserra1998 @lorebasile @francescortu @lucrevaleriani @DiegoDoimo @ansuin @FrancescoLocat8


Lorenzo Basile reposted

Thrilled to announce that our paper is accepted at #NeurIPS 2025!! See you in San Diego! 🇺🇸

🚨🚨 Excited to share our latest paper, now on @arxiv! 🖼️ We studied how unified VLMs, trained to generate both text and images (e.g., @MetaAI's Chameleon), exchange information between modalities, comparing them to standard VLMs. Deep dive:👇




Lorenzo Basile reposted

I just landed in Vancouver to present at @NeurIPSConf the findings of our new work! Few-shot learning and fine-tuning change the hidden layers of LLMs in a dramatically different way, even when they perform equally well on multiple-choice question-answering tasks. 🧵1/6


Lorenzo Basile reposted

✨ Meet #ResiDual, a novel perspective on the alignment of multimodal latent spaces! Think of it as a spectral "panning for gold" along the residual stream. It improves text-image alignment by simply amplifying task-related directions! 🌌🔍 arxiv.org/abs/2411.00246 [1/6]

