
FunAI

@FunAILab

Research lab led by @y_m_asano at @utn_nuremberg. We conduct fundamental AI research and develop core technology for future Foundation Models.

FunAI reposted

@FunAILab + CVMP Lab of @EddyIlg retreat: ☑. From mountains to hackathon to good food, we've had some intense but good days with lots of new ideas 🎉.


FunAI reposted

Now finally accepted at @emnlpmeeting! I think the technique and its high-level ideas, i) allowing bidirectional attention over the prompt and ii) (maybe) processing the input query differently from answer generation, will stick around.

Today we introduce Bidirectional Instruction Tuning (Bitune). It's a new way of adapting LLMs for the instruction->answering stage. It allows the model to process the instruction/question with bidirectional attention, while the answer generation remains causal.

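To make the attention pattern concrete, here is a minimal sketch (not the official Bitune code) of a mask in which prompt tokens attend to each other bidirectionally while answer tokens remain causal; the helper name and shapes are illustrative assumptions.

```python
import torch

def build_bitune_mask(prompt_len: int, answer_len: int) -> torch.Tensor:
    """Boolean mask: True marks positions a token may attend to."""
    total = prompt_len + answer_len
    # Standard causal attention: token i sees tokens 0..i.
    mask = torch.ones(total, total).tril().bool()
    # Lift the causal restriction inside the prompt span: every prompt
    # token may attend to every other prompt token (bidirectional).
    mask[:prompt_len, :prompt_len] = True
    return mask

# Rows 0-3 (prompt) attend to the whole prompt; rows 4-6 (answer) stay causal.
print(build_bitune_mask(prompt_len=4, answer_len=3).int())
```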


FunAI reposted

Today we release Franca, a new vision Foundation Model that matches and sometimes outperforms DINOv2. The data, the training code and the model weights (with intermediate checkpoints) are open-source, allowing everyone to build on this. Methodologically, we introduce two new…

Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with a ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, DINOv2 on various benchmarks, setting a new standard for open-source research🧵



FunAI reposted

Hello FunAI Lab at UTN 👋 I’m excited to start a new chapter of my research journey here in Nuremberg as a visiting postdoc. Looking forward to inspiring collaborations and impactful research with @y_m_asano and the amazing students😀


FunAI reposted

LoRA et al. enable personalised model generation and serving, which is crucial as finetuned models still outperform general ones in many tasks. However, serving a base model with many LoRAs is very inefficient! Now, there's a better way: enter Prompt Generation Networks,…
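For context, a minimal sketch of the standard LoRA update (the frozen weight plus a scaled low-rank correction B·A); serving a different (A, B) pair per user is the per-request overhead the thread alludes to. The dimensions and names here are assumed for illustration, not taken from the Prompt Generation Networks paper.

```python
import torch

d, r, alpha = 1024, 8, 16.0          # width, LoRA rank, scaling (assumed values)
W = torch.randn(d, d)                # frozen base weight, shared by all users
A = torch.randn(r, d) * 0.01         # low-rank factors: one (A, B) pair per finetune
B = torch.zeros(d, r)                # zero-init so the correction starts at zero

x = torch.randn(1, d)
# Adapted forward pass: base output plus the scaled low-rank correction.
y = x @ W.T + (alpha / r) * (x @ A.T) @ B.T
```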


FunAI reposted

Is the community trying to surprise us today? 🤯 Because these benchmark-related papers from different research labs all dropped on the Daily Papers page at once! 🎉📑hf.co/papers ✨ LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal…


FunAI reposted

Today, we're introducing TVBench! 📹💬 Video-language evaluation is crucial, but are we doing it right? We find that current benchmarks fall short in testing temporal understanding. 🧵👇


First paper with our FunAI Lab affiliation :)

Ever wondered if better LLMs actually have a better understanding of the visual world? 🤔 As it turns out, they do! We find: an LLM's MMLU performance correlates positively with zero-shot performance in a CLIP-like setup when that LLM is used to encode the text. 🧵👇

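A hedged sketch of the "CLIP-like setup" the tweet refers to: the LLM embeds class-name prompts, and zero-shot classification picks the class whose text embedding is most similar to the image embedding. `encode_text_with_llm` and the random stand-in embeddings below are hypothetical placeholders, not the paper's actual encoders.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb, class_prompts, encode_text_with_llm):
    # Embed each class prompt with the (frozen) LLM, e.g. by pooling hidden states.
    text_embs = torch.stack([encode_text_with_llm(p) for p in class_prompts])
    # Cosine similarity between the image embedding and every class text embedding.
    sims = F.cosine_similarity(image_emb.unsqueeze(0), text_embs, dim=-1)
    return int(sims.argmax())        # index of the best-matching class

# Usage with random stand-in embeddings (a real setup would use trained encoders):
fake_llm_encoder = lambda prompt: torch.randn(64)
idx = zero_shot_classify(torch.randn(64),
                         ["a photo of a cat", "a photo of a dog"],
                         fake_llm_encoder)
```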

