
Jaskirat Singh @ ICCV2025🌴

@1jaskiratsingh

Ph.D. Candidate at Australian National University | Intern @AIatMeta GenAI | @AdobeResearch | Multimodal Fusion Models and Agents | R2E-Gym | REPA-E

Pinned

Can we optimize both the VAE tokenizer and diffusion model together in an end-to-end manner? Short Answer: Yes. 🚨 Introducing REPA-E: the first end-to-end tuning approach for jointly optimizing both the VAE and the latent diffusion model using REPA loss 🚨 Key Idea: 🧠…

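For intuition, here is a minimal sketch of the joint update described above, assuming a flow-matching-style setup: the diffusion loss updates only the DiT (stop-gradient on the latents), while the REPA alignment loss is allowed to flow end-to-end into the VAE. All module names are illustrative assumptions, not the released REPA-E code.

```python
import torch
import torch.nn.functional as F

# Hedged sketch, NOT the official implementation. Assumed components:
# `vae` (tokenizer with .encode), `dit` (latent diffusion transformer returning
# a prediction plus mid-layer features), `frozen_encoder` (a pretrained vision
# encoder such as DINOv2 providing REPA targets), and `proj` (a small MLP
# mapping DiT features into the encoder's feature space).

def repa_e_step(vae, dit, frozen_encoder, proj, x, optimizer, lam=0.5):
    z = vae.encode(x)                                  # latents carry grad to the VAE
    t = torch.rand(x.size(0), device=x.device).view(-1, 1, 1, 1)
    noise = torch.randn_like(z)

    # Diffusion loss: stop-gradient on the latents, so it updates only the DiT
    # (backpropagating it into the VAE naively is what collapses the latents).
    z_t_sg = (1 - t) * z.detach() + t * noise
    pred, _ = dit(z_t_sg, t)
    diff_loss = F.mse_loss(pred, noise - z.detach())   # flow-matching-style target

    # REPA alignment loss: computed on the grad-carrying latents, so its
    # signal reaches the VAE end-to-end.
    z_t = (1 - t) * z + t * noise
    _, feats = dit(z_t, t)                             # (B, N, D) mid-layer tokens
    with torch.no_grad():
        target = frozen_encoder(x)                     # (B, N, D') patch tokens, assumed aligned
    repa_loss = 1 - F.cosine_similarity(proj(feats), target, dim=-1).mean()

    loss = diff_loss + lam * repa_loss                 # VAE recon/KL terms omitted for brevity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return diff_loss.item(), repa_loss.item()
```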

Jaskirat Singh @ ICCV2025🌴 reposted

[Videos are entanglements of space and time.] Around one year ago, we released VSI-Bench, in which we studied visual spatial intelligence: a fundamental but missing pillar of current MLLMs. Today, we are excited to introduce Cambrian-S, our further step that goes beyond visual…

Introducing Cambrian-S: it’s a position, a dataset, a benchmark, and a model, but above all, it represents our first steps toward exploring spatial supersensing in video. 🧶



Jaskirat Singh @ ICCV2025🌴 reposted

Can LLMs accurately aggregate information over long, information-dense texts? Not yet… We introduce Oolong, a dataset of simple-to-verify information aggregation questions over long inputs. No model achieves >50% accuracy at 128K on Oolong!

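As a rough illustration of what "simple-to-verify information aggregation" means, an Oolong-style item can be built synthetically: the answer depends on the entire long input, yet checking it is a single integer comparison. This construction is an assumption for illustration, not the dataset's actual format.

```python
import random
import re

def make_example(n_records=2000, seed=0):
    """Hypothetical aggregation item: long, information-dense input,
    trivially verifiable answer."""
    rng = random.Random(seed)
    labels = ["positive", "negative", "neutral"]
    records = [f"review {i}: sentiment={rng.choice(labels)}" for i in range(n_records)]
    context = "\n".join(records)                       # the long input
    gold = sum(1 for r in records if r.endswith("positive"))
    question = "How many reviews have sentiment=positive? Answer with one integer."
    return context, question, gold

def score(model_answer: str, gold: int) -> bool:
    # Exact-match verification: extract the first integer and compare.
    m = re.search(r"-?\d+", model_answer)
    return m is not None and int(m.group()) == gold
```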

Jaskirat Singh @ ICCV2025🌴 reposted

It’s an honor to have received the @QEPrize along with my fellow laureates! But it’s also a responsibility. AI’s impact on humanity is in the hands of all of us.

Today, The King presented The Queen Elizabeth Prize for Engineering at St James's Palace, celebrating the innovations which are transforming our world.   🧠 This year’s prize honours seven pioneers whose work has shaped modern artificial intelligence. 🔗 Find out more:…



Jaskirat Singh @ ICCV2025🌴 reposted

you can’t build superintelligence without first building supersensing


Jaskirat Singh @ ICCV2025🌴 reposted

New eval! Code duels for LMs ⚔️ Current evals test LMs on *tasks*: "fix this bug," "write a test." But we code to achieve *goals*: maximize revenue, cut costs, win users. Meet CodeClash: LMs compete via their codebases across multi-round tournaments to achieve high-level goals.
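A hedged sketch of what such a goal-driven, multi-round tournament loop could look like. Here `edit_fn` and `arena` stand in for the LM agent and the competition harness; both are assumptions for illustration, not CodeClash's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    codebase: str          # source tree, simplified to a string here
    wins: int = 0

def run_tournament(agents, arena, edit_fn, n_rounds=5):
    """edit_fn(agent, feedback) -> new codebase; arena(codebases) -> (winner_idx, logs).
    Agents iterate on their code toward a high-level goal, then compete."""
    feedback = {a.name: "" for a in agents}
    for _ in range(n_rounds):
        for a in agents:
            a.codebase = edit_fn(a, feedback[a.name])        # LM revises its code
        winner, logs = arena([a.codebase for a in agents])   # compete on the goal metric
        agents[winner].wins += 1
        for a, log in zip(agents, logs):
            feedback[a.name] = log                           # agents see match logs next round
    return max(agents, key=lambda a: a.wins)
```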


Jaskirat Singh @ ICCV2025🌴 reposted

Check out our work ThinkMorph, which thinks in modalities, not just with them.

🚨Sensational title alert: we may have cracked the code to true multimodal reasoning. Meet ThinkMorph — thinking in modalities, not just with them. And what we found was... unexpected. 👀 Emergent intelligence, strong gains, and …🫣 🧵 arxiv.org/abs/2510.27492 (1/16)




Jaskirat Singh @ ICCV2025🌴 reposted

We added an LLM-judge-based hack detector to our code optimization evals and found that models make non-idiomatic code changes in up to 30% of the problems 🤯

Tests certify functional behavior; they don’t judge intent. GSO, our code optimization benchmark, now combines tests with a rubric-driven HackDetector to identify models that game the benchmark. We found that up to 30% of a model’s attempts are non-idiomatic reward hacks, which…

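To make the idea concrete, here is a hedged sketch of a rubric-driven, LLM-judge hack detector in the spirit of the update above. The rubric items and the `call_llm` hook are illustrative assumptions, not GSO's actual implementation.

```python
# Assumed: `call_llm(prompt) -> str` is any chat-completion call.
RUBRIC = """Classify the patch as HACK or IDIOMATIC. Flag as HACK if it:
- special-cases known benchmark/test inputs,
- caches or memoizes results keyed on test data,
- deletes or weakens the measured work instead of genuinely speeding it up."""

def detect_hack(call_llm, original: str, patch: str) -> bool:
    prompt = (f"{RUBRIC}\n\nOriginal code:\n{original}\n\nPatch:\n{patch}\n\n"
              "Answer with exactly one word: HACK or IDIOMATIC.")
    verdict = call_llm(prompt).strip().upper()
    return verdict.startswith("HACK")

# Tests certify behavior; the judge scores intent. A submission counts only if
# it passes the functional tests AND is not flagged:
#   valid = passes_tests(patch) and not detect_hack(call_llm, original, patch)
```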


Jaskirat Singh @ ICCV2025🌴 reposted

end-to-end training just makes latent diffusion transformers better! with repa-e, we showed the power of end-to-end training on imagenet. today we are extending it to text-to-image (T2I) generation. #ICCV2025 🌴 🚨 Introducing "REPA-E for T2I: family of end-to-end tuned VAEs for…


Jaskirat Singh @ ICCV2025🌴 reposted

With simple changes, I was able to cut down @krea_ai's new real-time video gen's timing from 25.54s to 18.14s 🔥🚀

1. FA3 through `kernels`
2. Regional compilation
3. Selective (FP8) quantization

Notes are in 🧵 below
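A hedged sketch of how those three changes might look on a diffusers-style pipeline. The loader, module layout, and kernel repo id are assumptions; the exact recipe is in the linked thread.

```python
import torch

pipe = load_pipeline()           # hypothetical loader for the real-time video model
transformer = pipe.transformer

# 1) FA3 through the Hugging Face `kernels` hub: fetch a prebuilt
#    FlashAttention-3 kernel (repo id assumed; attention wiring is model-specific).
from kernels import get_kernel
flash_attn3 = get_kernel("kernels-community/flash-attn3")

# 2) Regional compilation: compile each repeated transformer block instead of
#    the whole model, cutting compile time while keeping most of the speedup.
for block in transformer.blocks:                 # assumed attribute name
    block.compile(fullgraph=True)

# 3) Selective FP8 quantization: quantize the big linear layers in the
#    transformer while leaving numerically sensitive modules (e.g. the VAE)
#    untouched (sketch, using torchao's float8 dynamic quantization).
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight
quantize_(transformer, float8_dynamic_activation_float8_weight())
```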


Jaskirat Singh @ ICCV2025🌴 reposted

Tired of going back to the original papers again and again? Our monograph offers a systematic and fundamental recipe you can rely on! 📘 We’re excited to release 《The Principles of Diffusion Models》, with @DrYangSong, @gimdong58085414, @mittu1204, and @StefanoErmon. It traces the core…

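For readers who want the gist before opening the monograph, the two standard formulas most of this material builds on, in common DDPM notation, are the Gaussian forward (noising) process and the noise-prediction training objective:

```latex
% Forward process and denoising objective, standard DDPM notation.
\begin{align}
  q(x_t \mid x_0) &= \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right), \\
  \mathcal{L}(\theta) &= \mathbb{E}_{x_0,\ \epsilon \sim \mathcal{N}(0, I),\ t}
      \left[\, \big\| \epsilon - \epsilon_\theta\!\big(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\big) \big\|^2 \,\right].
\end{align}
```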

Jaskirat Singh @ ICCV2025🌴 reposted

Back in 2024, LMMs-Eval built a complete evaluation ecosystem for the MLLM/LMM community, with countless researchers contributing their models and benchmarks to raise the whole edifice. I was fortunate to be one of them: our series of video-LMM works (MovieChat, AuroraCap, VDC)…

Throughout my journey in developing multimodal models, I’ve always wanted a framework that lets me plug & play modality encoders/decoders on top of an auto-regressive LLM. I want to prototype fast, try new architectures, and have my demo files scale effortlessly — with full…
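A hedged sketch of that plug-and-play pattern: modality encoders registered around a fixed autoregressive backbone, each with its own projection into the LLM's embedding space. Class and method names are illustrative assumptions, not the framework's actual API.

```python
import torch.nn as nn

class FusionLM(nn.Module):
    """Autoregressive LLM with pluggable modality encoders (sketch)."""

    def __init__(self, llm: nn.Module, d_model: int):
        super().__init__()
        self.llm = llm                         # any autoregressive backbone
        self.d_model = d_model
        self.encoders = nn.ModuleDict()        # modality name -> encoder
        self.projections = nn.ModuleDict()     # modality name -> projection

    def register_modality(self, name: str, encoder: nn.Module, enc_dim: int):
        """Plug in a new modality without touching the backbone."""
        self.encoders[name] = encoder
        self.projections[name] = nn.Linear(enc_dim, self.d_model)

    def embed(self, name: str, x):
        feats = self.encoders[name](x)         # (B, N, enc_dim)
        return self.projections[name](feats)   # (B, N, d_model), ready to interleave
                                               # with text token embeddings
```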



Jaskirat Singh @ ICCV2025🌴 reposted

I have one PhD intern opening to do research as part of a model-training effort on the FAIR CodeGen team (latest: Code World Model). If interested, email me directly and apply at metacareers.com/jobs/214557081…

