Jaskirat Singh @ ICCV2025🌴
@1jaskiratsingh
Ph.D. Candidate at Australian National University | Intern @AIatMeta GenAI | @AdobeResearch | Multimodal Fusion Models and Agents | R2E-Gym | REPA-E
You might like
Can we optimize both the VAE tokenizer and diffusion model together in an end-to-end manner? Short Answer: Yes. 🚨 Introducing REPA-E: the first end-to-end tuning approach for jointly optimizing both the VAE and the latent diffusion model using REPA loss 🚨 Key Idea: 🧠…
Check out our work ThinkMorph, which thinks in multi-modalities, not just with them.
🚨Sensational title alert: we may have cracked the code to true multimodal reasoning. Meet ThinkMorph — thinking in modalities, not just with them. And what we found was... unexpected. 👀 Emergent intelligence, strong gains, and …🫣 🧵 arxiv.org/abs/2510.27492 (1/16)
Tests certify functional behavior; they don’t judge intent. GSO, our code optimization benchmark, now combines tests with a rubric-driven HackDetector to identify models that game the benchmark. We found that up to 30% of a model’s attempts are non-idiomatic reward hacks, which…
We added LLM judge based hack detector to our code optimization evals and found models perform non-idiomatic code changes in upto 30% of the problems 🤯
Tests certify functional behavior; they don’t judge intent. GSO, our code optimization benchmark, now combines tests with a rubric-driven HackDetector to identify models that game the benchmark. We found that up to 30% of a model’s attempts are non-idiomatic reward hacks, which…
end-to-end training just makes latent diffusion transformers better! with repa-e, we showed the power of end-to-end training on imagenet. today we are extending it to text-to-image (T2I) generation. #ICCV2025 🌴 🚨 Introducing "REPA-E for T2I: family of end-to-end tuned VAEs for…
With simple changes, I was able to cut down @krea_ai's new real-time video gen's timing from 25.54s to 18.14s 🔥🚀 1. FA3 through `kernels` 2. Regional compilation 3. Selective (FP8) quantization Notes are in 🧵 below
Tired to go back to the original papers again and again? Our monograph: a systematic and fundamental recipe you can rely on! 📘 We’re excited to release 《The Principles of Diffusion Models》— with @DrYangSong, @gimdong58085414, @mittu1204, and @StefanoErmon. It traces the core…
Back in 2024, LMMs-Eval built a complete evaluation ecosystem for the MLLM/LMM community, with countless researchers contributing their models and benchmarks to raise the whole edifice. I was fortunate to be one of them: our series of video-LMM works (MovieChat, AuroraCap, VDC)…
Throughout my journey in developing multimodal models, I’ve always wanted a framework that lets me plug & play modality encoders/decoders on top of an auto-regressive LLM. I want to prototype fast, try new architectures, and have my demo files scale effortlessly — with full…
I have one PhD intern opening to do research as a part of a model training effort at the FAIR CodeGen team (latest: Code World Model). If interested, email me directly and apply at metacareers.com/jobs/214557081…
Arash and his team are fantastic! I highly recommend applying if you’re interested
📢 The Fundamental Generative AI Research (GenAIR) team at NVIDIA is looking for outstanding candidates to join us as summer 2026 interns. Apply via: nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAEx… Email: [email protected] Group website: research.nvidia.com/labs/genair/ 👇
🚀 New preprint! We present NP-Edit, a framework for training an image editing diffusion model without paired supervision. We use differentiable feedback from Vision-Language Models (VLMs) combined with distribution-matching loss (DMD) to learn editing directly. webpage:…
I am incredibly excited to introduce rLLM v0.2. Zooming back to a year ago: @OpenAI's o1-preview just dropped, and RL + test-time scaling suddenly became the hype. But no one knew how they did it. @kylepmont and I had this idea - what if we built a solver-critique loop for…
🚀 Introducing rLLM v0.2 - train arbitrary agentic programs with RL, with minimal code changes. Most RL training systems adopt the agent-environment abstraction. But what about complex workflows? Think solver-critique pairs collaborating, or planner agents orchestrating multiple…
LiveCodeBench Pro remains one of the most challenging code benchmarks, but its evaluation and verification process is still a black box. We introduce AutoCode, which democratizes evaluation allowing anyone to locally run verification and perform RL training! For the first time,…
United States Trends
- 1. New York 21.8K posts
- 2. New York 21.8K posts
- 3. $TAPIR 1,659 posts
- 4. Virginia 525K posts
- 5. Texas 221K posts
- 6. Prop 50 181K posts
- 7. #DWTS 40.9K posts
- 8. Clippers 9,576 posts
- 9. Cuomo 411K posts
- 10. TURN THE VOLUME UP 19.3K posts
- 11. Harden 9,907 posts
- 12. Ty Lue N/A
- 13. Jay Jones 101K posts
- 14. Van Jones 2,336 posts
- 15. Bulls 36.5K posts
- 16. #Election2025 16.2K posts
- 17. Embiid 6,146 posts
- 18. Sixers 13K posts
- 19. WOKE IS BACK 36.5K posts
- 20. Isaiah Joe N/A
You might like
-
Thalaiyasingam Ajanthan
@tha_ajanthan -
Dongxu Li
@DongxuLi_ -
Jiahao Zhang
@DavidZhang9873 -
Chamin Hewa Koneputugodage
@ChaminHewa -
Dylan Campbell
@dylanjcampbell_ -
Jiahui Zhang
@JiahuiZhang__32 -
Sameera Ramasinghe
@SameeraRamasin1 -
Lei Ke
@leike_lk -
Liang Zheng
@LiangZheng_06 -
Patrick Ramos
@patrick_j_ramos -
Jimmy Yoonwoo Jeong
@yoonwoojeong -
Junlin (Hans) Han
@han_junlin
Something went wrong.
Something went wrong.