Arif Ahmad
@arif_ahmad_py
We are in the world model era now. Prev. @GoogleDeepMind and @Nvidia
Scalable GANs with Transformers arxiv.org/abs/2509.24935… hse1032.github.io/GAT. The authors train latent-space transformer GANs up to XL/2 scale and report SotA 1-step class-conditional image generation results on ImageNet-256 after 40 epochs (*with REPA in the discriminator)
imagine losing your job to this
“AI is evil” Meanwhile, ChatGPT:
#K2Think (🏔️💭) is now live. We're proud of this model, which punches well above its weight: developed primarily for mathematical reasoning, it has shown itself to be quite versatile. It is fully deployed as a reasoning system at k2think.ai, so you can test it for yourself!
Introducing K2 Think - a breakthrough in advanced AI reasoning. Developed by MBZUAI’s Institute of Foundation Models and @G42ai, K2 Think delivers frontier reasoning performance at a fraction of the size of today’s largest systems. Smaller. Smarter. Open to the world…
📢Excited to share that I’ve joined @MBZUAI as an Assistant Professor of Computer Vision this fall! If you’re interested in CV4Science: building the next generation of foundation models & discovery tools for science, consider applying to MBZUAI. I’ll be recruiting PhD students!
ChatGPT for helping the Swedish Prime Minister:
The Swedish Prime Minister is using AI models "quite often" at his job. He says he uses them to get a "second opinion" and asks questions such as "what have others done?" At the moment he is not uploading any documents. IMO, when these models are capable of giving seemingly better…
🚀 Excited to share that I’ve joined the amazing team at @SkildAI! I’m already blown away by the energy, the quality of work, and the level of ambition here. The mission, vision, and results speak for themselves. This is just the beginning. More to come soon! 👇
Modern AI is confined to the digital world. At Skild AI, we are building towards AGI for the real world, unconstrained by robot type or task — a single, omni-bodied brain. Today, we are sharing our journey, starting with early milestones, with more to come in the weeks ahead.…
I have been long arguing that a world model is NOT about generating videos, but IS about simulating all possibilities of the world to serve as a sandbox for general-purpose reasoning via thought-experiments. This paper proposes an architecture toward that arxiv.org/abs/2507.05169
Some critical reviews and clarifications on different perspectives of world models. 🔥🌶️ Stay tuned for more on PAN — its position on the roadmap towards next-level intelligence, strong results, and open-sources❗️🧠
How about AI video _game_ generation 🕹️
Classic Unity/Unreal = hand-built assets + rigid solvers; neural game engines like Mirage can run interactive sandboxes from a prompt. 1993 GPUs solved graphics; physics and world assets stayed on the CPU. 2025 GPUs run giant diffusion models that have learnt the geometry,…
It's actually *playable* 🤩 We got GTA AI before GTA 6. Try it out here👇
As ICML 2025 is approaching, it's time to reheat this banger
VLMs are often used for planning across different world-modelling scenarios. Check out this recent work by @QiyueGao123, which helps highlight some of their limitations and strengths.
🤔 Have @OpenAI o3, Gemini 2.5, Claude 3.7 formed an internal world model to understand the physical world, or just align pixels with words? We introduce WM-ABench, the first systematic evaluation of VLMs as world models. Using a cognitively-inspired framework, we test 15 SOTA…
NeurIPS D&B track in a nutshell: (1) An LLM-generated benchmark dataset (2) used to test performance of LLMs (3) evaluated via LLM-as-a-judge
Sadly this is often the way these things work - contributions from small independent researchers get lost in the noise of big tech companies and prestigious universities.
I received a review like this five years ago. It’s probably the right time now to share it with everyone who wrote or got random discouraging reviews from ICML/ACL.