
Dan Kondratyuk

@hyperparticle

Research Scientist working on Video Generation @LumaLabsAI. Prev. #VideoPoet @GoogleAI. I'm a developer that enjoys solving puzzles, one piece at a time.

Pinned Tweet

Today we are launching Dream Machine, our first AI model that generates cinematic and fluid videos from text instructions and images. I generated this 1-minute 60 fps video entirely from our model. Try Dream Machine → lumalabs.ai/dream-machine Join us → lumalabs.ai/join


It took an incredible amount of energy to get here, but now we're ready to unleash Ray3, our new frontier video model with reasoning capabilities. I especially love the HDR video generations, the colors and lighting just pop in ways that make SDR look dull. Check it out!

This is Ray3. The world’s first reasoning video model, and the first to generate studio-grade HDR. Now with an all-new Draft Mode for rapid iteration in creative workflows, and state-of-the-art physics and consistency. Available now for free in Dream Machine.



I see they went to the Intel/Nvidia school of cooking charts

what in the chart crime



For those who ever wondered how video generation works, this video is a fantastic look into how these models operate from a geometric perspective

New video on the details of diffusion models: youtu.be/iv-5mZ_9CPY Produced by @welchlabs, this is the first in a small series of 3b1b this summer. I enjoyed providing editorial feedback throughout the last several months, and couldn't be happier with the result.
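For readers who want the core idea in code form, here is a minimal, hypothetical DDPM-style sampling loop in PyTorch: start from pure noise and repeatedly subtract the noise a network predicts. The `ToyDenoiser` network, schedule, and dimensions are stand-ins chosen for illustration only; they are not Luma's models or the video's code, and real video models denoise spatiotemporal latents with far larger networks.

```python
# Toy DDPM-style ancestral sampling sketch (illustrative only).
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in network that predicts the noise present in x_t at timestep t."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, x_t, t):
        t_embed = t.float().expand(x_t.shape[0], 1) / 1000.0  # crude timestep conditioning
        return self.net(torch.cat([x_t, t_embed], dim=-1))

@torch.no_grad()
def sample(model, dim=64, steps=1000):
    # Linear beta (noise) schedule, as in the original DDPM paper.
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, dim)  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = model(x, torch.tensor([t]))            # predict the noise component
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])  # posterior mean of x_{t-1}
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # re-inject noise
    return x

x0 = sample(ToyDenoiser())
print(x0.shape)  # torch.Size([1, 64])
```

The geometric picture from the video is the same loop: each step nudges the sample a little closer to the data manifold, with a shrinking amount of fresh noise added back in.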




Dan Kondratyuk reposted

Introducing Modify Video. Reimagine any video. Shoot it in post with director-grade control over style, character, and setting. Restyle expressive performances, swap entire worlds, or redesign the frame to your vision. Shoot once. Shape infinitely.


Anyone out there using LLMs/Cursor to build ML code effectively? Looking for helpful tips and tricks to write PyTorch code faster.


Dan Kondratyuk reposted

Even I am astounded at these results! Using @LumaLabsAI Dream Machine Camera controls with Ray2 text-to-video. And I'll teach anyone interested in learning how I do it over the coming months (it's not hard). Extends up to 30 seconds in this!


Dan Kondratyuk reposted

3D Chalk Art – A New Perspective. Step into the illusion with #DreamMachine, where flat images become dimensional scenes. Powered by #Ray2 Camera Motion Concepts.


A fun test of what's possible with camera control. Generated with Ray2 flash.


Dan Kondratyuk reposted

Camera Controls for @LumaLabsAI Ray2 AI video are out now and THEY ARE GLORIOUS! First impressions: Just look at these results! Are you not entertained!?


This particular release has me excited. I've been trying out new camera motions with Ray2 in Dream Machine and it made it so much more fun to use.

Introducing #Ray2 Camera Motion Concepts in #DreamMachine: 20+ precision-tuned camera motions designed for smooth cinematic control and great reliability. Concepts compose with each other, making hundreds of previously impossible camera moves possible. Available now.



Today, we release Inductive Moment Matching (IMM): a new pre-training paradigm breaking the algorithmic ceiling of diffusion models. Higher sample quality. 10x more efficient. Single-stage, single network, stable training. Read more: lumalabs.ai/news/imm


