
Selena Ling 凌子涵

@seleniumlzh

U of Toronto CS PhD at DGP | Prev. @AdobeResearch @NVIDIA : )

Pinned

Our #Siggraph25 work found a simple, nearly one-line change that greatly eases neural field optimization for a wide variety of existing representations. “Stochastic Preconditioning for Neural Field Optimization” w/ @merlin_ND @_AlecJacobson @nmwsharp

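As a rough illustration of the idea named in the title, here is a toy sketch, not code from the paper: querying the field at Gaussian-perturbed coordinates, which in expectation evaluates a blurred version of the field. The step-function "field" and all names below are hypothetical stand-ins for a learned representation; the perturbation is the "nearly one-line change".

```python
import numpy as np

def field(x):
    # Toy "neural field": a hard step, standing in for a learned SDF/occupancy.
    return np.where(x < 0.0, -1.0, 1.0)

def query(x, sigma, n=1024, rng=None):
    """Stochastically preconditioned query: evaluate the field at
    Gaussian-jittered positions. Averaged over samples, this equals the
    field convolved with a Gaussian of width sigma; during optimization
    sigma would be annealed toward zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal((n,) + np.shape(x))
    return field(x + sigma * noise).mean(axis=0)  # the one-line change: x + sigma * noise

# Near the discontinuity the smoothed query is soft; far away it matches the field.
print(query(0.0, sigma=0.1))  # close to 0: averaged across the step
print(query(1.0, sigma=0.1))  # close to 1: far from the step, effectively unchanged
```

In training, the same jitter would be applied to the sample positions fed to the loss, so gradients see the smoothed landscape early on.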

Selena Ling 凌子涵 reposted

📢 Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation

Got only one or a few images and wondering if recovering the 3D environment is a reconstruction or generation problem? Why not do it with a generative reconstruction model! We show that a…


Selena Ling 凌子涵 reposted

📢📢📢 3D Gaussian Flats @ #NeurIPS2025
A hybrid 2D/3D representation that reconstructs photorealistic scenes.
🔗 Project: theialab.github.io/3dgs-flats
📄 ArXiv: arxiv.org/abs/2509.16423
💻 Code: github.com/theialab/3dgs-…


Selena Ling 凌子涵 reposted

Season 1 of Toronto School of Foundation Modelling kicks off this Thursday at New Stadium!!! 60 people will be attending weekly sessions for 3 months, learning to build Foundation Models from scratch. Around 10 guest speakers (more to come) will be flying to Toronto to talk…

Continuing to press forward with the range and depth of learning opportunities. This week we have several workshops, meet-ups, a deep-dive seminar, the beginning of a new lecture series, as well as an exhibit happening at New Stadium. Links below.

[Quoted tweet image from @newsystems_]


Selena Ling 凌子涵 reposted

I’m excited to announce that our paper, “Learning Riemannian Metrics for Interpolating Animations,” has been accepted to GSI 2025! 🧵 co-authored with @vm2358 at @UofT and @ninamiolane at the @UCSB @geometric_intel lab! gi.ece.ucsb.edu/node/217

[Tweet image from @helloyesimsarah]

😍

Every lens leaves a blur signature—a hidden fingerprint in every photo. In our new #TPAMI paper, we show how to learn it fast (5 mins of capture!) with Lens Blur Fields ✨ With it, we can tell apart ‘identical’ phones by their optics, deblur images, and render realistic blurs.

[Tweet image from @estheroate]


Selena Ling 凌子涵 reposted

“Everyone knows” what an autoencoder is… but there's an important complementary picture missing from most introductory material. In short: we emphasize how autoencoders are implemented—but not always what they represent (and some of the implications of that representation).🧵

[Tweet image from @keenanisalive]
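The "how it's implemented" picture the thread contrasts against can be made concrete in a few lines. This is a hypothetical minimal linear autoencoder in NumPy, not from the thread: an encoder compresses to a low-dimensional code, a decoder reconstructs, and gradient descent minimizes reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 8, 2, 256
# Data living near a 2-D subspace of R^8, so a rank-2 bottleneck suffices.
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))

W_enc = rng.standard_normal((d, k)) * 0.1   # encoder: R^8 -> R^2
W_dec = rng.standard_normal((k, d)) * 0.1   # decoder: R^2 -> R^8

def loss(X, W_enc, W_dec):
    R = X @ W_enc @ W_dec - X               # reconstruction residual
    return (R ** 2).mean()

lr = 0.01
initial = loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                           # codes
    R = Z @ W_dec - X                       # residual
    W_dec -= lr * 2 * Z.T @ R / (n * d)     # gradient of mean squared residual
    W_enc -= lr * 2 * X.T @ (R @ W_dec.T) / (n * d)
print(initial, loss(X, W_enc, W_dec))       # reconstruction error drops
```

What this picture leaves out, as the thread argues, is what the learned code represents: here the encoder/decoder pair approximates a projection onto the data's 2-D subspace, and that geometric reading is the part most introductions skip.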

Selena Ling 凌子涵 reposted

Check out our new paper on robust motion segmentation! Wanna run your SfM pipeline on dynamic scenes? Consider using our RoMo masks to get improvements!! 🚀

📢📢📢 RoMo: Robust Motion Segmentation Improves Structure from Motion
romosfm.github.io
arxiv.org/pdf/2411.18650
TL;DR: boost your SfM pipeline on dynamic scenes. We use epipolar cues + SAMv2 features to find robust masks for moving objects in a zero-shot manner. 🧵👇



Selena Ling 凌子涵 reposted

For folks in the @siggraph community: You may or may not be aware of the controversy around the next #SIGGRAPHAsia location, summarized here: cs.toronto.edu/~jacobson/webl… If you're concerned, consider signing this letter: docs.google.com/document/d/1ZS… via this form docs.google.com/forms/d/e/1FAI…

[Tweet image from @keenanisalive]

Selena Ling 凌子涵 reposted

Total Pixel Space, which won the Grand Prix at this year's AIFF, is a wonderful video essay and, by the way, one of the clearest descriptions of universal simulation (as search in the space of all possible universes) youtube.com/watch?v=zpAeyg…

[Tweet card from @agermanidis: "Total Pixel Space", youtube.com]


Selena Ling 凌子涵 reposted

Our work was featured by MIT News today! Had so much fun working on this project with Silvia Sellán, Natalia Pacheco-Tallaj and @JustinMSolomon. Can't wait to present it at SIGGRAPH this summer! news.mit.edu/2025/animation…


Selena Ling 凌子涵 reposted

📢📢📢 Neural Inverse Rendering from Propagating Light 💡 Our CVPR Oral introduces the first method for multiview neural inverse rendering from videos of propagating light, unlocking applications such as relighting light propagation videos, geometry estimation, or light…

