Bayesian Methods Research Group
@bayesgroup
Research in Bayesian Deep Learning, Reinforcement Learning, Optimization, Structured Prediction, Drug Discovery and more
1/ Can we efficiently learn the destruction process of diffusion samplers? Can we learn not just the drift, but also the variance for all transition kernels? – We answer YES in our recent paper “Adaptive Destruction Processes for Diffusion Samplers” (Oral at NeurIPS 2025 FPI…
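The "learn the variance too" question has a compact illustration. Below is a minimal sketch, not the paper's construction: a Gaussian transition kernel whose drift and log-variance are both predicted by one network, so the variance of every transition kernel is trainable alongside the drift. All names and the parameterization are illustrative assumptions.

import torch
import torch.nn as nn

class LearnableKernel(nn.Module):
    # Hypothetical Gaussian transition kernel p(x_{t+1} | x_t) in which
    # the drift AND the variance are learned; a sketch of the general
    # idea, not the parameterization used in the paper.
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * dim),  # -> (drift, log-variance)
        )

    def forward(self, x, t):
        # x: (batch, dim), t: (batch, 1) time embedding
        h = self.net(torch.cat([x, t], dim=-1))
        drift, log_var = h.chunk(2, dim=-1)
        std = (0.5 * log_var).exp()
        x_next = x + drift + std * torch.randn_like(x)
        # Log-density of the sampled transition, the quantity that
        # training objectives for samplers typically need.
        log_prob = torch.distributions.Normal(x + drift, std).log_prob(x_next).sum(-1)
        return x_next, log_prob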
Check out our new paper!
(1/n) The usual assumption in GFlowNet environments is acyclicity. Have you ever wondered if it can be relaxed? Does the existing GFlowNet theory translate to the non-acyclic case? Is efficient training possible? We shed new light on these questions in our latest work! @icmlconf
Check out our new work!
🚨 New paper alert! 🚨 Our new paper on ArXiv: "DreamBooth DPO: Controlled Optimization of Personalized Diffusion Models" It addresses the core trade-off in personalized T2I: concept fidelity vs. prompt alignment, without any human-curated data 👉 arxiv.org/abs/2505.20975 1/5
Check out our new work!
🚨 New paper alert! 🚨 Our new paper on ArXiv: "ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion Models". It tackles a key challenge in diffusion models: aligning with human preferences without collapsing diversity 👉 arxiv.org/abs/2505.22569 1/5
1/ GFlowNets are known for training a forward policy to generate complex objects step by step. However, an equally important piece specific to the GFlowNet paradigm is a backward policy, which undoes these steps and plays a crucial role in training.
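The backward policy's role in training is easiest to see in the trajectory balance objective, where its log-probabilities enter the loss symmetrically with the forward policy's. A minimal sketch with assumed shapes (log_Z is a learned scalar; per-step log-probabilities are summed along each complete trajectory):

import torch

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    # Trajectory balance: for every complete trajectory,
    # log Z + sum_t log P_F(s_{t+1} | s_t) should equal
    # log R(x) + sum_t log P_B(s_t | s_{t+1}).
    # log_pf, log_pb: (batch, T); log_reward: (batch,).
    lhs = log_Z + log_pf.sum(dim=-1)
    rhs = log_reward + log_pb.sum(dim=-1)
    return ((lhs - rhs) ** 2).mean()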
👨‍💼 Neural Flow Diffusion Models at #NeurIPS2024 tomorrow! Discover how to build learnable noising processes for straight-line generative trajectories end-to-end and without simulations! 🤯 📍 West Ballroom A-D #6809 ⏰ Fri 13 Dec 4:30 pm — 7:30 pm 🔗 neurips.cc/virtual/2024/p…
🔥 Excited to share our new work on Neural Flow Diffusion Models — a general, end-to-end, simulation-free framework that works with arbitrary noising processes and even enables learning them! 📜: arxiv.org/abs/2404.12940 🧵 1/11
Check out our new paper! To be presented at #NeurIPS2024 by @KateLobacheva this Friday (poster #2408 / Poster Session 5 East / 13 Dec 11 am – 2 pm PST)
Starting training with a large learning rate benefits generalization—but why? In our new #NeurIPS2024 paper, we investigate its role in navigating the loss landscape and its effect on feature learning! 1/7 Paper: arxiv.org/abs/2410.22113 Poster: nips.cc/virtual/2024/p…
Did you know that networks trained with different learning rates extract different features (and a different number of them!) from the data? Come by our poster at HiLD Workshop #ICML2024 tomorrow to discuss it with @irsadrtdinov! Paper: openreview.net/forum?id=IID2D… 1/6
I will be presenting our NeurIPS-2023 paper arxiv.org/abs/2303.03374 at @ml_collective this Friday, March 8, 10am PT / 7pm CET! If you haven't decided yet whether to stay in the pre-train basin or not, you definitely need to see this talk!
Check out our new paper!
🌟 News from the GFlowNet world: our paper “Generative Flow Networks as Entropy-Regularized RL” was honored with oral presentation at #AISTATS2024! Long story short, our result can be described by this picture.
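In brief, and paraphrasing the result rather than restating it precisely: GFlowNet training can be cast as entropy-regularized (soft) RL in an MDP whose intermediate rewards come from the backward policy and whose terminal reward is the log-reward, so that the optimal soft policy samples terminal objects x with probability proportional to R(x). In common GFlowNet notation:

\max_{\pi}\; \mathbb{E}_{\tau \sim \pi}\Big[\log R(x) + \sum_{t} \log P_B(s_t \mid s_{t+1}) - \sum_{t} \log \pi(s_{t+1} \mid s_t)\Big],
\qquad
\pi^\star(\tau) \propto R(x)\,\prod_{t} P_B(s_t \mid s_{t+1}).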
At #NeurIPS2023? Come check out our latest work!
Large learning rates improve generalization, but are they all beneficial? The short answer is No, for more details check out our paper at the #NeurIPS2023 Mathematics of Modern Machine Learning (M3L) Workshop! Paper: arxiv.org/abs/2311.11303 1/4
Can we improve ensembles in the transfer learning setup by exploring the target task loss landscape? Find out in our new #NeurIPS2023 paper! Joint work with Ildus Sadrtdinov, Dmitrii Pozdeev, and Dmitry Vetrov. Paper: arxiv.org/abs/2303.03374 1/7
Check out our newest paper!
📢 Exciting News! Our paper on StyleDomain for One-shot and Few-shot Domain Adaptation, accepted to ICCV 2023, is out! 📝🔥 📄 Paper Link: arxiv.org/abs/2212.10229 🔗 Source Code: github.com/AIRI-Institute… #ICCV2023 #GANs #StyleDomain #DomainAdaptation 1/N 🧵
🎉 Celebrating Dmitry Vetrov & team (Pockonechnyy, @MNakhodnov, Elistratov, @ai_alanov, @MeshchaninovV) for being named #CVPR2023 Outstanding Reviewers. With exceptional reviews, they stood out among 7000+ contributors. Kudos for the invaluable work in advancing the science!
Check out our new paper on Diffusion Models!
Diffusion models are only easy to define with Gaussian or Categorical distributions. Feels very limiting! We show how to define diffusion-like models with any distribution from the exponential family with Star-Shaped DDPMs at the same cost as DDPM! arxiv.org/abs/2302.05259 TL;DR👇
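"Star-shaped" means every noisy variable x_t is conditioned directly on the clean data x_0 rather than on x_{t-1}, which is what frees the noise distribution to be any exponential-family member. A toy sketch with a Beta forward process; the concentration schedule and clipping here are made-up illustrations, not the paper's choices:

import torch

def star_shaped_forward(x0, t, T, max_conc=50.0):
    # Star-shaped forward process: sample x_t from q(x_t | x_0) directly,
    # with no Markov chain between noise levels. Here q is a Beta
    # distribution (an exponential-family member) for data in (0, 1);
    # the schedule is hypothetical.
    conc = 1.0 + max_conc * (T - t) / T  # concentrated near t = 0, diffuse at t = T
    a = conc * x0 + 1e-3                 # small offset keeps parameters positive
    b = conc * (1.0 - x0) + 1e-3
    return torch.distributions.Beta(a, b).sample()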