
Bayesian Methods Research Group

@bayesgroup

Research in Bayesian Deep Learning, Reinforcement Learning, Optimization, Structured Prediction, Drug Discovery and more

Bayesian Methods Research Group reposted

1/ Can we efficiently learn the destruction process of diffusion samplers? Can we learn not just the drift, but also the variance for all transition kernels? – We answer YES in our recent paper “Adaptive Destruction Processes for Diffusion Samplers” (Oral at NeurIPS 2025 FPI…
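To make the "drift and variance" idea concrete, below is a minimal PyTorch-style sketch of a Gaussian transition kernel whose drift and per-step log-variance are both network outputs. This illustrates the general idea only; it is not the paper's implementation, and the names and parameterization are assumptions.

    import torch
    import torch.nn as nn

    class LearnableTransitionKernel(nn.Module):
        # Gaussian kernel q(x_{t+1} | x_t) for one step of a destruction
        # process; both the drift and the log-variance come from a small
        # network (illustrative parameterization, not the paper's).
        def __init__(self, dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim + 1, hidden), nn.SiLU(),
                nn.Linear(hidden, 2 * dim),
            )

        def forward(self, x, t):
            # x: (batch, dim), t: (batch, 1) scalar time feature.
            drift, log_var = self.net(torch.cat([x, t], dim=-1)).chunk(2, dim=-1)
            return drift, log_var

        def sample(self, x, t):
            drift, log_var = self.forward(x, t)
            return x + drift + (0.5 * log_var).exp() * torch.randn_like(x)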


Check out our new paper!

(1/n) The usual assumption in GFlowNet environments is acyclicity. Have you ever wondered if it can be relaxed? Does the existing GFlowNet theory translate to the non-acyclic case? Is efficient training possible? We shed new light on these questions in our latest work! @icmlconf



Check out our new work!

🚨 New paper alert! 🚨 Our new paper on ArXiv: "DreamBooth DPO: Controlled Optimization of Personalized Diffusion Models" It addresses the core trade-off in personalized T2I: concept fidelity vs. prompt alignment, without any human-curated data 👉 arxiv.org/abs/2505.20975 1/5
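For background, DPO-style training minimizes a preference loss between a preferred and a dispreferred sample, measured against a frozen reference model. The sketch below is the generic DPO objective (Rafailov et al., 2023), not necessarily the paper's diffusion-adapted variant; the log-probability inputs and beta are placeholders.

    import torch.nn.functional as F

    def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
        # Generic DPO: widen the likelihood margin of the preferred
        # ("winner") sample over the dispreferred ("loser") one,
        # relative to a frozen reference model.
        margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
        return -F.logsigmoid(margin).mean()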



Check out our new work!

🚨 New paper alert! 🚨 Our new paper on ArXiv: "ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion Models". It tackles a key challenge in diffusion models: aligning with human preferences without collapsing diversity 👉 arxiv.org/abs/2505.22569 1/5



Check out our new paper! Catch the presentation at #ICLR2025 by @gritsaev!

1/ GFlowNets are known for training a forward policy to generate complex objects step by step. However, an equally important piece specific to the GFlowNet paradigm is a backward policy, which undoes these steps and plays a crucial role in training.
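For readers new to GFlowNets, the widely used trajectory-balance objective (Malkin et al., 2022) shows exactly where the backward policy enters: P_B appears on equal footing with the forward policy P_F. A minimal sketch, assuming the per-step log-probabilities have already been computed along one trajectory:

    import torch

    def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
        # Trajectory balance:
        #   (log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x) - sum_t log P_B(s_t|s_{t+1}))^2
        # log_pf_steps / log_pb_steps: 1-D tensors of per-step log-probs of
        # the forward and backward policies; log_reward: log R(x).
        residual = log_Z + log_pf_steps.sum() - log_reward - log_pb_steps.sum()
        return residual.pow(2)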



Bayesian Methods Research Group reposted

👨‍💼Neural Flow Diffusion Models at #NeurIPS2024 tomorrow! Discover how to build learnable noising processes for straight-line generative trajectories end-to-end and without simulations!🤯 📍West Ballroom A-D #6809 ⏰Fri 13 Dec 4:30 pm — 7:30 pm 🔗neurips.cc/virtual/2024/p…


🔥 Excited to share our new work on Neural Flow Diffusion Models — a general, end-to-end, simulation-free framework that works with arbitrary noising processes and even enables learning them! 📜: arxiv.org/abs/2404.12940 🧵 1/11
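Roughly, "simulation-free" means the forward marginal q(z_t | x) can be sampled in one shot at any time t, rather than by simulating the noising chain step by step, and in NFDM that marginal is itself learnable. A toy sketch of a one-shot, learnable Gaussian marginal; the actual NFDM parameterization is more general, and 'net' is an assumed module mapping (x, t) to 2*dim outputs:

    import torch

    def sample_forward_marginal(net, x, t):
        # One-shot z_t ~ q(z_t | x) via reparameterization:
        # z_t = mu(x, t) + sigma(x, t) * eps; mu and sigma come from the
        # trainable network `net`, so the noising process is learnable
        # and never needs to be simulated sequentially.
        mu, log_sigma = net(torch.cat([x, t], dim=-1)).chunk(2, dim=-1)
        return mu + log_sigma.exp() * torch.randn_like(x)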



Check out our new paper! To be presented at #NeurIPS2024 by @KateLobacheva this Friday (poster #2408 / Poster Session 5 East / 13 Dec 11 am – 2 pm PST)

Starting training with a large learning rate benefits generalization—but why? In our new #NeurIPS2024 paper, we investigate its role in navigating the loss landscape and its effect on feature learning! 1/7 Paper: arxiv.org/abs/2410.22113 Poster: nips.cc/virtual/2024/p…



Bayesian Methods Research Group reposted

Did you know that networks trained with different learning rates extract different features (and a different number of them!) from the data? Come by our poster at HiLD Workshop #ICML2024 tomorrow to discuss it with @irsadrtdinov! Paper: openreview.net/forum?id=IID2D… 1/6


Bayesian Methods Research Group reposted

I will be presenting our NeurIPS-2023 paper arxiv.org/abs/2303.03374 at @ml_collective this Friday, March 8, 10am PT / 7pm CET! If you haven't decided yet whether to stay in the pre-train basin or not, you definitely need to see this talk!


Check out our new paper!

🌟 News from the GFlowNet world: our paper “Generative Flow Networks as Entropy-Regularized RL” was honored with an oral presentation at #AISTATS2024! Long story short, our result can be described by this picture.
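The picture from the tweet is not reproduced here; in words, the result is that GFlowNet training can be recast as entropy-regularized (soft) RL on a modified MDP whose reward is assembled from the backward policy. A rough LaTeX sketch of the shape of the correspondence, from memory of the paper rather than a verbatim statement:

    \max_{\pi} \; \mathbb{E}_{\tau \sim \pi} \Big[ \sum_{t} \big( r(s_t, s_{t+1}) + \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \big) \Big],
    \qquad r(s, s') = \log P_B(s \mid s') + \mathbb{1}\{s' = s_f\} \, \log R(s),

so that the soft-optimal policy of this modified MDP coincides with the trained GFlowNet forward policy; see the paper for the precise statement and conditions.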



At #NeurIPS2023? Come check out our latest work!

Large learning rates improve generalization, but are they all beneficial? The short answer is no; for more details, check out our paper at the #NeurIPS2023 Mathematics of Modern Machine Learning (M3L) Workshop! Paper: arxiv.org/abs/2311.11303 1/4



Bayesian Methods Research Group reposted

Can we improve ensembles in the transfer learning setup by exploring the target task loss landscape? Find out in our new #NeurIPS2023 paper! Joint work with Ildus Sadrtdinov, Dmitrii Pozdeev, and Dmitry Vetrov. Paper: arxiv.org/abs/2303.03374 1/7


Check out our newest paper!

📢 Exciting News! Our paper on StyleDomain for One-shot and Few-shot Domain Adaptation, accepted to ICCV 2023, is out! 📝🔥 📄 Paper Link: arxiv.org/abs/2212.10229 🔗 Source Code: github.com/AIRI-Institute… #ICCV2023 #GANs #StyleDomain #DomainAdaptation 1/N 🧵



🎉 Celebrating Dmitry Vetrov & team (Pockonechnyy, @MNakhodnov, Elistratov, @ai_alanov, @MeshchaninovV) for being named #CVPR2023 Outstanding Reviewers. With exceptional reviews, they stood out among 7000+ contributors. Kudos for their invaluable work in advancing science!


Check out our new paper on Diffusion Models!

Diffusion models are only easy to define with Gaussian or Categorical distributions. Feels very limiting! We show how to define diffusion-like models with any distribution from the exponential family via Star-Shaped DDPMs, at the same cost as DDPM! arxiv.org/abs/2302.05259 TL;DR👇
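The structural idea, roughly: instead of a Markov chain z_1 → z_2 → …, every noisy variable is drawn from q(z_t | x_0) directly (a star-shaped graph), so that conditional can be any exponential-family distribution. An illustrative toy with a Beta marginal for data in [0, 1]; the schedule and parameterization here are made up for the example and are not the paper's:

    import torch
    from torch.distributions import Beta

    def star_shaped_forward(x0, t, concentration=50.0):
        # Star-shaped noising: sample z_t ~ q(z_t | x0) in one shot from a
        # Beta whose mean interpolates from x0 (at t=0) to 0.5 (at t=1).
        # x0 and t are tensors with values in [0, 1]; purely illustrative.
        mean = ((1 - t) * x0 + 0.5 * t).clamp(1e-3, 1 - 1e-3)
        return Beta(concentration * mean, concentration * (1 - mean)).sample()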


