Chandar Lab
@ChandarLab
Sarath Chandar's research group at @polymtl and @Mila_Quebec. Our research focuses on lifelong learning, DL, RL, and NLP.
Please join us this Monday (Aug 19th) for a two-day symposium highlighting the research done at @ChandarLab over the past year! Schedule: chandar-lab.github.io/CRLSymposium/2… Registration: crl-symposium-2024.eventbrite.com (with remote and in-person options)
Are self-explanations from large language models faithful? We answer this question at ACL 2024. Where: A1. When: August 12th, 17:45-18:45. arXiv: 2401.07927.
I am very proud and happy to announce that our MSc graduate Ali Rahimi received the Best Master's Thesis Award from the Canadian AI Association for 2024! Ali's master's thesis shows that SOTA MBRL methods like Dreamer and MuZero are not adaptive, and he also has a fix!
We are very pleased to announce Ali Rahimi Kalahroudi (Université de Montréal) as the recipient of the CAIAC 2024 Best Master's Thesis Award. Ali's thesis was "Towards Adaptive Deep Model-Based Reinforcement Learning." caiac.ca/en/best-msc-aw…
Check out one of our latest lab papers
🚨Is solving complex tasks still challenging for your RL agent? 👑 Subgoal Distillation: A Method to Improve Small Language Agents Paper: arxiv.org/abs/2405.02749 w/ @EliasEskin @Cote_Marc @apsarathchandar
Our recent AAAI paper shows that certain attention heads in transformers are responsible for bias and pruning them improves fairness! In collaboration with Goncalo Mordido, @SamiraShabanian , @ioanauoft, and @apsarathchandar Paper 📰: arxiv.org/pdf/2312.15398…
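For a sense of what the intervention looks like in practice, here is a minimal sketch using the Hugging Face `transformers` head-pruning API. This is not the paper's procedure: the layer/head indices are placeholders standing in for heads that a bias-attribution analysis would actually flag.

```python
# Minimal sketch: pruning specific attention heads from a pretrained
# transformer with the Hugging Face `transformers` API. The indices
# below are hypothetical placeholders, not the heads found in the paper.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Suppose a bias analysis flagged heads 2 and 7 in layer 10 and head 4
# in layer 11 as contributing to biased predictions.
heads_to_prune = {10: [2, 7], 11: [4]}
model.prune_heads(heads_to_prune)  # removes those heads' weights in place
```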
🎉 Exciting start to 2024 for our lab! 🚀 Two papers accepted at ICLR, with one ranking in the top 1.2%! Plus, a publication in Digital Discovery journal. We are proud of our team's hard work and innovative research. #ICLR2024 #ResearchExcellence #MachineLearning #crystalDesign
At @ChandarLab, we are happy to announce the launch of our assistance program to provide feedback for members of communities underrepresented in AI who want to apply to high-profile graduate programs. Want feedback? Details: chandar-lab.github.io/grad-app-help/. Deadline: Nov 15!
Can large language models consolidate world knowledge? The answer turns out to be "NO". I am very excited to present to you our @emnlpmeeting 2023 paper (main track) which studies this important limitation of LLMs. Work led by my amazing PhD student @GabrielePrato!
If you want to learn more about the recent advances in deep learning, reinforcement learning, and NLP that have come out of my lab in the past year, consider attending our lab's annual research symposium on Aug 8 and 9: chandar-lab.github.io/CRLSymposium/2… You can join remotely too!
It's time to mark your calendars! 🗓️ The official schedule for #CoLLAs2023 is now up at lifelong-ml.cc/Conferences/20…. Brace yourselves for a thrilling lineup of posters, tutorials, orals, talks, unconferences, and a dinner. See you in Montreal! 🌞🧠 Register at lifelong-ml.cc
Introducing an improved adaptive optimizer: Adam with Critical momenta (Adam+CM)! Unlike traditional Adam, it promotes exploration that paves the way to flatter minima and leads to better generalization. Link to our paper: arxiv.org/abs/2307.09638 Work led by: @pranshumalviya8
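To make the idea concrete, here is a toy Python sketch of one plausible reading of the abstract: keep a small buffer of large-norm ("critical") momentum vectors and mix their mean into the Adam step. The buffer criterion and the aggregation are assumptions made for illustration; the actual Adam+CM update rule is the one specified in the paper.

```python
import torch

def adam_cm_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, buffer_size=5):
    """Toy Adam step with a buffer of past momenta ("critical momenta").
    A hedged sketch of the idea only; see the paper for the real rule."""
    if not state:  # first call: initialize moments and the buffer
        state.update(m=torch.zeros_like(param), v=torch.zeros_like(param),
                     buffer=[], t=0)
    state["t"] += 1
    t = state["t"]
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad * grad
    # Assumption: keep the largest-norm momenta seen so far and average
    # them into the step; the paper's exact aggregation may differ.
    state["buffer"].append(state["m"].clone())
    state["buffer"].sort(key=lambda m: m.norm().item(), reverse=True)
    del state["buffer"][buffer_size:]
    m_agg = torch.stack(state["buffer"]).mean(dim=0)
    m_hat = (state["m"] + m_agg) / (1 - beta1 ** t)  # bias-corrected
    v_hat = state["v"] / (1 - beta2 ** t)
    param.data -= lr * m_hat / (v_hat.sqrt() + eps)

# Usage: keep one state dict per parameter; after loss.backward(),
# call adam_cm_step(p, p.grad, state_p) for each parameter p.
```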
Want to know more about what is happening at @ChandarLab? Please join our annual research symposium (chandar-lab.github.io/CRLSymposium/) virtually or in person (Montreal) this August 11! You will hear my students talking about lifelong learning, reinforcement learning, NLP, and DL!
This is one of our efforts to promote research in lifelong learning. We are also organizing a new focused conference on Lifelong Learning (@CoLLAs_Conf) which is happening next month. You can register for the conference here: lifelong-ml.cc 4/4
I am very excited to release this primer on lifelong supervised learning: arxiv.org/abs/2207.04354. Lifelong learning is one of the most promising learning paradigms to achieve artificial general intelligence. 1/n
We are excited to invite submissions to the Workshop track of CoLLAs 2022! The workshop track has no proceedings and all accepted papers will be presented in a poster session. More details are available at lifelong-ml.cc/call_workshop Submission deadline: May 19, 2022, 11:59 pm (AoE)
We hope you are doing well during these tough times. The deadlines for CoLLAs have been extended as follows: abstract deadline: March 7th (AoE); paper deadline: March 10th (AoE). We look forward to seeing your submission on lifelong learning! lifelong-ml.cc/call
The abstract deadline for CoLLAs is in 5 days, midnight on March 1st (AoE). We look forward to seeing your submissions on lifelong learning! lifelong-ml.cc/call
Only one week left until the application deadline! In addition to the listed topics (memory augmented networks, learning through language interaction, optimization, lifelong learning, RL), I am also looking for MSc/PhD students to work at the intersection of ML and drug discovery.
I have multiple open MSc/PhD positions on memory augmented neural nets, RL, Lifelong Learning, NLP for Fall 2022 at @ChandarLab / @Mila_Quebec /@polymtl! Details: chandar-lab.github.io/join/ Applications due Dec 1st: mila.quebec/en/cours/super…
I am very excited to release the recordings of my Reinforcement Learning lectures! You can watch the first-week lectures here: youtube.com/playlist?list=…. If you want to follow the course, readings, lecture notes, and assignments will be made available at chandar-lab.github.io/INF8953DE/
There are two motivations for interpretability: "scientific understanding" and "trust in AI". Unfortunately, these are sometimes conflated, which leads to inappropriate judgment of papers. A 🧵 based on our survey, "Post-hoc Interpretability for Neural NLP".
Our new survey on post-hoc interpretability methods for NLP is out! This covers 19 specific interpretability methods, cites more than 100 publications, and took 1 year to write. I'm very happy this is now public, do consider sharing. Read arxiv.org/abs/2108.04840. A thread 🧵 1/6
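For readers new to the area, here is a minimal sketch of one representative post-hoc method of the kind such surveys cover: gradient × input saliency over token embeddings. The model name and input sentence are illustrative choices, not details from the survey.

```python
# Minimal sketch: gradient x input token saliency for a sentiment model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tok("The movie was surprisingly good.", return_tensors="pt")
embeds = model.get_input_embeddings()(inputs["input_ids"])
embeds.retain_grad()  # keep gradients on this non-leaf tensor
logits = model(inputs_embeds=embeds,
               attention_mask=inputs["attention_mask"]).logits
logits[0, logits.argmax()].backward()  # grad of the predicted class score

saliency = (embeds.grad * embeds).sum(-1).squeeze()  # gradient x input
for token, score in zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]),
                        saliency.tolist()):
    print(f"{token:>12}  {score:+.4f}")
```

Gradient × input is one of the simpler methods in this family, and its known failure modes are part of what such surveys discuss.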