#acl2020nlp search results
🚨 @emnlp2020 preprint alert! In the sequel to our #acl2020nlp paper, we propose to make English NLP models robust to inflectional variation (both #adversarial and dialectal/natural) by encoding the semantic (base form) and grammatical information (inflection) separately. 1/
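The idea of encoding a word's base form and its inflection as separate tokens can be sketched in a few lines. This is a toy illustration only: the lemma table and the tag inventory below are made-up stand-ins, not the paper's actual encoder.

```python
# Toy sketch: represent each inflected word as its base form plus a
# separate inflection-tag token, so the semantic and grammatical
# information are encoded independently. The lemma table and tags
# here are illustrative, not the paper's real resources.

LEMMAS = {
    "ran": ("run", "VBD"),   # past tense
    "runs": ("run", "VBZ"),  # 3rd-person singular present
    "dogs": ("dog", "NNS"),  # plural noun
}

def encode(tokens):
    """Split each token into base form + inflection tag token."""
    out = []
    for tok in tokens:
        base, tag = LEMMAS.get(tok, (tok, None))
        out.append(base)
        if tag is not None:
            out.append(f"<{tag}>")
    return out

print(encode(["the", "dogs", "ran"]))
# ['the', 'dog', '<NNS>', 'run', '<VBD>']
```

A model trained on such sequences sees the same base tokens regardless of inflectional variation, which is the intuition behind robustness to both adversarial and dialectal inflection changes.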
Check out Facebook AI workshops #ACL2020nlp. You can also learn about the research we're presenting here: ai.facebook.com/blog/facebook-…
Best demo award at ACL2020 (Honorable Mention) Prta: A System to Support the Analysis of Propaganda Techniques in the News Paper: aclweb.org/anthology/2020… Demo: tanbih.org/prta #acl2020nlp @giodsm
On the last day of #ACL2020nlp, catch Facebook AI researchers at the following workshops. Check out our papers here: ai.facebook.com/blog/facebook-…
Here’s a snapshot of today’s #ACL2020nlp workshops featuring Facebook AI research scientists. You can also learn about the research we're presenting here: ai.facebook.com/blog/facebook-…
We had a great time at #acl2020nlp last week. Big thanks to the organizers and everyone who came and hung out with us in our AMA on Rocket Chat 🤗🚀 We learned a ton from the many amazing presentations 🤯 A few of our researchers highlighted some of their favorites 👇

Thread about five #acl2020nlp papers that haven’t gotten the hype they deserve:
You can join us today at 06-07 or 14-15 PDT to talk about On Forgetting to Cite Older Papers: An Analysis of the ACL Anthology virtual.acl2020.org/paper_main.699… or ask me questions here if you are not registered for the #acl2020nlp conference.
Hey #acl2020nlp. Want to keep having valuable conversations even after Rocket Chat is gone? We do too! This week we launched discuss.huggingface.co/c/research/, a section of our online forum dedicated specifically to research discussions. We'd love to see you there! 🤗🔬🚀 #openscience
I just published Highlights of ACL 2020 link.medium.com/SEadDqqd17 #acl2020nlp
Very happy, honoured and pleasantly surprised to be on the list of "Outstanding Reviewers" for #acl2020nlp! [Aim for next year: get in the list of "Authors" 😛] #NLProc #ACL2020
Fun Fact #3: People might be surprised to know that it's common to find wild foxes around the city. Some are so accustomed to people that they happily walk around people. As an example, @tcddublin has its resident fox called Sam: irishtimes.com/news/ireland/i… #acl2020nlp #NLProc
My wonderful sister @Kurfuerstin (not at #acl2020nlp) crocheted this cute octopus and allowed me to share it here! 🥰🥰🥰🐙 @emilymbender @alkoller
#acl2020nlp research stands to gain a lot from a global human rights lens! This Friday, we are holding a first-ever satellite session of our EMNLP Workshop on Online Abuse & Harms (@AbusiveLangWS) at #RightsCon, the biggest conference at the intersection of Human Rights and Tech
excited to present our paper on studying biases in sentence encoders at #acl2020nlp: web: virtual.acl2020.org/paper_main.488… code: github.com/pliang279/sent… also happy to take questions during the live Q&A sessions: July 7 (14:00-15:00, 17:00-18:00 EDT) w Irene, Emily, YC, @rsalakhu, LP
A paper that asks the right questions recently earned @EzraWu and @mdredze a Best Long Paper Award at #Repl4NLP at #acl2020nlp! Story: clsp.jhu.edu/2020/07/23/pap…
#acl2020nlp #acl2020en NLP COVID-19 Workshop (NLPCovid) is a last minute workshop at #acl2020nlp. I invite you to check out the paper "Jennifer for COVID-19: An NLP-Powered Chatbot Built for the People and by the People to Combat Misinformation" presented by Patricia Silveyra.
Fantastic work from @HJCH0 and @jonathanmay on improvisational dialogue agents at #acl2020nlp. It is a great corpus of ~68k "Yes, and..." dialogue pairs. As a topic near and dear to my heart, and the subject of my PhD work, I am happy & excited to see this data released. 1/6
Hey #acl2020nlp workshoppers! Stop by WNGT Friday for the results of the @duolingo STAPLE shared task... Live overview talk @ noon EDT: virtual.acl2020.org/workshop_W12.h… aclweb.org/anthology/2020… Chat + more info: acl2020.rocket.chat/channel/wngt20… sites.google.com/view/wngt20/pr… sharedtask.duolingo.com
Watching all the talks at a 1.5x minimum speed and jumping on zoom calls: "why are you talking so slowly? ooh snap, it's just normal speed... and I'm even slower than you..." #acl2020nlp
ACL ended two weeks ago and we enjoyed all the fruitful discussions there! We had 3 papers on Open IE: a benchmark, an annotation platform for such benchmarks & a multilingual OIE model. In case you didn't have a chance to visit our talks & posters, check out our papers🧵#acl2020nlp #NLProc
At the #acl2020nlp conference, @aormazabalo presented work on paraphrase generation.
Happy to have presented our work on paraphrase generation with parallel corpora at ACL! Check out the paper: aclanthology.org/2022.acl-long.… We characterize the implicit paraphrase similarity function underlying round-trip MT, and find it is susceptible to confounding translations.
📄 "Bag-of-Words vs. Graph vs. Sequence in Text Classification: Questioning the Necessity of Text-Graphs and the Surprising Strength of a Wide MLP" with @ansgarscherp #acl2020nlp #ACL2022 #NLProc Link: aclanthology.org/2022.acl-long.… 🧵⬇️
Hark! The #acl2020nlp interactive program is now available! Come for the program...stay for the viz! 2022.aclweb.org/interactive-co… #acl2022 #NLProc #ACLinDublin
Was tickled this morning to discover that @techreview ran a piece with detailed coverage of my #acl2020nlp paper with @alkoller technologyreview.com/2021/08/25/103… Neither of us were interviewed; quotes attributed to me are really from the paper.
We released our dialogue response selection test set with human scores, which is a part of our #acl2020nlp work. We hope this release benefits research on dialogue systems! @NlpTohoku #NLProc - paper: aclanthology.org/2020.acl-main.… - testset: github.com/cl-tohoku/eval…
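The evaluation-via-selection idea can be sketched with a minimal metric: for each test case, check whether the system's scores rank the ground-truth response highest among the candidates. The data below is invented for illustration; the released test set pairs candidates with human scores.

```python
# Minimal sketch of evaluation via response selection: a dialogue
# system scores a set of candidate responses, and we measure how
# often it ranks the gold response first. All data here is toy.

def selection_accuracy(cases):
    """cases: list of (scores, gold_index) pairs, where `scores`
    holds the system's score for each candidate response."""
    hits = sum(
        1 for scores, gold in cases
        if max(range(len(scores)), key=scores.__getitem__) == gold
    )
    return hits / len(cases)

cases = [
    ([0.1, 0.7, 0.2], 1),  # system's top pick (index 1) matches gold
    ([0.9, 0.3, 0.4], 2),  # system picks index 0, gold is index 2
]
print(selection_accuracy(cases))  # 0.5
```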
I’m happy to announce that my work with @jqk09a, @blankeyelephant, Jun Suzuki, and @inuikentaro at @NlpTohoku, “Evaluating Dialogue Generation Systems via Response Selection” was accepted in #acl2020nlp! #NLProc Our paper now in arxiv! arxiv.org/abs/2004.14302
I volunteered for #acl2020nlp and met a lot of people in #NLProc. As a result, six months later, I was elected as an officer for the NAACL exec board. Please join us this year to raise your visibility and the visibility of underrepresented authors. Thank you!
Do you want to be a live microblogging volunteer? Do you want to raise the visibility of #ACL2021NLP talks on different platforms and using different languages? The call for microblogging volunteers is out! 2021.aclweb.org/mybb/showthrea… #NLProc
Why do you think live-tweeting doesn't work with virtual conferences? I remember #acl2020nlp tweeting was pretty lively. I felt that with introduction of gather-town, it should have been better, but doesn't seem like that.
Our pick of the week: Rabeeh Karimi Mahabadi et al.'s #ACL2020nlp paper "End-to-End Bias Mitigation by Modelling Biases in Corpora". By @mgaido91 #nlproc @KarimiRabeeh @boknilev @JamieBHenderson
an interesting approach to keep models from basing decisions on wrong but simple shortcuts: aclweb.org/anthology/2020… Looking forward to more work like this to improve the robustness of our models! @fbk_mt
[1/4] “Masked Language Model Scoring” is in #acl2020nlp! Score sentences with any BERT variant via mask+predict (works w/ @huggingface). Improves ASR, NMT, acceptability. Paper: arxiv.org/abs/1910.14659 Code: github.com/awslabs/mlm-sc… (w/ @LiangDavis, @toannguyen177, Katrin K.)
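The mask+predict scoring loop is simple to sketch: mask each token in turn and sum the log-probabilities the model assigns to the true tokens (the pseudo-log-likelihood). To keep this self-contained, `toy_mlm_logprob` below is a stand-in stub, not a real BERT variant; the actual paper's code is in the linked mlm-scoring repo.

```python
import math

# Toy unigram probabilities standing in for a real masked LM's
# predictions. A real model would condition on the masked sentence.
UNIGRAM = {"the": 0.5, "cat": 0.2, "sat": 0.1}

def toy_mlm_logprob(masked_tokens, position, target):
    """Stub for a masked LM: log-probability of the true token at the
    masked position. Ignores context; purely for illustration."""
    return math.log(UNIGRAM.get(target, 0.01))

def pll_score(tokens, logprob_fn=toy_mlm_logprob):
    """Pseudo-log-likelihood: mask each position in turn and sum the
    log-probabilities of the true tokens."""
    total = 0.0
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        total += logprob_fn(masked, i, tok)
    return total

print(round(pll_score(["the", "cat", "sat"]), 3))  # -4.605
```

With a real masked LM plugged in for the stub, higher PLL means the sentence is more plausible to the model, which is what makes the score useful for reranking ASR or NMT hypotheses.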
Does Multilingual BERT share syntactic knowledge cross-lingually? In #acl2020nlp paper w/ @johnhewtt and @chrmanning, we visualize its syntactic structure & show it's applicable to a variety of human languages. Paper: arxiv.org/abs/2005.04511 Blog: ethanachi.com/multilingual-p… (1/4)
New #acl2020nlp paper "Logic-Guided Data Augmentation and Regularization for Consistent Question Answering"! We show SOTA QA models produce inconsistent predictions and introduce logic-guided data augmentation & consistency-based regularization. arxiv.org/abs/2004.10157 1/
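One kind of inconsistency the paper targets is symmetry in comparison questions: if a model answers "yes" to "Is A taller than B?", logical consistency demands "no" for the flipped question. The naive question parser and the prediction dictionary below are toy illustrations, not the paper's method.

```python
# Toy sketch of a symmetry-consistency check over yes/no comparison
# questions. The string manipulation assumes the rigid template
# "Is A <comparative> than B?" and is for illustration only.

def flip(question):
    """Swap the compared entities: 'Is A taller than B?' -> 'Is B taller than A?'"""
    words = question.rstrip("?").split()
    words[1], words[-1] = words[-1], words[1]
    return " ".join(words) + "?"

def symmetry_violations(predictions):
    """Count question pairs where the model gives the SAME answer to a
    question and its flipped version, which is logically inconsistent."""
    count = 0
    for q, ans in predictions.items():
        fq = flip(q)
        if fq in predictions and predictions[fq] == ans:
            count += 1
    return count // 2  # each inconsistent pair was counted twice

preds = {
    "Is alice taller than bob?": "yes",
    "Is bob taller than alice?": "yes",  # inconsistent with the above
}
print(symmetry_violations(preds))  # 1
```

Counting violations like this gives the diagnostic; the paper then goes further, using such logical constraints to augment data and regularize training.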
How to fill in the __blanks__ using language models 1. Download your favorite language model ❤️ 2. Fine-tune the model on infilling examples 🤖 3. Use the model to fill in any number of blanks in text! 😮😮😮 #StanfordNLP #ACL2020NLP @chrisdonahuey arxiv.org/abs/2005.05339
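Step 2 above needs training examples in an infilling format. A sketch of that data construction, in the spirit of the paper: blank out chosen spans in the context and append the removed spans after a separator. The exact token names and formatting below are illustrative, not guaranteed to match the released code.

```python
# Toy sketch of building an infilling training example: replace the
# chosen spans with a [blank] token and append the removed spans,
# each terminated by [answer], after a [sep] token.

def make_infilling_example(tokens, spans):
    """spans: list of (start, end) token-index pairs to blank out,
    given in order and non-overlapping."""
    context, answers = [], []
    prev = 0
    for start, end in spans:
        context.extend(tokens[prev:start])
        context.append("[blank]")
        answers.extend(tokens[start:end] + ["[answer]"])
        prev = end
    context.extend(tokens[prev:])
    return " ".join(context + ["[sep]"] + answers)

example = make_infilling_example(
    "she ate leftover pasta for lunch".split(), [(2, 4)]
)
print(example)
# she ate [blank] for lunch [sep] leftover pasta [answer]
```

Fine-tuning a standard left-to-right LM on strings like this teaches it to generate the answers conditioned on the blanked context, so at inference time it can fill any number of blanks.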
My first PhD paper!😀 (at #acl2020nlp, w. @mohitban47 @uncnlp) "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" We measure how 5 explanation methods (LIME, Anchor, Prototype, Decision Boundary, Composite) improve simulatability...1/5
We are excited to announce our keynote speakers Kathleen R. McKeown (Columbia) and Josh Tenenbaum (MIT)! acl2020.org/program/keynot… #acl2020nlp #nlproc
The best paper award of #acl2020nlp plus one honorable mention goes to work that rethinks our experiment and evaluation methodology. Proud of the NLP research community which is capable of self-reflection and cares about real-world impacts.
Happy to share that my work with @yoavgo, "Unsupervised Domain Clusters in Pretrained Language Models", was accepted as a long paper in #acl2020nlp! We show that pretrained LMs implicitly cluster textual data by domains, and how we can use this for domain data selection in NMT.
Excited to announce our latest work on Explaining Solutions to Physical ReasonIng Tasks (ESPRIT), an interpretable framework for representing complex physical concepts such as gravity, friction, and collision in natural language, accepted at #acl2020nlp!
#acl2020nlp T6 chat question on commonsense: Having trouble understanding what constitutes commonsense. Are a bear having fur or a dove's color commonsense (they were examples in the slides)? They seem like factual info. Can you please clarify? @YejinChoinka's solid response.
To what extent are DNN models capable of learning the compositional generalization underlying NLI from given training instances? Check out our #acl2020nlp paper "Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?” arxiv.org/abs/2004.14839 #NLProc
Congrats to our awesome students+collaborators for their 6 long #acl2020nlp papers** (#silverlining for students in these tough times)!🙂 +BIG thanks to PCs @natschluter, @Tetreault_NLP, Joyce, & the full reviewers+ACs+SACs @aclmeeting team for huge effort! #UNCNLP #ACL2020 1/2
Our #acl2020nlp paper is now online: arxiv.org/pdf/1910.03065…! An adversarial framework for finding inconsistent natural language explanations. -- @BrendanShilling, @PMinervini, Thomas Lukasiewicz, and Phil Blunsom