epsilon

@epsilon_XAI

Research lab working on explanation (interfaces) for AI. RecSys/HCI/Diversification/NLG. Web: https://www.tudelft.nl/index.php?id=55292 PI: @navatintarev

epsilon reposted

📢Learning to design robust natural language generation models for explainable #AI (@NL4XAI 2nd Press Release) nl4xai.eu/news/natural-l… @MSCActions @EU_H2020 #AI


epsilon reposted

👩‍💼The 3rd Executive Board Meeting for the @NL4XAI project just finished. We had the opportunity to review and evaluate the project's performance to ensure its success as we enter the second half of the project. Thanks to all members for attending this meeting!! @MSCActions @EU_H2020


epsilon reposted

🗣️ Explainable AI for non-expert users by @katrien_v at @APExUI_Workshop
🍭 Does explainability lead to trust? How do placebo explanations perform?
🕹 Controllability of the AI vs. the UI's cognitive load
🧪 Adaptation to user expertise
@ACMIUI @AugmentHCI



epsilon reposted

🗣 Comparing high-dimensional maps for a more structured analysis of the models by @angie_boggust @bcarter755 @arvindsatya1 🔗 #IUI2022

[1/4] Excited to present the Embedding Comparator, an interactive system to compare embedded representations. By @angie_boggust, @bcarter755, & @arvindsatya1. Check it out at #IUI2022 this Friday! 📄 arxiv.org/abs/1912.04853 🎬 youtube.com/watch?v=UU5LAx… 💻 vis.mit.edu/embedding-comp…



epsilon reposted

📢#MSCAJobAlert Last days to apply for the PhD student position in #AI within @NL4XAI @MSCActions at @citiususc, ES. Join us and work on the following topic: From Grey-box Models to Explainable Models. ⌛️Deadline 31/03/2022 Apply👉nl4xai.eu/open_position/… @EU_H2020


epsilon reposted

"Recommender systems under European AI regulations" disq.us/t/465eutv Available in the upcoming issue of @CACMmag. Co-authored by @navatintarev, Panagiota Fatourou, and @m_schedl @RecSys_c @ACMRecSys @sisinflab


epsilon reposted

Reminder - I'm looking for a PhD student to work on explaining Bayesian, probabilistic or statistical reasoning. Application deadline is 16 Jan nl4xai.eu/open_position/…


epsilon reposted

SIGIR 2022 has tracks for reproducibility papers, resource papers, and perspectives papers besides the standard full/short paper tracks. sigir.org/sigir2022/subm… The deadline for these special tracks is February 14, 2022.


Interesting challenges in explainable AI, and an open PhD position (which @navatintarev would co-supervise).


epsilon reposted

I'm looking for a PhD student to work on explaining Bayesian reasoning, as part of @NL4XAI nl4xai.eu/open_position/…


Panel 2: Building trust through Explainable AI complying with the European AI regulation. Hybrid event/online attendance is possible.

📢 European #AI #Regulation Week. Panel 2, Oct 7, 16:30. Panelists: Fosca Giannotti; @sierra_carles; @PrzeBiec; @Lina Rojas-Barahona. Moderators: @EttoreMariotti and @CarlosMougan. Opening: @sepajma @albugadiz lnkd.in/eTRtQ7Wa



epsilon reposted

Happy to share that our work “Design Implications for Explanations: A Case Study on Supporting Reflective Assessment of Potentially Misleading Videos” w/ @djurazzi @harmankkaur @elisab79 @navatintarev was accepted by @FrontiersIn AI for Human Learning and Behavior Change! (1/4)


epsilon reposted

The preliminary schedule for #INLG2021 is up! inlg2021.github.io/pages/programm… Reminder: you need to register by 5 September to get the Early Bird prices (100 GBP general, 50 GBP student). After that the rates go up by 50%!


Two Epsilon papers to look out for @ACMHT: 1) "This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias" and 2) "Exploring User Concerns about Disclosing Location and Emotion Information in Group Recommendations".


Two abstracts were accepted to the @CACMmag Europe Workshop. 1) Led by @navatintarev (et al) on "Grand Challenges in Explainable AI"; 2) Led by @TommasoDiNoia (et al) on Responsible Recommendations.


Another success for team Epsilon! Not only is this a well-written paper, but this was truly a team effort with a lot of joy in the craft. So proud of you.

A timely and special piece of work titled, "A Checklist to Combat Cognitive Biases in Crowdsourcing" with the brilliant @tmdrws, @alisarieg, @oana_inel and @navatintarev has been accepted to @hcomp_conf. #HCOMP2021 @wisdelft (2/n)


