explAInable.NL
@ExplainableNL
http://explainable.nl

Tweeting about Explainable AI in the Netherlands. Account run by @wzuidema (http://amsterdam.explainable.nl) and others.

Exciting results from StanfordNLP (with D'Oosterlinck from Ghent) on Causal Proxy Models: using symbolic surrogate models for interpreting deep learning, and testing for causality using counterfactual interventions.

🚨Preprint🚨 Interpretable explanations of NLP models are a prerequisite for numerous goals (e.g. safety, trust). We introduce Causal Proxy Models, which provide rich concept-level explanations and can even entirely replace the models they explain. arxiv.org/abs/2209.14279 1/7
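To make the core idea concrete, here is a minimal sketch of a counterfactual intervention test. The `model` and `edit_concept` functions are hypothetical stand-ins, not the paper's Causal Proxy Model implementation.

```python
# Minimal sketch of testing a concept's causal effect with a
# counterfactual intervention. `model` and `edit_concept` are
# hypothetical stand-ins, not the paper's CPM code.

def causal_effect(model, text, edit_concept, concept, new_value):
    """Estimate how much setting `concept` to `new_value` moves the prediction."""
    original = model(text)                      # e.g. P(positive sentiment)
    counterfactual = edit_concept(text, concept, new_value)
    return model(counterfactual) - original     # change attributable to the concept

# Usage (hypothetical): flip the 'food' aspect of a restaurant review to
# negative and measure how far the predicted sentiment moves.
# effect = causal_effect(model, "Great service and tasty pasta.",
#                        edit_concept, concept="food", new_value="negative")
```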



explAInable.NL reposted

Symbolic regression (SR) is the problem of finding an accurate model of the data in the form of a (hopefully elegant) mathematical expression. SR has long been thought to be hard and is traditionally attempted with evolutionary algorithms. This raises the question: is SR NP-hard? 1/2
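As a toy illustration of what "finding an expression that fits the data" means, the snippet below enumerates a tiny candidate set and keeps the lowest-error expression. The candidate grammar is invented for illustration; real SR systems search a vastly larger expression space, classically with evolutionary algorithms.

```python
# Toy symbolic regression as search: enumerate a few candidate
# expressions and keep the one with the lowest squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=100)
y = x**2 + 3*x                        # hidden ground-truth expression

candidates = {
    "x":        lambda x: x,
    "x^2":      lambda x: x**2,
    "x^2 + 3x": lambda x: x**2 + 3*x,
    "sin(x)":   lambda x: np.sin(x),
}
best = min(candidates, key=lambda name: np.mean((candidates[name](x) - y)**2))
print(best)                           # -> "x^2 + 3x"
```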


explAInable.NL reposted

Excited to see our work featured in Fortune. Thank you so much, @jeremyakahn!


explAInable.NL reposted

This year's Spring Conference focuses on foundation models, accountable AI, and embodied AI. HAI Associate Director and event co-host @chrmanning explains these key areas and why you should not miss this event: stanford.io/3IxnjdH


explAInable.NL reposted

Interested in Explainable AI and Finance? Check out this opportunity for a Tenure Track Assistant Professor position at the Informatics Institute, University of Amsterdam! Deadline extended to 3 April 2022.


explAInable.NL reposted

And happy that our work "On genetic programming representations and fitness functions for interpretable dimensionality reduction" also made it to @GeccoConf! Preprint: arxiv.org/abs/2203.00528 A short explanation 👇 1/8

Happy to share that our work "Evolvability Degeneration in Multi-Objective Genetic Programming for Symbolic Regression" has been accepted at @GeccoConf! 🥳🪅🍾 Preprint: arxiv.org/abs/2202.06983. A high-level🧵 of what's going on here👇 1/8



explAInable.NL reposted

I visualized my last #semantle game with a UMAP of the word embeddings. Here's the result: bp.bleb.li/viewer?p=D5d3y. Semantle is a word guessing game by @NovalisDMT where your guesses, unlike in #wordle, are ranked by their similarity in meaning, not spelling, to the secret word.
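For readers who want to try this themselves, the sketch below shows the general recipe under stated assumptions: it uses the `umap-learn` and `matplotlib` packages, and random placeholder vectors where the real plot would use the word embeddings of your guesses.

```python
# Project word vectors for a set of guesses to 2-D with UMAP and label
# the points. Placeholder vectors stand in for real embeddings
# (e.g. word2vec/GloVe) here.
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

guesses = ["idea", "thought", "notion", "concept", "theory",
           "hunch", "belief", "impression"]
X = np.random.default_rng(0).normal(size=(len(guesses), 300))

coords = umap.UMAP(n_neighbors=4, random_state=42).fit_transform(X)
plt.scatter(coords[:, 0], coords[:, 1])
for (x0, y0), word in zip(coords, guesses):
    plt.annotate(word, (x0, y0))
plt.show()
```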


explAInable.NL reposted

📢#MSCAJobAlert Last days to apply for the PhD student position in #AI within @NL4XAI @MSCActions at @citiususc, ES. Join us and work on the following topic: From Grey-box Models to Explainable Models. ⌛️Deadline 31/03/2022 Apply👉nl4xai.eu/open_position/… @EU_H2020


explAInable.NL reposted

📢Call for contributions to help identify Europe's most Critical #OpenSourceSoftware! We urge all national, regional and local public administrations across all 27 EU member states to participate! Learn more👉europa.eu/!HXxQqp #FOSSEPS #ThinkOpen


"Transparency and explainability pertain to the technical domain ... leaving the ethics and epistemology of AI largely disconnected. In this talk, Russo will focus on how to remedy this problem and introduce an epistemology for glass box AI that can explicitly incorporate values"

Lecture by Federica Russo @federicarusso: Connecting the ethics and epistemology of AI. This Thursday 10 Feb, 12-13h CET, online. Moderated by Aybüke Özgün. For more information and access details, see: uva.nl/en/shared-cont…



A team of researchers from Amsterdam and Rome proposes CF-GNNExplainer: an explainability method for the popular class of Graph Neural Networks (GNNs). The method iteratively removes edges from the graph, returning the minimal perturbation that leads to a change in prediction.

Excited that our paper with @maartjeterhoeve, @gtolomei, @mdr and @fabreetseo on counterfactual explanations for GNNs has been accepted to #AISTATS2022!!! Preprint available here: bit.ly/3If7Hfd 🥳
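A greedy simplification of that counterfactual idea is sketched below. CF-GNNExplainer itself learns the perturbation rather than searching greedily; `predict(edges, node)` is a hypothetical GNN wrapper returning a class label.

```python
# Greedy sketch of the counterfactual idea behind CF-GNNExplainer:
# remove edges one at a time until the prediction for the target node
# flips, then return the removed set. The real method learns a minimal
# perturbation instead of this naive loop.

def counterfactual_edges(predict, edges, node):
    original = predict(edges, node)
    remaining = list(edges)
    removed = []
    for edge in list(edges):        # a real method would score edges first
        remaining.remove(edge)
        removed.append(edge)
        if predict(remaining, node) != original:
            return removed          # perturbation that flips the prediction
    return None                     # prediction never flipped
```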






Interesting blog post on SHAP (feature attribution using Shapley values) by researchers from the Dutch medical AI company Pacmed.

At Pacmed we care about improving medical practice with the help of AI. We often use tree-based models 🌳 in combination with SHAP values to gain a better understanding of what models do. But... which version of SHAP is best to use? 1/3 pacmedhealth.medium.com/explainability…
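For context, here is a minimal example of computing SHAP values for a tree model, assuming the `shap` and `scikit-learn` packages and synthetic data; the "which version of SHAP" question in the post concerns exactly the `feature_perturbation` choice shown below.

```python
# Minimal SHAP-for-trees example in the spirit of the post.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer supports different estimators of the conditional
# expectation ("interventional" vs "tree_path_dependent"); choosing
# between them is the question the post discusses.
explainer = shap.TreeExplainer(model, data=X, feature_perturbation="interventional")
shap_values = explainer.shap_values(X[:10])   # per-feature attributions
```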



explAInable.NL reposted

A paper at ICASSP 2020 proposed probing by "audification" of the hidden representations in an ASR model: the authors learn a speech synthesizer on top of the ASR representations. They have a nice video of their work here: youtu.be/6gtn7H-pWr8
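The gist of "learning a speech synthesizer on top of the representations" can be sketched as fitting a small decoder from a layer's hidden states back to spectrogram frames. The shapes and the single linear layer below are illustrative assumptions, not the paper's architecture.

```python
# Illustrative "audification" probe: fit a decoder from hidden ASR
# features to mel-spectrogram frames, then vocode its output to listen
# to what the layer retains. Random tensors stand in for real data.
import torch
import torch.nn as nn

hidden, spec = torch.randn(1000, 512), torch.randn(1000, 80)  # (frames, dims)
decoder = nn.Linear(512, 80)          # hidden features -> mel-spectrogram
optim = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for _ in range(100):                  # simple regression training loop
    optim.zero_grad()
    loss = nn.functional.mse_loss(decoder(hidden), spec)
    loss.backward()
    optim.step()
```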

Video: "[ICASSP 2020] What Does a Network Layer Hear?" (speaker: Chung-Yi Li), on YouTube.


explAInable.NL reposted

This paper analyzing discrete representations in models of spoken language, with @bertrand_higy, @liekegelderloos and @afraalishahi, will appear at #BlackboxNLP #EMNLP2021 arxiv.org/abs/2105.05582


explAInable.NL reposted

Hot take from @wzuidema: progress in probing classifiers will not come from sophisticated probing techniques but from the hard work of forming better hypotheses.

