#interpretableML search results

The first version of my online book on Interpretable Machine Learning is out! christophm.github.io/interpretable-… I am very excited to release it. It's a guide for making machine learning models explainable. #interpretableML #iml #ExplainableAI #xai #MachineLearning #DataScience

Giving a talk on Explainable AI in Healthcare at @CTSICN in an hour #responsibleAI #InterpretableML #explainableAI

We are having a mini-session on #causality at #pbdw2019, with the first two talks by Rich Caruana and Yi Luo! #interpretableML #radonc @UMichRadOnc @MSFTResearch

Relying on XGBoost/RF feature importances to interpret your model? Read this first: towardsdatascience.com/interpretable-… You are probably reporting misleading conclusions. #XAI #interpretableML
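
The linked article is truncated, but the usual caution is that impurity-based importances from tree ensembles are biased toward high-cardinality features and split credit unpredictably across correlated ones. As an illustration of a more model-agnostic alternative, here is a minimal, dependency-free sketch of permutation importance; the toy model, data, and helper names are made up for the example:

```python
import random

def accuracy(model, X, y):
    """Fraction of rows the model classifies correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, n_repeats=30, seed=0):
    """Mean drop in accuracy when one feature column is shuffled.

    A larger drop means the model relies more on that feature.
    """
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)  # break the feature-target association
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - accuracy(model, X_perm, y))
    return sum(drops) / n_repeats

# Toy model: predicts 1 iff feature 0 is positive; feature 1 is ignored.
model = lambda row: int(row[0] > 0)
X = [[1, 5], [-1, 3], [2, 8], [-2, 1], [3, 2], [-3, 9]]
y = [1, 0, 1, 0, 1, 0]

imp_signal = permutation_importance(model, X, y, 0)  # large: feature 0 drives predictions
imp_noise = permutation_importance(model, X, y, 1)   # exactly 0: the model ignores feature 1
```

Because the model never reads feature 1, shuffling it cannot change predictions, so its importance is exactly zero; an impurity-based score computed on a fitted tree would not give such a clean guarantee.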

Our paper "NoiseGrad: enhancing explanations by introducing stochasticity to model weights" has been accepted at #AAAI2022 🎉 See you (fingers crossed) in Canada 🇨🇦 arxiv.org/abs/2106.10185 #ML #InterpretableML #XAI

Looking forward to the 1st #XAI day webinar tomorrow, Sept. 3rd, at @DIAL_UniCam. Many thanks to the speakers @grau_isel, Eric S. Vorm and @leilanigilpin who will talk about the essentials of #interpretableML and its applications.

The first session of oral presentations has concluded, with very interesting talks from A. Himmelhuber (Siemens) and M. Couceiro (U. Lorraine) on the topics of GNN explanations and the transferability of analogies learned via DNNs. #AIMLAI @ECMLPKDD 2021. #xai #interpretableML

I've created a short demo in #rstats on how to enforce monotonic constraints in H2O #AutoML / Stacked Ensembles. #InterpretableML #xai Thanks to @Navdeep_Gill_ for the partial dependence plots! @h2oai 👉 gist.github.com/ledell/91beb92…
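
The gist link above is truncated, so as a language-neutral illustration of what a monotonic constraint enforces, here is a pure-Python pool-adjacent-violators (isotonic regression) sketch: it projects an arbitrary 1-D response onto the closest non-decreasing one, which is the shape restriction that monotone-constrained GBMs guarantee per feature. This is a toy, not H2O's implementation:

```python
def isotonic_fit(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y.

    Maintains a stack of (sum, count) blocks; whenever a new block's mean
    falls below its predecessor's, the two are merged and averaged.
    """
    blocks = []  # each entry: [sum of values, count of values]
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1
               and blocks[-1][0] / blocks[-1][1] < blocks[-2][0] / blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # every point in a block gets the block mean
    return out

fitted = isotonic_fit([1, 3, 2, 4])  # the 3, 2 violation is pooled to 2.5, 2.5
```

In a monotone-constrained GBM the restriction is applied during tree construction rather than as a post-hoc projection, but the guaranteed output shape (non-decreasing response in the constrained feature) is the same.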

#KDD2019 keynote speaker @CynthiaRudin on #InterpretableML - recidivism models all perform about the same, & complicated models are preferred because they are profitable #explainableAI @kdd_news

.@aghaei_sina will be presenting our paper on strong formulations for optimal classification trees tomorrow at #MIP2020 — joint work with the one and only @GomezAndres8 #MIPforMachineLearning #InterpretableML

#ECDA2018 @hnfpb @RealGabinator thanks! Nice talk and nice overview about explanation methods in #DNN. #InterpretableML

We have 2 open #PhD positions where fellows will work with 24/7 recordings of physical activity: one focusing on #InterpretableML and one on the risk of common non-communicable diseases. Plz share the post in your network & help us find talented students: s.ntnu.no/labda

#AIMLAI @ECMLPKDD 2021 has started. Right now Prof Zhou (@zhoubolei) from CUHK is giving a keynote highlighting the efforts of his team towards making deep AI models think like humans. #xai #interpretableml #ai

Yes, we did it again :-) Benedikt Bönninghoff, Robert Nickel and I are again in 1st place in the 2021 PAN@CLEF author identification challenge pan.webis.de/clef21/pan21-w… #interpretableML #machinelearning @HGI_Bochum @CASA_EXC @ika_rub @ruhrunibochum


NoiseGrad makes it possible to enhance local explanations of #ML models by introducing stochasticity to the weights. However, it can also improve global explanations! Check it out for yourself! #XAI #InterpretableML github.com/understandable…
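
To make the idea concrete, here is a dependency-free toy re-implementation of the NoiseGrad recipe for a linear model, where the input-gradient explanation is just the weight vector: sample noisy copies of the weights, explain with each copy, and average. This is an illustrative sketch under that simplification, not the authors' released code:

```python
import random

def gradient_explanation(w, x):
    # For a linear model f(x) = w . x, the gradient w.r.t. the input is w itself.
    return list(w)

def noisegrad(w, x, n_samples=50, sigma=0.1, seed=0):
    """Average input-gradient explanations over noisy copies of the weights.

    Toy version of the NoiseGrad idea: draw w' = w + N(0, sigma^2) per sample,
    compute an explanation with each w', and return the element-wise mean.
    """
    rng = random.Random(seed)
    acc = [0.0] * len(w)
    for _ in range(n_samples):
        w_noisy = [wi + rng.gauss(0, sigma) for wi in w]
        g = gradient_explanation(w_noisy, x)
        acc = [a + gi for a, gi in zip(acc, g)]
    return [a / n_samples for a in acc]

explanation = noisegrad([1.0, -2.0], [0.0, 0.0])  # close to [1.0, -2.0]
```

For a linear model the noise averages out and the smoothed explanation stays near the clean one; the benefit shows up for nonlinear networks, where averaging over weight perturbations suppresses explanation noise.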

Our #ICLR paper, “Efficient & Accurate Explanation Estimation with Distribution Compression” made the top 5.1% of submissions and was selected as a Spotlight! Congrats to the first author @hbaniecki #xAI #interpretableML Paper: arxiv.org/abs/2406.18334


🚨 Are you at #INFORMS2024? Don't miss our session on Emerging Trends in Interpretable Machine Learning today at 2:15 PM! 🌟 Our speakers will dive into theoretical and applied aspects of interpretability and model multiplicity. 💡#InterpretableML #TrustworthyAI #Multiplicity


#ICML #BayesianDeepLearning #InterpretableML #FoundationModel Ever wondered what concepts vision foundation models (e.g., ViTs) learn and use to make predictions?


🚀Just published our new paper in @EarthsFutureEiC! 🌍 We propose how #InterpretableML can be more broadly and effectively integrated into geoscientific research, highlighting key do's and don'ts when using IML for process understanding. Check it out: agupubs.onlinelibrary.wiley.com/doi/full/10.10…


#BayesianDeepLearning #InterpretableML #ConceptInterpretation #HumanAICollaboration Achieving the balance between accuracy and interpretability in machine learning models is a notable challenge. Models that are accurate often lack interpretability,


🌟 Seeking Postdoc Position in Interpretable Machine Learning! 🤖🔍 Strong ML background, eager to contribute to cutting-edge research. Looking for opportunities to collaborate and make an impact. #InterpretableML #Postdoc #AIResearch #phdchat


Two of my PRs have now been merged into PySR: you can now use the "min", "max" and "round" operators without any explicit sympy mapping. PySR is a Python interface to a Julia backend for Symbolic Regression #interpretableML github.com/MilesCranmer/P…
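
For context, a symbolic regression system searches over expression trees built from a user-chosen operator set, and "min", "max" and "round" simply become nodes in those trees. Here is a dependency-free toy sketch of that idea using random search over tiny trees; it is illustrative only and not PySR's actual algorithm:

```python
import random

# Operator tables; min/max/round are ordinary tree nodes once supported.
BINARY = {"+": lambda a, b: a + b,
          "*": lambda a, b: a * b,
          "min": min,
          "max": max}
UNARY = {"round": round}

def random_expr(rng, depth=2):
    """Sample a random expression tree over one variable x."""
    if depth == 0 or rng.random() < 0.3:
        return ("x",) if rng.random() < 0.7 else ("const", rng.randint(-2, 2))
    if rng.random() < 0.25:
        return ("un", rng.choice(sorted(UNARY)), random_expr(rng, depth - 1))
    op = rng.choice(sorted(BINARY))
    return ("bin", op, random_expr(rng, depth - 1), random_expr(rng, depth - 1))

def evaluate(node, x):
    kind = node[0]
    if kind == "x":
        return x
    if kind == "const":
        return node[1]
    if kind == "un":
        return UNARY[node[1]](evaluate(node[2], x))
    return BINARY[node[1]](evaluate(node[2], x), evaluate(node[3], x))

def search(xs, ys, n_trials=5000, seed=0):
    """Random search: keep the tree with the lowest squared error."""
    rng = random.Random(seed)
    best, best_err = ("x",), float("inf")
    for _ in range(n_trials):
        expr = random_expr(rng)
        err = sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best, best_err = expr, err
    return best, best_err

# Target: y = max(x, 0), only expressible here thanks to the max operator.
xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [max(x, 0) for x in xs]
best, best_err = search(xs, ys)
```

PySR replaces the random search with an evolutionary algorithm in Julia, but the role of the operator set is the same, which is why newly supported operators expand the space of discoverable formulas.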

We have several directions in mind for investigating this topic further and are excited about them, so we encourage and welcome any feedback or exchange of ideas on the paper's topic! #deeplearning #explainableAI #interpretableML /5


#ICML2023 #BayesDL #InterpretableML Can we train self-interpretable time series models that generate actionable explanations? Come check out our Counterfactual Time Series (CounTS) in the oral session C2 at 3pm~4:30pm, July 27 and poster session 11:00am~1:30pm on July 25, Hall 1.


2️⃣ "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable" by Christoph Molnar. Explore techniques to understand and interpret complex machine learning models, ensuring transparency and trust in AI systems. #InterpretableML #ExplainableAI


CALL FOR PAPERS AND ABSTRACTS “Explainable Artificial Intelligence For Unveiling The Brain: From The Black-Box To The Glass-Box” BrainInformatics2023 #explainableAI #explainableML #interpretableML #XAI #ArtificialIntelligence #MachineLearning #neuroscience #neuroimaging #brain

There has been exponential growth in the application of machine learning (ML) models, but I keep running into this. #Data #InterpretableML #BlackBoxModels #ArtificialIntelligence

Interpreting predictions made by a black box model with SHAP: bit.ly/3iO1vPT. Amazing work @scottlundberg! This kind of tool opens a whole new world of possibilities for practical AI applications. #interpretableML #MachineLearning #DataScience #SHAP #Python
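
As background on what SHAP computes: a Shapley value averages a feature's marginal contribution to the prediction over all orderings of the features, with "missing" features replaced by baseline values. Here is a minimal, exact (exponential-time) sketch in pure Python, using a toy linear model; the optimized TreeSHAP/KernelSHAP algorithms in the shap library avoid this enumeration:

```python
from itertools import permutations

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction (feasible only for tiny n).

    Features absent from a coalition are set to their baseline values,
    which is the masking idea behind SHAP.
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)   # start from the all-baseline input
        prev = predict(current)
        for i in order:
            current[i] = x[i]      # add feature i to the coalition
            cur = predict(current)
            phi[i] += cur - prev   # marginal contribution of feature i
            prev = cur
    return [p / len(perms) for p in phi]

# Toy model: f(x) = 2*x0 + x1 (linear, no interactions).
predict = lambda z: 2 * z[0] + z[1]
phi = shapley_values(predict, x=[3, 5], baseline=[0, 0])  # equals [6.0, 5.0]
```

For a linear model with no interactions each Shapley value is just that feature's own contribution, and the values always satisfy the efficiency property: they sum to f(x) minus f(baseline).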
