Model Interpretation

Publication: Contribution to book/anthology/report › Contribution to book/anthology › Research › Peer-reviewed

The increasing availability of data and of software frameworks for building predictive models has enabled the widespread adoption of machine learning in many applications. However, the high predictive performance of such models often comes at the cost of interpretability. Machine learning interpretation methods are useful for several purposes: 1) gaining global insights into a model (e.g., feature importance); 2) improving a model when flaws are identified (e.g., an unexpected reliance on a certain feature); 3) understanding individual predictions. Several model-agnostic methods have been developed, including permutation feature importance, Shapley values, and LIME. This chapter presents the packages iml, counterfactuals, and DALEX, which implement model-agnostic interpretation methods. Throughout the chapter, an XGBoost model is trained on the German credit dataset to illustrate how and why predictions are made. The chapter begins with the iml package, discussing the theory behind its methods as well as how to use the interface in practice. It then turns to counterfactuals and the benefits of counterfactual analysis, including the What-If and MOC methods. Finally, DALEX is introduced; it offers methods similar to iml but with a different design, so users can choose either package depending on their design preference.
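The workflow the abstract describes, training an XGBoost model on the German credit task and inspecting it with iml, can be sketched roughly as follows. This is a minimal illustration, not code from the chapter itself; it assumes the mlr3, mlr3learners, mlr3pipelines, and iml packages, and uses an encoding step because XGBoost requires numeric features.

```r
# Sketch: model-agnostic interpretation of an XGBoost model with iml
# (illustrative only; the chapter's own code may differ).
library(mlr3)
library(mlr3learners)
library(mlr3pipelines)
library(iml)

# German credit classification task shipped with mlr3
task <- tsk("german_credit")

# XGBoost cannot handle factor features directly, so encode them first
learner <- as_learner(po("encode") %>>% lrn("classif.xgboost", predict_type = "prob"))
learner$train(task)

# Wrap the trained model in an iml Predictor, then compute
# permutation feature importance (one of the methods discussed)
predictor <- Predictor$new(learner, data = task$data(), y = task$target_names)
importance <- FeatureImp$new(predictor, loss = "ce")
plot(importance)
```

The same `predictor` object can then be reused with iml's other classes (e.g., Shapley or LocalModel for LIME-style explanations) to explain individual predictions, mirroring the chapter's progression from global to local interpretation.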

Original language: English
Title: Applied Machine Learning Using mlr3 in R
Editors: Bernd Bischl, Raphael Sonabend, Lars Kotthoff, Michel Lang
Number of pages: 24
Publisher: CRC Press
Publication date: 2024
Pages: 259-282
Chapter: 12
ISBN (Print): 978-1-032-51567-0, 978-1-032-50754-5
ISBN (Electronic): 978-1-003-40284-8
DOI
Status: Published - 2024

Bibliographic note

Publisher Copyright:
© 2024 selection and editorial matter, Bernd Bischl, Raphael Sonabend, Lars Kotthoff, and Michel Lang. All rights reserved.
