ODSC Speakers


LUNDBERG, SCOTT

Topic:  INTERPRETING PREDICTIONS FROM COMPLEX MODELS

Abstract:  Understanding why a model makes a given prediction can be as important as the prediction itself in many applications. However, with large modern datasets the best accuracy is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models. This creates a tension between accuracy and interpretability. In response, a variety of methods have recently been proposed to help users interpret the predictions of complex models. Here, we present a unified framework for interpreting predictions, namely SHAP (SHapley Additive exPlanations), which assigns each feature an importance value for a particular prediction. SHAP comes with strong theoretical guarantees and is applicable to any model.
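
As a rough illustration of the kind of workflow the abstract describes, the sketch below computes SHAP values for a scikit-learn tree ensemble using the open-source shap package. The toy data, model, and parameter choices here are illustrative assumptions, not part of the talk itself.

# Minimal sketch: SHAP values for a tree ensemble, assuming the
# `shap` and `scikit-learn` packages are installed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical toy data: 200 samples, 4 features, with a known signal.
rng = np.random.RandomState(0)
X = rng.randn(200, 4)
y = X[:, 0] + 2 * X[:, 1] + 0.1 * rng.randn(200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For each sample, the SHAP values plus the expected value
# add up to the model's prediction for that sample.
print(explainer.expected_value, shap_values[0])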

Classic feature attribution methods can be inconsistent, which means that changing the model to rely more on a given feature can actually reduce the importance assigned to that feature. In contrast, SHAP values are guaranteed to always be consistent and accurate. Since this inconsistency is present in the current state of the art, it affects any current user of tree ensemble methods or model-agnostic explanation methods.
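
To make the additive-explanation idea concrete, here is a brute-force sketch of the classical Shapley value definition that SHAP builds on, applied to a tiny hypothetical function; this is the underlying definition, not the library's optimized algorithm.

# Exact Shapley values by enumerating all feature subsets.
# The toy function and baseline below are hypothetical examples.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline, n):
    """Average marginal contribution of each feature over all subsets,
    with absent features replaced by their baseline values."""
    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = set(S)
                # Classical Shapley weight for a subset of size |S|.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += w * (value(S | {i}) - value(S))
        phis.append(phi)
    return phis

# Toy model: f(x) = x0 + 2*x1, with x2 irrelevant.
f = lambda z: z[0] + 2 * z[1]
print(shapley_values(f, x=[1, 1, 1], baseline=[0, 0, 0], n=3))
# Output: [1.0, 2.0, 0.0]; the values sum to f(x) - f(baseline),
# the "local accuracy" property that SHAP guarantees.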

Bio:  PhD student at the University of Washington CSE Department