Deep learning has become so dominant in machine learning, and in artificial intelligence (AI) as a whole, that its intrinsic lack of interpretability has become paradigmatic for the entire field of explainable AI (XAI). This is why progress in interpreting, explaining, and visualizing deep learning is of utmost importance. This book is a collection of fully independent chapters, compiled as the result of a Neural Information Processing Systems (NIPS) 2017 workshop and featuring some of the leading experts in XAI for deep learning. The volume is structured into six parts, each with its own preface and chapter summaries, which are very helpful.