Machine learning (ML) has attracted significant interest in recent years owing to its applicability to a wide range of complex problems. There is growing recognition that ML models, beyond making predictions, can reveal information about relationships among items in the domain data, a property commonly referred to as the interpretability of the model. A parallel development has taken place in the broader artificial intelligence (AI) community, which has concentrated on explainable AI (XAI) along the dimensions of algorithmic interpretability, explainability, transparency, and accountability of algorithmic decisions. ML approaches may be classified as white-box or black-box. White-box techniques, such as rule learners and inductive logic programming, produce explicit models that are intrinsically interpretable, whereas black-box techniques, such as (deep) neural networks, produce opaque models. With the growing use of ML, serious societal concerns have arisen about deploying black-box models for decisions that require an explanation of domain relationships. The ability to express the knowledge captured by ML models in human-comprehensible terms, that is, interpretability, has attracted considerable attention in both academia and industry. Such interpretations have found applications in healthcare, transportation, finance, education, policymaking, criminal justice, and other fields. As the field evolves, one aim of ML research is the development of interpretable techniques and models that explain themselves and their output.
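To make the white-box/black-box distinction above concrete, the following minimal sketch (illustrative only, not part of the call; it assumes scikit-learn and its bundled iris dataset) trains a shallow decision tree whose learned rules can be printed verbatim, alongside a small neural network whose behavior can only be approximated post hoc, here via permutation importance.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# White-box: a shallow decision tree yields an explicit, human-readable rule set.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Black-box: a neural network makes comparable predictions but exposes no rules;
# post-hoc tools such as permutation importance only approximate its behavior.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
result = permutation_importance(mlp, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

The contrast illustrates the trade-off the call refers to: the tree's output is itself an explanation, while the network's explanation is a separate, approximate artifact.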
This special issue invites papers on advances in interpretable ML from the modeling and learning perspectives. We are looking for high-quality, original articles on the following topics (the list is not exhaustive):
• Probabilistic graphical model applications
• Explainable artificial intelligence
• Rule learning for interpretable machine learning
• Interpretation of black-box models
• Interpretability in reinforcement learning
• Interpretable supervised and unsupervised models
• Interpretation of neural networks
• Interpretation of random forests and other ensemble methods
• Causality in machine learning models
• Novel applications requiring interpretability
• Methodologies for measuring interpretability of machine learning models
• Interpretability-accuracy trade-off and its benchmarks
Call for Papers Flyer: Interpretable Machine Learning and Explainable Artificial Intelligence