
CVPR 2020 papers on explainability

AMC-Loss: Angular Margin Contrastive Loss for Improved Explainability in Image Classification

  • Abstract: Deep-learning architectures for classification problems involve the cross-entropy loss sometimes assisted with auxiliary loss functions like center loss, contrastive loss and triplet loss. These auxiliary loss functions facilitate better discrimination between the different classes of interest. However, recent studies hint at the fact that these loss functions do not take into account the intrinsic angular distribution exhibited by the low-level and high-level feature representations. This results in less compactness between samples from the same class and unclear boundary separations between data clusters of different classes. In this paper, we address this issue by proposing the use of geometric constraints, rooted in Riemannian geometry. Specifically, we propose Angular Margin Contrastive Loss (AMC-Loss), a new loss function to be used along with the traditional cross-entropy loss. The AMC-Loss employs the discriminative angular distance metric that is equivalent to geodesic distance on a hypersphere manifold such that it can serve a clear geometric interpretation. We demonstrate the effectiveness of AMC-Loss by providing quantitative and qualitative results. We find that although the proposed geometrically constrained loss function improves quantitative results modestly, it has a surprisingly beneficial qualitative effect on increasing the interpretability of deep-net decisions, as seen by the visual explanations generated by techniques such as Grad-CAM. Our code is available at https://github.com/hchoi71/AMC-Loss.

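A minimal sketch of the idea, assuming PyTorch: features are projected onto the unit hypersphere, pairwise geodesic (angular) distances are computed as the arccosine of cosine similarity, and a contrastive penalty over those distances is added to the usual cross-entropy. The function names, the margin, and the weighting factor lambda_amc below are illustrative placeholders, not the authors' reported values; the actual implementation is at the GitHub link above.

```python
# Sketch of an angular-margin contrastive term used alongside cross-entropy.
import torch
import torch.nn.functional as F

def amc_loss(features, labels, margin=0.5):
    """Contrastive loss on geodesic (angular) distance over the unit hypersphere."""
    z = F.normalize(features, dim=1)                    # project features onto the hypersphere
    cos = torch.clamp(z @ z.t(), -1 + 1e-7, 1 - 1e-7)   # pairwise cosine similarities
    geo = torch.acos(cos)                               # geodesic distance = arccos(cosine)
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos = same * geo.pow(2)                             # pull same-class pairs together
    neg = (1 - same) * F.relu(margin - geo).pow(2)      # push different-class pairs apart
    mask = 1 - torch.eye(len(labels), device=z.device)  # ignore self-pairs on the diagonal
    return ((pos + neg) * mask).sum() / mask.sum()

def total_loss(logits, features, labels, lambda_amc=0.1):
    # Cross-entropy plus the geometric auxiliary term, weighted by lambda_amc.
    return F.cross_entropy(logits, labels) + lambda_amc * amc_loss(features, labels)
```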

Response Time Analysis for Explainability of Visual Processing in CNNs

  • Abstract: Explainable artificial intelligence (XAI) methods rely on access to model architecture and parameters that is not always feasible for most users, practitioners, and regulators. Inspired by cognitive psychology, we present a case for response times (RTs) as a technique for XAI. RTs are observable without access to the model. Moreover, dynamic inference models performing conditional computation generate variable RTs for visual learning tasks depending on hierarchical representations. We show that MSDNet, a conditional computation model with early-exit architecture, exhibits slower RT for images with more complex features in the ObjectNet test set, as well as the human phenomenon of scene grammar, where object recognition depends on intrascene object-object relationships. These results cast light on MSDNet's feature space without opening the black box and illustrate the promise of RT methods for XAI.

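A minimal sketch of how conditional computation produces variable response times, assuming a generic PyTorch early-exit classifier rather than MSDNet's actual API: the backbone is split into stages (`model.blocks`) with one classifier head per stage (`model.exits`), inference stops at the first sufficiently confident head, and the wall-clock time is recorded as the RT. All attribute names and the confidence threshold are hypothetical.

```python
# Sketch: measuring a response-time proxy on a generic early-exit network.
import time
import torch

@torch.no_grad()
def predict_with_rt(model, image, confidence_threshold=0.9):
    """Return (predicted class, wall-clock RT, exit index) for one image."""
    start = time.perf_counter()
    x = image.unsqueeze(0)                       # add a batch dimension
    for exit_idx, (block, head) in enumerate(zip(model.blocks, model.exits)):
        x = block(x)                             # run the next backbone stage
        probs = torch.softmax(head(x), dim=1)    # each head maps stage output to class logits
        conf, pred = probs.max(dim=1)
        if conf.item() >= confidence_threshold:  # confident enough: exit early
            break
    # If no head is confident, the last exit's prediction is used.
    rt = time.perf_counter() - start             # harder images traverse more stages -> slower RT
    return pred.item(), rt, exit_idx
```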

Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks

  • Abstract: Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks, and the reason why the network makes specific decisions. In this paper, we develop a novel post-hoc visual explanation method called Score-CAM based on class activation mapping. Unlike previous class activation mapping based approaches, Score-CAM gets rid of the dependence on gradients by obtaining the weight of each activation map through its forward passing score on the target class; the final result is obtained by a linear combination of weights and activation maps. We demonstrate that Score-CAM achieves better visual performance and fairness for interpreting the decision-making process. Our approach outperforms previous methods on both recognition and localization tasks, and it also passes the sanity check. We also indicate its application as a debugging tool.

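A minimal sketch of the procedure the abstract describes, assuming PyTorch: each activation map from a chosen convolutional layer is upsampled and normalized, used to mask the input, and the resulting forward-pass score on the target class becomes that map's weight; the saliency map is the ReLU of the weighted sum. This is an illustrative reconstruction, not the authors' released code; capturing `activations` (e.g. with a forward hook on the target layer) is assumed to happen elsewhere, and the masked forward passes are left unbatched for clarity.

```python
# Sketch of Score-CAM: gradient-free weights from forward-pass scores.
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_cam(model, image, activations, target_class):
    """image: (1, C, H, W); activations: (1, K, h, w) from the chosen conv layer."""
    _, k, _, _ = activations.shape
    cam = torch.zeros(image.shape[-2:], device=image.device)
    for i in range(k):
        a = activations[0, i]
        a = F.interpolate(a[None, None], size=image.shape[-2:],
                          mode="bilinear", align_corners=False)[0, 0]
        if a.max() > a.min():                    # normalize the map to [0, 1]
            a = (a - a.min()) / (a.max() - a.min())
        # Weight = softmax score on the target class when the input is masked by this map.
        score = torch.softmax(model(image * a), dim=1)[0, target_class]
        cam += score * a                         # accumulate the weighted activation map
    return torch.relu(cam)                       # linear combination, then ReLU
```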