To address the problem that the rules of a rule-based interpretable model may fail to reflect the model's actual decision-making, an interpretability framework that combines machine learning and knowledge reasoning is proposed. The framework iteratively evolves a target-feature result and a reasoning result, and achieves interpretability when the two results agree and both are reliable. The target-feature result is obtained directly from a machine learning model, while the reasoning result is derived by applying rules to sub-feature classification results for knowledge reasoning; the reliability of each result is judged by computing its credibility. The framework is validated on a case of recognizing a class of cervical cancer cells in liquid-based cytology (TCT) images by fusing learning and reasoning. Experiments show that the framework makes the model's actual decisions interpretable and improves classification accuracy during iteration. This helps people understand the logic behind the system's decisions and better understand why its results may fail.
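The decision logic described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `model`, `sub_models`, `rules`, and the `threshold` parameter are all hypothetical names introduced for this sketch.

```python
# Hedged sketch of the framework's core decision logic.
# Assumptions (not from the paper): each classifier returns a
# (label, credibility) pair, and "reliable" means credibility >= threshold.

def interpretable_decision(image, model, sub_models, rules, threshold=0.8):
    """Return an interpretable prediction only when the learned result and
    the rule-based reasoning result agree and both are credible."""
    # 1. Target-feature result: obtained directly from the ML model.
    target_label, target_cred = model(image)

    # 2. Reasoning result: classify sub-features, then reason over rules.
    sub_results = {name: clf(image) for name, clf in sub_models.items()}
    reasoned_label, reasoned_cred = rules(sub_results)

    # 3. Accept only when both results agree and both are reliable.
    if (target_label == reasoned_label
            and target_cred >= threshold
            and reasoned_cred >= threshold):
        # The sub-feature results serve as the explanation of the decision.
        return target_label, sub_results
    # Otherwise no interpretable decision is made; in the framework this
    # disagreement would drive the next training/reasoning iteration.
    return None, sub_results
```

When the two paths disagree, the returned sub-feature results still expose where the rule-based explanation and the learned prediction diverge, which is what makes failures inspectable.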