Interpretable Framework for Integrating Machine Learning and Knowledge Reasoning

Abstract: Because the rules of a rule-based interpretability model may fail to reflect the model's actual decision-making, an interpretability framework combining machine learning and knowledge reasoning is proposed. The framework involves a target-feature result and a reasoning result, and achieves interpretability when the two results agree and both are reliable. The target-feature result is obtained directly from the machine learning model, while the reasoning result is derived from sub-feature classification combined with rules for knowledge reasoning. Whether the two results are reliable is judged by calculating their credibility. The framework is verified on a recognition task of cervical cancer cells in TCT images that fuses learning and reasoning. Experiments demonstrate that the framework makes the model's real decisions interpretable and improves classification accuracy during iteration, which helps people understand the logic of the system's decision-making and the reasons for its failures.
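The decision procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the credibility scores, and the nucleus-to-cytoplasm-ratio rule are all hypothetical placeholders for the machine learning path, the sub-feature reasoning path, and the agreement-plus-reliability check.

```python
# Hypothetical sketch of the fused decision procedure: a target-feature
# classifier and a rule-based reasoning path each yield a label with a
# credibility score; the decision counts as interpretable only when the
# two labels agree and both credibilities pass a reliability threshold.
from dataclasses import dataclass

@dataclass
class PathResult:
    label: str
    credibility: float  # e.g., softmax confidence or rule-match strength

def reason_from_subfeatures(subfeatures, rules):
    """Knowledge reasoning over sub-feature classifications: fire the
    first matching if-then rule (rules and strengths are illustrative)."""
    for condition, label, strength in rules:
        if condition(subfeatures):
            return PathResult(label, strength)
    return PathResult("unknown", 0.0)

def fused_decision(ml_result, reasoning_result, threshold=0.8):
    """Accept an interpretable decision only when the two results match
    and both are judged reliable by their credibility scores."""
    reliable = (ml_result.credibility >= threshold
                and reasoning_result.credibility >= threshold)
    if ml_result.label == reasoning_result.label and reliable:
        return ml_result.label, True   # interpretable, trusted decision
    return ml_result.label, False      # fall back; flag for inspection

# Toy usage: a single illustrative rule on a nucleus-to-cytoplasm ratio.
rules = [(lambda f: f["nc_ratio"] > 0.5, "abnormal", 0.9)]
ml = PathResult("abnormal", 0.95)
kr = reason_from_subfeatures({"nc_ratio": 0.7}, rules)
label, interpretable = fused_decision(ml, kr)
```

When the two paths disagree or either credibility is low, the case is flagged rather than explained, which is what lets the framework surface the reason for a failure instead of silently returning an unreliable decision.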

  • Received: October 21, 2020
  • Revised: November 18, 2020
  • Online: July 02, 2021
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-3