Interpretable Framework for Integrating Machine Learning and Knowledge Reasoning
Author: Li Diyuan, Kang Dazhou
Affiliation:

CLC Number:

Fund Project:

Abstract:

Because the rules of a rule-based interpretability model may fail to reflect the model's actual decision-making, an interpretability framework that integrates machine learning and knowledge reasoning is proposed. The framework produces a target-feature result and a reasoning result, and achieves interpretability when the two results agree and both are reliable. The target-feature result is obtained directly from the machine learning model, while the reasoning result is obtained by classifying sub-features and applying rules for knowledge reasoning. Whether the two results are reliable is judged by computing their credibility. The framework is verified on a recognition task for cervical cancer cells in TCT images that fuses learning and reasoning. Experiments demonstrate that the framework makes the model's real decisions interpretable and improves classification accuracy through iteration, helping people understand the logic of the system's decisions and the reasons for its failures.
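As a rough illustration of the decision logic described in the abstract, the sketch below (hypothetical class names, thresholds, rules, and credibility measures; not the authors' implementation) combines a target-feature result from an ML model with a reasoning result built from sub-feature outputs and rules, and accepts a decision as interpretable only when the two results agree and both credibilities exceed a threshold.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Result:
    label: str          # predicted class, e.g. "cancer_cell" / "normal_cell"
    credibility: float  # confidence score in [0, 1]

def reasoning_result(sub_feature_probs: Dict[str, float],
                     rules: List[Callable[[Dict[str, float]], Optional[str]]]) -> Result:
    """Apply knowledge-reasoning rules to sub-feature classifier outputs."""
    for rule in rules:
        label = rule(sub_feature_probs)
        if label is not None:
            # Assumed credibility measure: the weakest sub-feature evidence
            # supporting the fired rule.
            return Result(label, min(sub_feature_probs.values()))
    return Result("unknown", 0.0)

def interpret(target: Result, reasoned: Result, threshold: float = 0.8) -> str:
    """Accept a decision as interpretable only if both results agree and are reliable."""
    if (target.label == reasoned.label
            and target.credibility >= threshold
            and reasoned.credibility >= threshold):
        return f"interpretable decision: {target.label}"
    return "unreliable or conflicting results: defer to further iteration/review"

# Example usage with made-up sub-features of a TCT cell image.
example_rule = lambda f: ("cancer_cell"
                          if f["enlarged_nucleus"] > 0.7 and f["irregular_shape"] > 0.6
                          else None)
sub_features = {"enlarged_nucleus": 0.9, "irregular_shape": 0.85}
target = Result("cancer_cell", 0.92)                       # from the ML model
reasoned = reasoning_result(sub_features, [example_rule])  # from sub-features + rules
print(interpret(target, reasoned))
```

Per the abstract, cases where the two results disagree or lack credibility are not silently discarded; they expose why the decision failed and feed the next iteration, which is what improves classification accuracy over time.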

Get Citation

Li Diyuan, Kang Dazhou. Interpretable framework for integrating machine learning and knowledge reasoning. Computer Systems & Applications, 2021, 30(7): 22-31. (in Chinese)
History
  • Received: October 21, 2020
  • Revised: November 18, 2020
  • Accepted:
  • Online: July 02, 2021
  • Published: