In recent research, human-understandable explanations of machine learning models have received a lot of attention. Often, explanations are given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by aligning a given instance of a concept with a similar counterexample. Such near misses were proposed by Winston (Learning structural descriptions from examples, 1970) as efficient guidance for learning in relational domains. Contrasting a given instance with a structurally similar example which does not belong to the concept highlights which characteristics are necessary for concept membership. We introduce an explanation generation algorithm (GeNME) for relational concepts learned with Inductive Logic Programming.
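As a minimal sketch of the near-miss idea, consider Winston's classic arch example: an arch is a lintel resting on two supports that do not touch. The scene encoding, attribute names, and helper functions below are hypothetical illustrations, not the paper's algorithm; a near miss is modeled as a counterexample that fails the concept while differing from the positive instance in exactly one characteristic.

```python
# Hypothetical illustration of a "near miss" (Winston, 1970):
# an arch is a lintel on two supports, and the supports must not touch.

def is_arch(scene):
    """Check the relational 'arch' concept on a simple scene description."""
    return scene["lintel_on_supports"] and not scene["supports_touch"]

def is_near_miss(instance, counterexample):
    """A near miss fails the concept while differing from the positive
    instance in exactly one attribute, highlighting a necessary one."""
    diffs = [k for k in instance if instance[k] != counterexample[k]]
    return not is_arch(counterexample) and len(diffs) == 1

arch = {"lintel_on_supports": True, "supports_touch": False}
touching = {"lintel_on_supports": True, "supports_touch": True}

print(is_arch(arch))                 # True
print(is_near_miss(arch, touching))  # True: only supports_touch differs
```

Because the near miss differs only in the `supports_touch` attribute, the contrast singles out "supports must not touch" as a necessary characteristic for membership in the concept.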