The construction of knowledge bases quite often requires the intervention of knowledge engineers and domain experts, making it a time-consuming task. Alternative approaches have been developed for building knowledge bases from existing sources of information, such as web pages, and through crowdsourcing; seminal examples in this regard are NELL, DBpedia, YAGO, and several others. With the goal of building very large sources of knowledge, as recently in the case of Knowledge Graphs, even more complex integration processes have been set up, involving multiple sources of information, human expert intervention, and crowdsourcing. Despite significant efforts to make Knowledge Graphs as comprehensive and reliable as possible, they tend to suffer from incompleteness and noise, owing to the complexity of the building process. Even highly human-curated knowledge bases exhibit cases of incompleteness; for instance, disjointness axioms are quite often missing.
Machine learning methods grounded in inductive approaches have been proposed for refining, enriching, completing, and possibly flagging potential issues in existing knowledge bases, while showing the ability to cope with noise. This talk will present classes of mostly symbol-based machine learning methods, focusing specifically on concept learning, rule learning, and disjointness axiom learning, and will show how they can be exploited for enriching existing knowledge bases. The talk will highlight that a key element of the illustrated solutions is the integration of background knowledge, deductive reasoning, and the evidence coming from the mass of the data. It will conclude by presenting an idea for injecting background knowledge into numeric-based approaches, specifically embedding models for predictive tasks on Knowledge Graphs.