Efficient Explainable Learning on Knowledge Graphs (ENEXA) is a European project that develops human-centred, explainable approaches to machine learning for real-world knowledge graphs.
Transparent and explainable AI systems are key to the human-centred and ethical development of digital and industrial solutions. ENEXA builds on novel and promising results in the fields of knowledge representation and machine learning to develop scalable, transparent and explainable hybrid machine learning algorithms that combine symbolic and sub-symbolic learning. The project focuses on knowledge graphs with rich semantics as a mechanism for knowledge representation, as they are gaining popularity in various fields and industries across Europe.
Some explainable and transparent machine learning approaches for knowledge graphs are already known to provide guarantees on their completeness and correctness. However, because real-world knowledge graphs are often large, incomplete and inconsistent, applying these approaches in practice remains impossible or impractical.
ENEXA is developing new machine learning approaches that maintain formal guarantees of completeness and correctness while simultaneously exploiting different representations of knowledge graphs (formal logic, embeddings and tensors). With these new methods, we aim to achieve significant advances in the scalability of machine learning on knowledge graphs. An important innovation of ENEXA lies in its approach to explainability: we focus on developing human-centred explainability techniques based on the concept of co-construction, in which humans and machines enter into a dialogue to jointly develop explanations that humans can understand. The resulting approach will be applied in three fields of importance for Europe, namely business services, geospatial data analysis and brand marketing.
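To make the idea of working with multiple representations of the same knowledge graph concrete, here is a minimal sketch (not ENEXA code; the entities and relations are invented for illustration). It shows one toy graph in two of the forms mentioned above: a symbolic list of (subject, predicate, object) triples, and the binary 3-way adjacency tensor that tensor-based embedding models typically factorise.

```python
import numpy as np

# Symbolic view: the graph as (subject, predicate, object) triples.
# These example facts are illustrative only.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "capitalOf", "France"),
    ("Berlin", "locatedIn", "Germany"),
]

# Index every entity and relation so they can address tensor axes.
entities = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
relations = sorted({p for _, p, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

# Tensor view: T[s, p, o] = 1 iff the triple (s, p, o) is in the graph.
T = np.zeros((len(entities), len(relations), len(entities)))
for s, p, o in triples:
    T[e_idx[s], r_idx[p], e_idx[o]] = 1.0

print(T.shape)   # (4, 2, 4): 4 entities, 2 relations
print(T.sum())   # 3.0: one nonzero entry per triple
```

A logic-based learner can reason over the triples directly, while an embedding model works on (a factorisation of) the tensor; keeping both views consistent is what allows formal guarantees and sub-symbolic scalability to be combined.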
More information about this project of our Data Science working group can be found here.