Explainable AI


= "refers to applications of artificial intelligence (AI) whose actions can be understood and explained by humans". [1]

Description

"With computing power, new methods and algorithms become more widely available, Artificial Intelligence has become THE topic in and around data management. Huge amounts of (big) data are harvested and ingested into AI & cognitive computing engines to analyse, calculate patterns and prediction to enable powerful applications. One concern is that often these engines are “black boxes” including self-learning algorithms, furthermore, that input data is noisy and often not pre-selected along the requirements of the output. This leads to AI solutions that (i) do not provide useful results, (ii) provide applications that are not fulfilling the requirements and (iii) make it very difficult to explain the processes that have led to a certain outcome or decision.

Explainable AI or Transparent AI refers to applications of artificial intelligence (AI) whose actions can be understood and explained by humans. It contrasts with "black box" AIs that employ complex, opaque algorithms, where even their designers cannot explain why the AI arrived at a specific decision. Explainable AI can be used to implement a right to explanation wherever such a right exists. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem (Source: Wikipedia).
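The contrast between an interpretable model and a post-hoc explanation of a black box can be illustrated with a minimal sketch. This example is not from the source; it assumes scikit-learn and uses a hypothetical synthetic dataset purely for illustration.

 from sklearn.datasets import make_classification
 from sklearn.linear_model import LogisticRegression
 from sklearn.ensemble import RandomForestClassifier
 from sklearn.inspection import permutation_importance
 
 # Synthetic toy data standing in for any tabular decision task (hypothetical).
 X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                            random_state=0)
 
 # Interpretable ("glass box") model: each coefficient states how a feature
 # pushes the decision, so the model's behaviour can be read off directly.
 glass_box = LogisticRegression(max_iter=1000).fit(X, y)
 print("Logistic regression coefficients:", glass_box.coef_[0])
 
 # "Black box" model: often more accurate, but its ensemble of trees is opaque.
 black_box = RandomForestClassifier(random_state=0).fit(X, y)
 
 # Post-hoc explanation: permutation importance estimates how much each feature
 # contributes to the black box's predictions, approximating an explanation.
 result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
 print("Permutation importances:", result.importances_mean)

In this sketch, the logistic regression explains its own decisions through its coefficients, while the random forest needs an additional explanation step layered on top; this extra step is one way the interpretability problem is tackled in practice.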

To enable Explainable AI and its full potential, semantic technologies can help: they provide better data quality, make it possible to configure the engine by means of Knowledge Graphs, and help AI engines understand language, ensuring that context and meaning are taken into account to realise genuinely useful data-driven AI applications for the future." (https://www.european-big-data-value-forum.eu/program/explainable-artificial-intelligence/)
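As a rough illustration of the Knowledge Graph point above, domain facts can be encoded as triples that an AI engine's explanations can refer back to. This is a minimal sketch, not from the source; it assumes the rdflib library, and the namespace, entities, and loan-application scenario are hypothetical.

 from rdflib import Graph, Literal, Namespace, RDF
 
 # Hypothetical namespace and facts; the source does not specify any schema.
 EX = Namespace("http://example.org/loan#")
 
 g = Graph()
 g.bind("ex", EX)
 
 # Background knowledge the engine (and its explanations) can reference.
 g.add((EX.applicant42, RDF.type, EX.Applicant))
 g.add((EX.applicant42, EX.hasIncomeBand, Literal("medium")))
 g.add((EX.applicant42, EX.hasCreditHistory, EX.cleanHistory))
 
 # A SPARQL query retrieves the facts that give a model's decision its context.
 results = g.query(
     "SELECT ?p ?o WHERE { ex:applicant42 ?p ?o }",
     initNs={"ex": EX},
 )
 
 for p, o in results:
     print(p, o)

Because the facts are explicit and queryable, an explanation can cite them ("the applicant has a clean credit history") rather than pointing only at opaque model internals.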