How do you tackle the issue of explainability?


Explainable AI is not merely a buzzword; it is a real concern for users of AI applications, especially those handling mission-critical tasks.

Every aspect of Tisane's decision-making process is fully inspectable, from the low-level functionality (e.g., part-of-speech tagging and morphological analysis) to the late-stage labeling. Currently, the debugging / customization framework is available only for custom installations (private cloud).
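As an illustration, a client can ask the API for extra diagnostic detail alongside the labels it returns. Below is a minimal sketch of building such a request for Tisane's public /parse endpoint; the specific setting names (`words`, `explain`) and the subscription-key header are assumptions for illustration, so check Tisane's API documentation for the exact parameters.

```python
API_URL = "https://api.tisane.ai/parse"

def build_parse_request(content: str, language: str = "en") -> dict:
    """Build a /parse request body that asks for token-level detail
    (e.g., part-of-speech tags) and the reasoning behind labels.
    Setting names below are assumed for illustration."""
    return {
        "language": language,
        "content": content,
        "settings": {
            "words": True,    # assumed flag: include per-word analysis
            "explain": True,  # assumed flag: include label explanations
        },
    }

# Sending the request (requires an API subscription key; not executed here):
# import requests
# resp = requests.post(
#     API_URL,
#     headers={"Ocp-Apim-Subscription-Key": "<your key>"},
#     json=build_parse_request("text to analyze"),
# )
# print(resp.json())
```

The point is not the exact flags, but that the response can expose the intermediate analysis that led to a label, rather than a bare verdict.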

For more information, read our post on Medium: Tisane: Mission-Critical AI and Explainability