AI interpretability is at the heart of the debate surrounding the large-scale deployment of applications and
tools based on Artificial Intelligence and Machine Learning. However, the
concept of interpretability is often used to refer to very different, and sometimes even
contradictory, issues. This confusion stems from the absence of a precise,
consensual definition of interpretability, but also, and above all, from the multitude of issues associated
with it: behind a single notion lie many different stakeholders with very diverse
expectations.
By sharing its research on interpretability through a series of three guides, ISoft aims
to demystify the subject and provide a concise overview of it.
In this first guide, "The Challenge of Trust", we take a step back from the
notion of AI interpretability and place it in a broader context: that of
trust in technical progress. This is by no means a new issue;
its foundations can be traced back to antiquity.
Feel free to download the ebook here: https://advanthink.com/vous-etes/telechargement/?document=Banque%20de%20d%C3%A9tail%20%3A%20prot%C3%A9ger%20tous%20vos%20moyens%20de%20paiement