Aug 20, 2021 | Shaoni Ghosh
Deep learning is a branch of machine learning, itself a field of artificial intelligence (AI), whose algorithms, commonly referred to as deep neural networks, emulate the intricate workings of the human brain.
The drawback of deep learning models lies in their inability to provide human-understandable motivations for their complicated decision-making procedures.
(Must Check: Top 10 Deep Learning Applications)
In a paper titled 'Logic Explained Networks', a research team from Università di Firenze, Università di Siena, the University of Cambridge and Université Côte d’Azur introduced a general approach to explainable artificial intelligence through a family of interpretable deep learning models known as Logic Explained Networks (LENs).
The team outlines their contributions to explainable AI. First, they generalize existing neural methods that are employed to solve and explain categorical learning problems. Secondly, they show how users may interact with LENs, expressing a set of preferences in order to obtain one or more customized explanations. Third, they exhibit a wide range of logic-based explanations.
Fourth, they report results obtained by deploying three out-of-the-box preset LENs, highlighting how the generalization proves beneficial with respect to model accuracy. Lastly, they announce "our public implementation of LENs through a Python package with extensive documentation about LENs models, implementing different trade-offs between interpretability/explainability and accuracy."
Previous research has shown that human-understandable explanations can be expressed in a formal language, for instance First-Order Logic (FOL).
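For example, a bird classifier's behaviour might be summarised by a formula such as ∀x: has_wings(x) ∧ has_beak(x) → bird(x). This formula is purely illustrative rather than one taken from the paper, but it shows how each predicate corresponds to a human-interpretable concept.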
Compared to other techniques, logic-based explanations are unambiguous, and their correctness and completeness can be checked quantitatively.
A logic formula can also be inspected for generality through quantitative metrics such as accuracy and consistency, among others.
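As a rough sketch of such a quantitative check, one might measure how often a candidate formula reproduces the ground-truth labels over binary concept data. The concepts, formula and samples below are illustrative assumptions, not taken from the paper or its package.

    # Toy concept-labelled samples: (concept values, ground-truth label)
    samples = [
        ({"has_wings": True,  "has_beak": True},  True),
        ({"has_wings": True,  "has_beak": False}, False),
        ({"has_wings": False, "has_beak": True},  False),
        ({"has_wings": False, "has_beak": False}, False),
    ]

    # Candidate explanation: has_wings(x) AND has_beak(x)
    def formula(concepts):
        return concepts["has_wings"] and concepts["has_beak"]

    # Accuracy of the formula as an explanation of the labels
    accuracy = sum(formula(c) == label for c, label in samples) / len(samples)
    print(f"formula accuracy: {accuracy:.2f}")  # 1.00 on this toy data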
According to SyncedReview, "Inspired by the benefits of logic-based explanations, the proposed LENs are trained to solve and explain a categorical learning problem by integrating elements from deep learning and logic. LENs can be directly interpreted by means of a set of FOL formulas…"
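To make the idea concrete, here is a minimal sketch, under assumptions of our own, of how a network operating on human-interpretable binary concepts could have its positive predictions summarised as a FOL-style formula by enumerating concept combinations. It is not the authors' implementation or their Python package, and the concept names are hypothetical.

    import itertools
    import torch
    import torch.nn as nn

    CONCEPTS = ["has_wings", "has_beak", "lays_eggs"]  # hypothetical concepts

    # A small network mapping concept activations to a single class score.
    net = nn.Sequential(
        nn.Linear(len(CONCEPTS), 8), nn.ReLU(),
        nn.Linear(8, 1), nn.Sigmoid(),
    )
    # (Training on concept-labelled data would happen here; omitted for brevity.)

    def extract_dnf(model, concepts, threshold=0.5):
        """Enumerate concept truth assignments the model accepts and join them
        into a disjunction of conjunctions that mirrors its behaviour."""
        terms = []
        for bits in itertools.product([0.0, 1.0], repeat=len(concepts)):
            x = torch.tensor([bits])
            if model(x).item() > threshold:
                literals = [c if b else f"~{c}" for c, b in zip(concepts, bits)]
                terms.append("(" + " & ".join(literals) + ")")
        return " | ".join(terms) if terms else "False"

    print(extract_dnf(net, CONCEPTS))

Brute-force enumeration like this is only feasible for a handful of concepts; the LENs described in the paper instead build interpretability into the architecture itself, but the sketch conveys the flavour of a logic-based explanation read off a trained model.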
The team reported experiments on three out-of-the-box preset LENs: the µ network, the ψ network and the ReLU network. The results suggest that the proposed approach strikes a balanced trade-off between interpretability and accuracy.