
Layer-Wise Relevance Propagation (LRP)

Layer-wise Relevance Propagation (LRP) is a technique for explaining the predictions of deep neural networks. It assigns relevance scores to each input feature and to each neuron in the network, indicating their contribution to the output prediction.

LRP works by propagating relevance scores backward, layer by layer, from the output of the network to its input. At each layer, the relevance assigned to a neuron is redistributed to the neurons in the layer below in proportion to how much they contributed to its activation. The redistribution follows a set of propagation rules designed so that the total relevance is conserved from one layer to the next.
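Concretely, for a fully connected layer with input activations a_j, weights w_jk, and relevance scores R_k already computed for the layer's outputs, one widely used rule (the LRP-ε rule, following the Montavon et al. overview listed under Technical Resources) redistributes relevance as

$$R_j = \sum_k \frac{a_j\, w_{jk}}{\epsilon\,\operatorname{sign}(z_k) + z_k}\, R_k, \qquad z_k = \sum_{j'} a_{j'}\, w_{j'k},$$

where the small stabilizer ε keeps the denominator away from zero. For ε → 0 relevance is conserved exactly: the sum of the R_j equals the sum of the R_k.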

[Figure. Source: Explaining nonlinear classification decisions with deep Taylor decomposition]

The basic idea behind LRP is to attribute the relevance of a neuron or feature to the neurons or features that contributed to its activation. This is done by back-propagating the output relevance through the network using propagation rules chosen to match the type of layer and activation function at each stage.
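As a minimal sketch of what one propagation step looks like in code, here is a NumPy toy implementing the LRP-ε rule above for a single fully connected layer (the function name, shapes, and data are illustrative, not taken from any particular library):

import numpy as np

# Toy LRP-epsilon step for one fully connected layer.
# a: input activations, shape (d_in,)
# W: weight matrix, shape (d_in, d_out)
# R_out: relevance of the layer's outputs, shape (d_out,)
def lrp_epsilon(a, W, R_out, eps=1e-6):
    z = a @ W                    # pre-activations z_k
    z = z + eps * np.sign(z)     # epsilon stabilizer avoids division by zero
    s = R_out / z                # relevance per unit of pre-activation
    return a * (W @ s)           # relevance redistributed to the inputs

rng = np.random.default_rng(0)
a = rng.random(4)                # example activations
W = rng.standard_normal((4, 3))  # example weights
R_out = rng.random(3)            # relevance arriving from the layer above
R_in = lrp_epsilon(a, W, R_out)
print(R_in.sum(), R_out.sum())   # nearly equal: relevance is conserved

Applied recursively from the output layer down to the input, a step like this yields a relevance score for every input feature.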

 

The key advantage of LRP is that it offers a principled way to decompose the output prediction of a deep neural network into contributions from individual inputs, allowing a fine-grained analysis of the network's behavior. LRP has been applied to image classification, speech recognition, and natural language processing, where it has proven effective at identifying relevant input features and at detecting potential sources of bias or error.

Technical Resources

Layer-Wise Relevance Propagation: An Overview by Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek & Klaus-Robert Müller 

Explaining nonlinear classification decisions with deep Taylor decomposition by Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek & Klaus-Robert Müller

InDepth: Layer-Wise Relevance Propagation by Eugen Lindwurm
