interpreter
Model analyzer using explainable AI (XAI) methods.
Functionality is built around the iNNvestigate toolbox to analyze predictions of deep learning models.
Author: Simon M. Hofmann
Years: 2023-2024
analyze_model
```python
analyze_model(
    model: Model,
    ipt: ndarray,
    norm: bool,
    analyzer_type: str = "lrp.sequential_preset_a",
    neuron_selection: int | None = None,
    **kwargs
) -> ndarray
```
Analyze the prediction of a model with respect to a given input.
Produce an analyzer map ('heatmap') for a given model and input image. The heatmap indicates the relevance of each pixel w.r.t. the model's prediction.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Model` | Deep learning model. | *required* |
| `ipt` | `ndarray` | Input image to the model, shape: … | *required* |
| `norm` | `bool` | If `True`, normalize the computed analyzer map to [-1, 1]. | *required* |
| `analyzer_type` | `str` | Type of model analyzer (default: `"lrp.sequential_preset_a"` for ConvNets). Check the iNNvestigate documentation for other analyzer types. | `'lrp.sequential_preset_a'` |
| `neuron_selection` | `int \| None` | Index of the model's output neuron whose activity is to be analyzed, or take the `'max_activation'` neuron if `None`. | `None` |
| `kwargs` | | Additional keyword arguments for the analyzer. | `{}` |
Returns:

| Type | Description |
|---|---|
| `ndarray` | The computed analyzer map. |
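For orientation, a minimal usage sketch is shown below; the toy 3D ConvNet, the random input volume, and its batch/channel layout are illustrative assumptions and not part of this API:

```python
import numpy as np
import tensorflow as tf

from xai4mri.model.interpreter import analyze_model

# Toy stand-in 3D ConvNet (assumption for illustration only;
# in practice, pass your trained model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 32, 1)),
    tf.keras.layers.Conv3D(filters=4, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(1),
])

# Dummy input volume; the batch and channel axes shown here are an
# assumed layout, so check the expected input shape of your model.
ipt = np.random.rand(1, 32, 32, 32, 1).astype("float32")

# Relevance heatmap for the 'max_activation' output neuron, normalized to [-1, 1].
heatmap = analyze_model(
    model=model,
    ipt=ipt,
    norm=True,
    analyzer_type="lrp.sequential_preset_a",
    neuron_selection=None,
)
print(heatmap.shape)
```

For a multi-output model, pass the index of the output neuron of interest via `neuron_selection` instead of `None`.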
Source code in src/xai4mri/model/interpreter.py