
interpreter 🧠

Model analyzer using explainable AI (XAI) methods.

Functionality is built around the iNNvestigate toolbox to analyze predictions of deep learning models.

Author: Simon M. Hofmann
Years: 2023-2024

analyze_model 🧠

analyze_model(
    model: Model,
    ipt: ndarray,
    norm: bool,
    analyzer_type: str = "lrp.sequential_preset_a",
    neuron_selection: int | None = None,
    **kwargs
) -> ndarray

Analyze the prediction of a model with respect to a given input.

Produce an analyzer map ('heatmap') for a given model and input image. The heatmap indicates the relevance of each pixel w.r.t. the model's prediction.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Model | Deep learning model. | required |
| ipt | ndarray | Input image to the model, shape: [batch_size = 1, x, y, z, channels = 1]. | required |
| norm | bool | If True, normalize the computed analyzer map to [-1, 1]. | required |
| analyzer_type | str | Type of analyzer; the default "lrp.sequential_preset_a" is suited for ConvNets. See the iNNvestigate documentation for other available analyzers. | 'lrp.sequential_preset_a' |
| neuron_selection | int \| None | Index of the model's output neuron whose activity is analyzed; if None, the maximally activated ('max_activation') neuron is used. | None |
| kwargs | | Additional keyword arguments passed to the innvestigate.create_analyzer() function. | {} |

Returns:

| Type | Description |
| --- | --- |
| ndarray | The computed analyzer map. |
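A minimal usage sketch is shown below. It assumes a trained 3D-ConvNet Keras model and a preprocessed MRI volume; the file paths are placeholders, and the import path is inferred from the source location src/xai4mri/model/interpreter.py.

import numpy as np
from tensorflow import keras

from xai4mri.model.interpreter import analyze_model

# Load a trained model (placeholder path)
model = keras.models.load_model("path/to/trained_model")

# A single preprocessed MRI volume; add batch and channel dimensions
mri_volume = np.load("path/to/preprocessed_mri.npy")  # shape: (x, y, z)
ipt = mri_volume[np.newaxis, ..., np.newaxis]         # shape: (1, x, y, z, 1)

# Compute a relevance heatmap w.r.t. the maximally activated output neuron
heatmap = analyze_model(
    model=model,
    ipt=ipt,
    norm=True,                                # scale relevance values to [-1, 1]
    analyzer_type="lrp.sequential_preset_a",  # LRP preset for ConvNets (default)
)

print(heatmap.shape)  # same shape as the input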

Source code in src/xai4mri/model/interpreter.py
def analyze_model(
    model: keras.Model,
    ipt: np.ndarray,
    norm: bool,
    analyzer_type: str = "lrp.sequential_preset_a",
    neuron_selection: int | None = None,
    **kwargs,
) -> np.ndarray:
    """
    Analyze the prediction of a model with respect to a given input.

    Produce an analyzer map ('heatmap') for a given model and input image.
    The heatmap indicates the relevance of each pixel w.r.t. the model's prediction.

    :param model: Deep learning model.
    :param ipt: Input image to model, shape: `[batch_size: = 1, x, y, z, channels: = 1]`.
    :param norm: True: normalize the computed analyzer map to [-1, 1].
    :param analyzer_type: Type of model analyzers [default: "lrp.sequential_preset_a" for ConvNets].
                          Check documentation of `iNNvestigate` for different types of analyzers.
    :param neuron_selection: Index of the model's output neuron [int], whose activity is to be analyzed;
                             Or take the 'max_activation' neuron [if `None`]
    :param kwargs: Additional keyword arguments for the `innvestigate.create_analyzer()` function.
    :return: The computed analyzer map.
    """
    # Create analyzer
    disable_model_checks = kwargs.pop("disable_model_checks", True)
    analyzer = innvestigate.create_analyzer(
        name=analyzer_type,
        model=model,
        disable_model_checks=disable_model_checks,
        neuron_selection_mode="index" if isinstance(neuron_selection, int) else "max_activation",
        **kwargs,
    )

    # Apply analyzer w.r.t. maximum activated output-neuron
    a = analyzer.analyze(ipt, neuron_selection=neuron_selection)

    if norm:
        # Normalize between [-1, 1]
        a /= np.max(np.abs(a))

    return a
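For models with multiple output neurons (e.g., a classifier), a specific neuron can be selected via neuron_selection, and extra keyword arguments are forwarded to innvestigate.create_analyzer(). The sketch below reuses model and ipt from the example above; the two-class output is an assumption made for illustration.

# Default: relevance w.r.t. the maximally activated output neuron
heatmap_max = analyze_model(model=model, ipt=ipt, norm=True)

# Relevance w.r.t. a specific output neuron, e.g., class index 1 of a binary classifier
heatmap_class1 = analyze_model(model=model, ipt=ipt, norm=True, neuron_selection=1)

# Keyword arguments such as disable_model_checks are passed on to
# innvestigate.create_analyzer() (True is already the default here)
heatmap_checked = analyze_model(model=model, ipt=ipt, norm=True, disable_model_checks=False)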