torch_geometric.explain.metric.unfaithfulness
- unfaithfulness(explainer: Explainer, explanation: Explanation, top_k: Optional[int] = None) → float
Evaluates how faithful an Explanation is to an underlying GNN predictor, as described in the “Evaluating Explainability for Graph Neural Networks” paper. In particular, the graph explanation unfaithfulness metric is defined as
\[\textrm{GEF}(y, \hat{y}) = 1 - \exp(- \textrm{KL}(y || \hat{y}))\]
where \(y\) refers to the prediction probability vector obtained from the original graph, and \(\hat{y}\) refers to the prediction probability vector obtained from the masked subgraph. Here, the Kullback-Leibler (KL) divergence quantifies the distance between the two probability distributions.
- Parameters
explainer (Explainer) – The explainer to evaluate.
explanation (Explanation) – The explanation to evaluate.
top_k (int, optional) – If set, will only keep the original values of the top-\(k\) node features identified by an explanation. If set to
None, will use explanation.node_mask as is for masking node features. (default: None)
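To make the metric concrete, the following is a minimal sketch of the GEF formula above in plain Python. The function name `graph_explanation_unfaithfulness` and the `eps` smoothing term are illustrative assumptions, not part of the torch_geometric API; the library's `unfaithfulness` additionally runs the explainer's model on the original and masked inputs to obtain the two probability vectors.

```python
import math

def graph_explanation_unfaithfulness(y, y_hat, eps=1e-12):
    # Hypothetical helper (not the torch_geometric implementation):
    # computes GEF(y, y_hat) = 1 - exp(-KL(y || y_hat)) for two
    # prediction probability vectors given as plain Python sequences.
    # eps avoids log(0) for zero-probability entries.
    kl = sum(p * math.log((p + eps) / (q + eps)) for p, q in zip(y, y_hat))
    return 1.0 - math.exp(-kl)

# Identical distributions give KL = 0, hence GEF = 0 (perfectly faithful);
# diverging distributions push GEF toward 1 (unfaithful explanation).
print(graph_explanation_unfaithfulness([0.7, 0.2, 0.1], [0.7, 0.2, 0.1]))
print(graph_explanation_unfaithfulness([0.9, 0.1], [0.5, 0.5]))
```

Because \(\textrm{GEF} = 1 - \exp(-\textrm{KL})\), the score is bounded in \([0, 1)\): lower values indicate that the masked subgraph preserves the original prediction distribution.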