When working with Graph Neural Networks (GNNs), understanding how they make predictions is crucial. In my research project, I'm building Directly-Follows Graphs (DFGs) from event logs, where each node represents an activity and an edge indicates that one activity directly follows another in some trace. My goal is to use GNNs for next-activity prediction at the node level. However, I'm stuck on explaining these predictions, especially when dealing with 'linear' DFGs.
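For context, this is roughly how a DFG is derived from an event log: count every pair of activities where the second immediately follows the first within a trace. A toy sketch (the helper name `build_dfg` and the example log are my own, not from any particular library):

```python
from collections import Counter

def build_dfg(traces):
    """Directly-follows counts: each (a, b) pair where activity b
    immediately follows activity a in some trace becomes an edge."""
    edges = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

# Toy event log with two cases; the repeated 'B' yields a self-loop B->B.
log = [["A", "B", "B", "C"], ["A", "B", "C"]]
dfg = build_dfg(log)
# dfg == {("A", "B"): 2, ("B", "B"): 1, ("B", "C"): 2}
```

The edge weights (frequencies) are a natural candidate for edge features if you feed the DFG into a GNN.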
I've been wondering whether it's even sensible to search for explanatory subgraphs with techniques like GNNExplainer or SubgraphX here. Since the DFGs are mostly linear, with occasional self-loops or a few longer loops, wouldn't the prediction already be fully determined by a node's 3-hop neighborhood in a 3-layer GNN? These graphs aren't massive, so maybe I'm overlooking something.
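To make the receptive-field intuition concrete: a k-layer message-passing GNN can only draw information from a node's k-hop neighborhood, so on a near-linear chain the 3-hop neighborhood of a node can already cover most or all of the graph. A stdlib-only sketch (function name and example edges are mine; it assumes edges are treated as undirected, which is the upper bound on the receptive field and matches GNNs that add reverse edges):

```python
from collections import deque

def k_hop_neighborhood(edges, start, k):
    """All nodes reachable from `start` within k hops, ignoring
    edge direction (upper bound on a k-layer GNN's receptive field)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return seen

# A linear DFG A->B->C->D->E->F with a self-loop on C.
edges = [("A", "B"), ("B", "C"), ("C", "C"),
         ("C", "D"), ("D", "E"), ("E", "F")]
print(k_hop_neighborhood(edges, "C", 3))  # the entire chain: {'A'..'F'}
```

On this six-node chain the 3-hop neighborhood of the middle node is the whole graph, which is exactly why a subgraph explanation may feel trivial: the candidate subgraph is (almost) the full input. Explainers like GNNExplainer can still add value by weighting *which* edges and features inside that neighborhood matter, rather than just which nodes are reachable.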
As a newcomer to the world of GNNs, I’d love some guidance from experts in the field. Have you encountered similar challenges? How did you approach explaining GNN predictions on linear DFGs?