ConvNets _are_ message-passing networks. A bitmap can be viewed as a graph, with pixels as nodes, each connected to its 8 neighbors (and itself). Treat each neighbor as an edge of a different type and you can build a ConvNet out of heterogeneous graph convolutions.
A 2D convolution operator is just an efficient implementation that takes this structure as a given and doesn't require the graph structure as another input.
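A minimal sketch of this equivalence (the array names and sizes are my own, for illustration): a 3×3 convolution computed the usual sliding-window way matches the same computation phrased as message passing, where each of the 9 relative offsets (including the self-loop) is its own edge type with its own weight, and every node sums its incoming messages.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 6, 6
img = rng.normal(size=(H, W))
kernel = rng.normal(size=(3, 3))  # one weight per neighbor "edge type"

# (1) Ordinary 2D convolution (cross-correlation, valid padding).
direct = np.zeros((H - 2, W - 2))
for i in range(H - 2):
    for j in range(W - 2):
        direct[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)

# (2) The same computation as message passing: for each relative
# offset (di, dj), every node receives a message from that neighbor,
# scaled by the edge-type-specific weight kernel[di, dj], and sums.
mp = np.zeros((H - 2, W - 2))
for di in range(3):
    for dj in range(3):
        mp += kernel[di, dj] * img[di:di + H - 2, dj:dj + W - 2]

assert np.allclose(direct, mp)  # identical results
```

The convolution never consults an explicit adjacency structure because the grid's regularity is baked into the indexing, which is exactly the efficiency argument above.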
This means that the basic arguments of the article no longer hold. Yes, in some cases GNNs may be slower or harder to train, but that is not a general rule.