How would this compare to common types of graph neural nets?
Afaict many use random walks (=traces) to learn their model, except they are then typically used for a classifier over individual nodes/edges instead of paths/graphs. I'm not sure of the natural formulation to reuse them here. Likewise, it is unclear if node reuse on a path here is meant to be meaningful, which would also seem to change the natural encodings.
I've only just started looking into classifying graphs, so I'm not really qualified to answer this question, but I will try anyway. Let's assume we are talking about undirected graphs, and I'll use deep-learning jargon.
DKM would not be a random walk through the graph. Features would be assigned to weights using a graph-alignment algorithm, which is NP-hard. There are probably multiple ways to define graph alignment, so you have to pick an alignment strategy that makes sense for the problem. For example, we may want to reuse the same weights whenever the graph forks, ensuring each branch is treated the same way. Once features have been assigned to weights, we can classify the graph with only a single neuron, or with a deep model if we choose.
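To make that concrete, here is a toy sketch of my reading of the idea: each node carries a feature vector, the neuron has one weight vector per slot, and "alignment" means finding the feature-to-weight assignment that maximizes the neuron's response. The brute-force search over permutations below is just an illustrative stand-in for the NP-hard alignment step (all names here are mine, not from DKM itself), and it ignores edge structure entirely:

```python
import itertools
import math

def neuron_response(features, weights, assignment):
    # Dot product of each feature vector with the weight slot it is assigned to.
    return sum(
        sum(f * w for f, w in zip(features[i], weights[j]))
        for i, j in enumerate(assignment)
    )

def dkm_score(features, weights):
    # Brute-force search over all assignments stands in for the NP-hard
    # graph-alignment step; a real strategy would exploit graph structure
    # (e.g., reusing the same weights on each branch of a fork).
    best = -math.inf
    for perm in itertools.permutations(range(len(weights))):
        best = max(best, neuron_response(features, weights, perm))
    return best

def classify(features, weights, bias=0.0):
    # A single neuron: sigmoid over the best-aligned response.
    return 1.0 / (1.0 + math.exp(-(dkm_score(features, weights) + bias)))
```

For example, with `features = [[1, 0], [0, 1]]` and `weights = [[0, 1], [1, 0]]`, the identity assignment scores 0 but the swapped assignment scores 2, so `dkm_score` returns 2 and the classifier fires on the aligned response.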
Some graph neural networks reduce the graph in a feed-forward manner. Because these models can only represent directed interactions between nodes, they are inappropriate for undirected graphs.
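What I mean by "feed-forward" is that information only flows along edge directions. A minimal sketch (generic, not the API of any particular GNN library) of one such reduction step:

```python
def feedforward_step(states, edges):
    # states: {node: scalar state}; edges: list of (src, dst) pairs.
    # Each node's new state is its old state plus everything flowing in
    # along directed edges -- messages move src -> dst only, so an
    # undirected relationship cannot be represented by a single edge.
    new_states = dict(states)
    for src, dst in edges:
        new_states[dst] = new_states.get(dst, 0.0) + states[src]
    return new_states
```

With `states = {"a": 1.0, "b": 2.0, "c": 0.0}` and `edges = [("a", "c"), ("b", "c")]`, node `c` accumulates the states of `a` and `b`, but `a` and `b` learn nothing about `c`.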
Some graph neural networks require multiple neurons to represent the graph, in contrast to DKM, which needs only a single neuron.
There are types of graph neural networks that I have yet to understand, so I cannot compare these models to DKM.
Feel free to email (https://news.ycombinator.com/user?id=jostmey) if you want to discuss this more. I would be happy to provide insight if you want to implement a graph-DKM model.