Perhaps I was wrong to expect explicit discussion of spectral graph theory? Anyway, always interesting topics to me.
On a related note: Gilbert Strang (MIT) has a relatively new class in the area of Linear Algebra (of course!) and learning from data: "Matrix Methods in Data Analysis, Signal Processing, and Machine Learning"
> https://ocw.mit.edu/courses/mathematics/18-065-matrix-method...
You are most definitely not wrong. I would have expected some mention of the spectrum of a graph beyond simply defining what the eigenvalues are.
Incidentally, one interesting thing about the graph spectrum is that it's actually not a very interesting invariant. Among other things, almost all trees are cospectral, as proven in the following paper by Allen Schwenk in 1973: https://www.researchgate.net/publication/245264768_Almost_al...
> Among other things, almost all trees are cospectral
This does not really mean that the spectrum is not interesting, does it? Only that all trees have nearly the same spectrum. Sure, there's not much variability inside the class of trees, but almost no graphs are trees. After all, the matrix associated to a tree is basically a permutation of the identity.
You're right, of course. I should have said that the spectrum isn't all that useful for telling two graphs of the same order apart. Certain eigenvalues are frequently useful by themselves (e.g. the largest, second largest, and smallest eigenvalues often contain some information).
But, speaking of information, consider this: the adjacency matrix of a simple, loopless graph of order n is a symmetric n x n matrix with all zeroes on the diagonal. That means the adjacency matrix contains (literally) n(n-1)/2 bits of information, while the number of connected graphs of order n grows exponentially as n -> infinity [0]. That implies that for the purpose of telling two graphs of the same order apart, the spectrum is of fairly limited utility, which is what I was thinking of when I wrote that it was not an interesting invariant.
I don't think that follows. Since the adjacency matrix contains O(n^2) _bits_ of information, the number of adjacency matrices of graphs is O(2^(n^2)). This makes sense: graphs with vertex labels {1,...,n} are in bijection with symmetric n x n binary matrices with zero diagonal. It's difficult to quantify how much information is lost when we reduce this matrix to its spectrum, since the eigenvalues are real numbers, but there are O(n) degrees of freedom, and in principle plenty of room to distinguish any two graphs.
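To get a rough feel for how much the spectrum separates in practice, here is a minimal sketch (assuming Python with numpy, and a toy size of n = 5) that enumerates all 2^10 labeled simple graphs on 5 vertices, rounds each adjacency spectrum to tame floating point, and counts the distinct spectra; the count can be compared against the 34 non-isomorphic graphs on 5 vertices.

    # Minimal sketch (assuming numpy): how well does the adjacency spectrum
    # separate small graphs? Enumerate all labeled simple graphs on n = 5
    # vertices, compute each spectrum (rounded), and count distinct spectra.
    import itertools
    import numpy as np

    n = 5
    pairs = list(itertools.combinations(range(n), 2))  # the n(n-1)/2 possible edges

    spectra = set()
    for edge_bits in itertools.product((0, 1), repeat=len(pairs)):
        A = np.zeros((n, n))
        for (i, j), bit in zip(pairs, edge_bits):
            A[i, j] = A[j, i] = bit
        eigs = np.linalg.eigvalsh(A)           # real spectrum of a symmetric matrix
        spectra.add(tuple(np.round(eigs, 6)))  # round so cospectral graphs collide

    print(f"{2 ** len(pairs)} labeled graphs on {n} vertices")
    print(f"{len(spectra)} distinct spectra")
    print("(for comparison: there are 34 non-isomorphic graphs on 5 vertices)")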
Of course, as a sibling comment notes, unlabeled graphs are often more interesting, for which one has to look at equivalence classes of adjacency matrices. The spectrum is nice here because it is automatically invariant to vertex permutations and hence a graph isomorphism invariant. The existence of nonisomorphic cospectral graphs is not immediately obvious, although there are simple examples, and that there are infinite families of cospectral pairs is even less obvious. So at the very least, it's interesting that the spectrum is not a complete invariant, and that it does work well for certain classes of graphs.
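One of those simple examples: the star K_{1,4} and the 4-cycle plus an isolated vertex are non-isomorphic (different degree sequences, and one is disconnected) yet cospectral, both with adjacency spectrum -2, 0, 0, 0, 2. A quick check, again assuming numpy:

    # Classic small cospectral pair: the star K_{1,4} versus the 4-cycle plus
    # an isolated vertex. Non-isomorphic, but the adjacency spectra agree.
    import numpy as np

    star = np.zeros((5, 5))                  # K_{1,4}: vertex 0 joined to 1..4
    for v in range(1, 5):
        star[0, v] = star[v, 0] = 1.0

    c4_plus_k1 = np.zeros((5, 5))            # 4-cycle on 0..3, vertex 4 isolated
    for v in range(4):
        w = (v + 1) % 4
        c4_plus_k1[v, w] = c4_plus_k1[w, v] = 1.0

    # Both print the spectrum -2, 0, 0, 0, 2 (up to floating-point noise).
    print(np.linalg.eigvalsh(star))
    print(np.linalg.eigvalsh(c4_plus_k1))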
I don't understand why you are linking to something about labelled graphs when people are (presumably) trying to distinguish between graphs up to isomorphism, i.e. after modding out the labeling.
The term "labeled graph" just means a graph with each node labeled differently (but arbitrarily). It just allows for reasoning & enumerating the vertex set. It's a typical assumption to make in the context of graph isomorphism.
It doesn't relate to machine learning (which is what I assume you mean).
Isomorphism of a pair of graphs usually refers to isomorphism of their unlabelled equivalents.
Yes the concrete expression of the isomorphism would be as a mapping between the labels.
Given that the paper linked to is by Brendan McKay et al, it seems reasonable to mention that nauty works by (efficiently) finding all permutations of the labellings that result in an automorphism of the graph.
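For intuition only (nauty's real algorithm is a much smarter partition-refinement and backtracking search), here is a brute-force sketch, assuming Python, of what "all permutations of the labellings that result in an automorphism" means on a tiny graph:

    # Brute force over all n! relabelings of a tiny graph, keeping the ones
    # that map the edge set onto itself, i.e. the automorphisms. This only
    # illustrates what nauty computes; nauty itself uses a far more clever
    # partition-refinement search and scales well beyond toy sizes.
    from itertools import permutations

    n = 4
    edges = {(0, 1), (1, 2), (2, 3), (0, 3)}   # a 4-cycle

    def canon(u, v):
        return (u, v) if u < v else (v, u)

    automorphisms = [
        perm for perm in permutations(range(n))
        if {canon(perm[u], perm[v]) for (u, v) in edges} == edges
    ]

    print(len(automorphisms), "automorphisms found")   # the 4-cycle has 8
    for perm in automorphisms:
        print(perm)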
> Perhaps I was wrong to expect explicit discussion of spectral graph theory?
I mean, this is a link to a pdf titled "chapter 3" buried deep in the directories of someone's academic homepage. There's no context about who the author is (is it a student project from a course? Part of a dissertation? notes for a draft of a book?), except that I think I can extract their name from the URL. It's pretty hard to tell, thus, who the desired audience is or what the aim of it is.
So whether one should expect one thing or another from this random pdf... I mean, without knowing more, what is there to reasonably expect?
For instance (and I can imagine this because I've taught an undergraduate course on graph theory, among other undergrad math courses), if these are notes from a course, then the instructor may have good, context-specific reasons for focusing on some topics while not going into detail on others. But, as I said above, it's really impossible to know, because this has just been plunked on HN with no context, as if that is somehow valuable.
I think the desired audience is the student, professor, and students' peers in University of Utah's MATH 2270 Linear Algebra class. If you walk up the URL, the context presents itself more.
And true, these projects have limitations/constraints, and I am happy that the author took LA and learned more about LA and Graph Theory. I didn't know what to expect, but assumed Graphs + LA ==> Let's talk about spectral graph theory -- well, because I hoped for it.
The graph Laplacian also has cool applications in computer graphics, if anyone is curious (about further connections). Think of the mesh of an object as a graph and compute its Laplacian. (You get natural unsupervised segmentations of the mesh, for instance, if you look at it as a spectral clustering problem.) Look up resources by Keenan Crane. If you've played with 'smooth normals' in Blender I believe it uses techniques like that as well.
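A minimal sketch of that spectral-clustering view, assuming Python with numpy and a toy graph standing in for a real mesh: build the Laplacian L = D - A and split the vertices by the sign of the Fiedler vector (the eigenvector for the second-smallest eigenvalue).

    # Toy version of Laplacian-based segmentation (a real mesh just gives a
    # much bigger graph): form L = D - A, then cut the graph by the sign of
    # the Fiedler vector, the eigenvector of the second-smallest eigenvalue.
    import numpy as np

    # Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2,3).
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    n = 6

    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0

    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian, D - A
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]

    part_a = [v for v in range(n) if fiedler[v] >= 0]
    part_b = [v for v in range(n) if fiedler[v] < 0]
    print("cut:", part_a, "|", part_b)    # should split off the two triangles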
> On a related note: Gilbert Strang (MIT) has a relatively new class in the area of Linear Algebra (of course!) and learning from data: "Matrix Methods in Data Analysis, Signal Processing, and Machine Learning"
> https://ocw.mit.edu/courses/mathematics/18-065-matrix-method...
Part IV (Special Matrices) discusses the Laplacian and distance matrices noted in the posted paper. [TOC: https://math.mit.edu/~gs/learningfromdata/dsla_toc.pdf]