Hi all
I was reading this paper about spectral clustering:
Ng et al. (NIPS 2001): http://ai.stanford.edu/~ang/papers/nips01-spectral.pdf
Short question:
What is the connection between the eigenvalues of a Laplacian and the structure of the graph they come from?
Long question:
I got stuck on why the process makes sense.
Essentially, the idea is that some data cannot be usefully described by a simple
distribution, but are better represented as vertices of a graph, on which something
simpler than a graph cut can be employed.
The "something simpler" is to construct a Laplacian matrix from the graph, use
the eigenvectors of this matrix as the new data, and then run a traditional clustering
algorithm on them (I've sketched the pipeline below). In this way the data become more "manageable".
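To check my understanding of the steps, here is a rough Python sketch of the pipeline as I read it from the Ng et al. paper (the function name, the default bandwidth value, and the use of scipy/sklearn helpers are my own choices, not from the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Ng-Jordan-Weiss style spectral clustering, as I understand it."""
    # Gaussian affinity matrix with zero diagonal, as in the paper
    A = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # Normalized matrix L = D^{-1/2} A D^{-1/2}, where D is the degree matrix
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ A @ D_inv_sqrt
    # The k eigenvectors with the largest eigenvalues become the new features
    eigvals, eigvecs = np.linalg.eigh(L)  # ascending eigenvalue order
    V = eigvecs[:, -k:]
    # Renormalize each row to unit length, then cluster the rows with k-means
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(V)
```

So the "new data" are just the rows of V: point i of the original dataset is replaced by the i-th row of the eigenvector matrix.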
The part where I got stuck is: why are the eigenvectors more manageable?
In a survey on spectral clustering methods (von Luxburg, 2007), when discussing this
approach the author says something like: "hence the graph Laplacian measures the variation
of a function along the graph: the eigenvalue is low if close points have close function values".
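For reference, I believe the identity that quote is pointing at is the quadratic form of the unnormalized Laplacian L = D - W, which is stated in the von Luxburg survey:

```latex
f^{\top} L f \;=\; \frac{1}{2} \sum_{i,j=1}^{n} w_{ij} \left( f_i - f_j \right)^2
```

So for a unit-norm eigenvector f with Lf = \lambda f, the eigenvalue is exactly \lambda = f^{\top} L f, i.e. it is small when f takes similar values on strongly connected points.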
This, however, still does not explain why clustering using the eigenvectors is easier.
In other words, do the eigenvalues capture the local "shape variation" of the graph?
This is what I gathered from
http://en.wikipedia.org/wiki/Hearing_the_shape_of_a_drum
and from reading other material about the eigenvalues of the Laplace-Beltrami operator.