
Yes.

For a linear system (of any kind of equivalence) within an N-dimensional vector space, the Eigenvalues represent the scaling factors across those dimensions when the system's state is represented by an NxN sparse-diagonalized matrix (i.e. all values are 0 except for the main diagonal).

Those non-zero values along the main diagonal are its Eigenvalues and its rows are Eigenvectors.

For the common 3D isometric (e.g. xyz) coordinate system, the Eigenvalues can be thought of as a kind of multiplier across the unit vectors (Eigenvectors) [[1,0,0][0,1,0][0,0,1]]. This is the "stretching" analog mentioned in the article.
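
A minimal sketch of that multiplier idea, assuming numpy and a made-up diagonal matrix: the unit vectors come back scaled by exactly the diagonal entries.

    import numpy as np

    # For a diagonal matrix, the unit vectors are eigenvectors and the
    # diagonal entries are the eigenvalues ("multipliers") that stretch them.
    D = np.diag([2.0, 3.0, 5.0])
    for i, lam in enumerate([2.0, 3.0, 5.0]):
        e = np.zeros(3)
        e[i] = 1.0
        assert np.allclose(D @ e, lam * e)   # D stretches e by the factor lam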

FWIW, Eigenvalues are not just a salient property of linear systems (i.e. matrices), but also of higher-order tensors.

Finding (or, more often, approximating) these "characteristic scaling states" is a critical step in numerical analysis, in everything from quantum mechanics to financial hedging strategies to consumer product marketing plans.

If you've ever represented a system as a series of Markov probability chains, every row of the "convergent/dominant" (if any) state contains an Eigenvalue.



I'm afraid there are several errors in that.

1. A linear mapping is not a "kind of equivalence" by any reasonable definition. For instance, the function that maps every vector to 0 is a linear mapping, and it has plenty of eigenvectors. (All with eigenvalue 0.)

2. The eigenvectors are not the rows of the diagonalized matrix. They are the rows (or columns, depending on just how you define things) of the matrix that does the coordinate transformation to diagonalize your matrix.

3. I think the paragraph beginning "For the common 3D isometric ..." is rather confused. I certainly am when reading it. Perhaps the problem is in my brain; what exactly do you mean? (Here's the nearest true thing I can think of to what that paragraph says: For many, but not all, linear transformations from a space to itself, there is an orthogonal coordinate system with respect to which the transformation's matrix is diagonal; then the eigenvectors are the axes of that coordinate system, and the eigenvalues are the amounts by which vectors along those axes get stretched.)
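
(Both of the last two points are easy to check numerically. Here's a quick sketch, assuming numpy and a made-up symmetric matrix, for which the orthogonal coordinate system always exists:)

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [1.0, 3.0]])      # symmetric, so the spectral theorem applies

    w, Q = np.linalg.eigh(A)        # eigenvalues w; eigenvectors are Q's COLUMNS

    # Point 2: the eigenvectors live in the change-of-basis matrix Q,
    # and each column satisfies A v = lambda v.
    for lam, v in zip(w, Q.T):
        assert np.allclose(A @ v, lam * v)

    # Point 3: Q's columns are orthonormal axes, and with respect to them
    # A is diagonal -- vectors along each axis are stretched by its eigenvalue.
    assert np.allclose(Q.T @ Q, np.eye(2))
    assert np.allclose(Q.T @ A @ Q, np.diag(w))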

4. Tensors are just as linear as matrices. (You can do nonlinear things with a tensor, but then you can with a matrix too.)

5. The probabilities in a Markov chain's stationary state are not eigenvalues. (Well, I'm sure they're eigenvalues of something; any set of numbers can be made the eigenvalues of something; but they aren't, e.g., eigenvalues of the transition matrix.) What you may be thinking of is: a stationary state of a Markov chain is an eigenvector of the transition matrix, with eigenvalue 1; all eigenvalues have absolute value at most 1; if the Markov chain is ergodic (i.e., can get from any state to any other), then there is exactly one stationary state, exactly one eigenvector of eigenvalue 1, and all the other eigenvalues are strictly smaller. This is enough to guarantee convergence to the stationary state.
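
For concreteness, here's that last point as a numpy sketch with a made-up two-state ergodic chain: the stationary state is an eigenvector of the transition matrix with eigenvalue 1, and iterating the chain converges to it.

    import numpy as np

    # Row-stochastic transition matrix: T[i, j] = P(move from state i to j).
    T = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    # The stationary distribution pi satisfies pi @ T = pi, i.e. it is a
    # left eigenvector of T (an eigenvector of T.T) with eigenvalue 1.
    w, V = np.linalg.eig(T.T)
    pi = np.real(V[:, np.argmax(np.isclose(w, 1.0))])
    pi /= pi.sum()                    # normalize to a probability vector

    # Ergodicity makes every other eigenvalue strictly smaller in absolute
    # value, so the chain converges to pi from any starting distribution.
    p = np.array([1.0, 0.0])
    for _ in range(100):
        p = p @ T
    assert np.allclose(p, pi)         # here pi = [5/6, 1/6]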

Also: If eigenvalues are as important in dealing with higher-order tensors as they are for the second-order case (i.e., matrices) then that's news to me. Tell me more?


Incidentally: that Markov chain thing? Suppose you make a Markov chain out of all the world's web pages, where each step moves you from a page to a randomly selected page it links to, or (occasionally for a page that has links, and unconditionally for a page with no links) to a completely random page.

Then the entries in that unique eigenvector -- equivalently, the long-term probabilities of landing on each page -- are basically the Google pagerank. (I'm sure Google's pagerank computation has lots of tweaks in it, and they certainly consider things other than pagerank to determine their search results. But pagerank was their original key idea.)
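
A toy version of that construction, assuming numpy, a made-up four-page link graph, and the usual damping factor of 0.85 (the graph and numbers are illustrative only):

    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: []}   # page 3 has no links
    n, damping = 4, 0.85

    # Row-stochastic "random surfer" matrix: follow a random outlink with
    # probability `damping`, otherwise jump to a completely random page;
    # a page with no links always jumps to a random page.
    T = np.zeros((n, n))
    for i, outs in links.items():
        if outs:
            for j in outs:
                T[i, j] = damping / len(outs)
            T[i] += (1 - damping) / n
        else:
            T[i] = 1.0 / n

    # Power iteration: the long-run visit probabilities (the eigenvector of
    # T.T with eigenvalue 1) are the toy pageranks.
    rank = np.full(n, 1.0 / n)
    for _ in range(100):
        rank = rank @ T
    print(rank)   # pages 0 and 2 rank highest, page 2 slightly ahead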


Days late, but worth the response for accuracy's sake since I just popped in on HN now:

> 1. A linear mapping is not a "kind of equivalence" by any reasonable definition.

By definition, ANY mapping is an equivalence relation--even if that relation results in a 0 or NaN value.

> 2. The eigenvectors are not the rows of the diagonalized matrix...

Correct. It depends on one's orientation (columns vs. rows), but the matrix which does the coordinate transformation contains the eigenvectors. Bad wording on my part. The key behind diagonalization is that it removes any orientation issues from the relationship by establishing eigenweights across a given vector space.

> 3. I think the paragraph beginning "For the common 3D isometric ..."

Your interpretation is correct and, indeed, my choice of verbiage was poor. I really should have used the words "orthonormal to a given vector space", which collapses to the common XYZ unit vectors in a linearly partitioned vector space of 6 degrees of freedom (3 translations & 3 rotations)--e.g. classic Cartesian space.

> Tensors are just as linear as matrices.

Most tensor fields are modeled using linear approximations (i.e. matrices of n dimensions), but the very fact that a tensor itself is being used in the characteristic equations is typically indicative of non-linear behavior in the overall system. For example, in the fluid dynamics used to model airflow across a wing, or in boundary-value problems where the Cauchy stress tensor is used for structures undergoing plastic deformation.

I believe that you are conflating Manifolds with Tensors. The latter is a refinement and/or characteristic relation defined upon the former. A non-linear Tensor is defined upon a manifold with one or more non-linear relations. Perhaps you are used to dealing exclusively with metric tensors?

> The probabilities in a Markov chain's stationary state are not eigenvalues.

Here you are spot-on. Indeed, it's the eigenvalues of the transition matrix (or their convergence, for ergodic chains) to which I was referring. Thanks (again) for the correction and further clarification.


> 1. A linear mapping is not a "kind of equivalence" by any reasonable definition. For instance, the function that maps every vector to 0 is a linear mapping, and it has plenty of eigenvectors. (All with eigenvalue 0.)

Linear mappings are the homomorphisms between vector spaces. So they are a "kind of equivalence".

See for example http://en.wikibooks.org/wiki/Linear_Algebra/Definition_of_Ho...
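
(Concretely, the homomorphism property is just f(ax + by) = a f(x) + b f(y). A throwaway numpy check with arbitrary made-up values, including the zero map from upthread:)

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))   # an arbitrary linear map f(x) = A @ x
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    a, b = 2.0, -1.5

    # The homomorphism property: f preserves the vector-space operations.
    assert np.allclose(A @ (a * x + b * y), a * (A @ x) + b * (A @ y))

    # The zero map satisfies the same identity, so it too is a homomorphism.
    Z = np.zeros((3, 3))
    assert np.allclose(Z @ (a * x + b * y), a * (Z @ x) + b * (Z @ y))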


The zero mapping is a homomorphism. I would not call it a "kind of equivalence"; would you?


It's a special (and quite trivial) "kind of equivalence".

See http://en.wikipedia.org/wiki/Equivalence_class


I know what an equivalence class is, thank you. How does that make the map from (say) R^3 to itself that sends everything to 0 a "kind of equivalence"?

It gives rise to an equivalence relation on R^3 in a fairly natural way, as indeed any function gives rise to an equivalence relation on its domain: x~y iff f(x)=f(y). But that doesn't mean that it is a "kind of equivalence".
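
(In code terms, the induced relation just partitions the domain by output value. A toy Python sketch, with made-up functions:)

    from collections import defaultdict

    def classes(f, domain):
        """Equivalence classes of x ~ y iff f(x) == f(y)."""
        groups = defaultdict(list)
        for x in domain:
            groups[f(x)].append(x)
        return list(groups.values())

    # Any function induces such a partition. For the zero map there is
    # exactly one class: everything is "equivalent" to everything else.
    print(classes(lambda x: 0, range(5)))        # [[0, 1, 2, 3, 4]]
    print(classes(lambda x: x % 3, range(9)))    # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]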

Now, obviously, "kind of" is vague enough that saying "an endomorphism of a vector space simply Is Not a 'kind of equivalence'" would be too strong. But I would like to know what, exactly, you mean by calling something a "kind of equivalence", because I'm unable to think of any meaning for that phrase that (1) seems sensible to me and (2) implies that endomorphisms of vector spaces are "kinds of equivalence".

(Looking back at what bonsaitree wrote, I see s/he said "of any kind of equivalence", which I unconsciously typo-corrected to "or any kind of equivalence". But perhaps I misunderstood and bonsaitree meant something else, though I can't think what it might be. bt, if you're reading this: my apologies if I misunderstood, and would you care to clarify if so?)


Yes, any function gives rise to equivalence classes. And linear functions give rise to equivalence classes that preserve structure in vector spaces.

(I guess our discourse has reached its end of usefulness here.)


Yes. Thanks for the corrected re-wording ("or") and for the "equivalence class" framing.



