Not quite. For a Markov probability matrix, 1 is always an eigenvalue, and every other eigenvalue has modulus at most 1. Each linearly independent eigenvector for the eigenvalue 1 gives a long-term stable-state probability distribution. These distributions are supported on disjoint subsets of the states (the recurrent classes), and the system converges to one of those subsets depending on the initial state. The eigenvalues of modulus strictly less than 1 add no information about the long-term state of the system; their components decay away.
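As a quick illustration (my own sketch, not from the paper): a row-stochastic matrix built from two disjoint recurrent classes has eigenvalue 1 with multiplicity 2, and each eigenvalue-1 left eigenvector, normalized to sum to 1, is a stationary distribution supported on one class.

```python
import numpy as np

# Block-diagonal row-stochastic matrix: two disjoint recurrent classes,
# states {0, 1} and {2, 3}. Eigenvalue 1 has multiplicity 2.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.3, 0.7],
])

# Stationary distributions satisfy pi @ P = pi, i.e. they are LEFT
# eigenvectors of P for eigenvalue 1, so take right eigenvectors of P.T.
vals, vecs = np.linalg.eig(P.T)
for i, lam in enumerate(vals):
    if np.isclose(lam, 1.0):
        v = np.real(vecs[:, i])
        pi = v / v.sum()  # normalize to a probability distribution
        print(np.round(pi, 4))
```

Which stationary distribution the chain actually reaches depends on which class the initial state lies in, matching the point above.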
See _Stochastic Processes and Their Applications_, Vol. 4 (1976), pp. 253-259. I wrote it while still in grad school.
> The eigenvectors will be the long term stable state probabilities.

Not quite: the values associated with each vertex come from the _dominant eigenvector_ (the eigenvector associated with the dominant eigenvalue), and its entries are the long-term stable-state probabilities. That's a single eigenvector, not "the eigenvectors."