An operator (or matrix) is called Hermitian if it equals its adjoint (also called its conjugate-transpose or Hermitian conjugate). Hermitian operators H:V→V have some wonderful properties:
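For a concrete feel for these properties, here is a minimal NumPy sketch (the matrix H below is a hypothetical example, not one from the text): it checks that H equals its conjugate-transpose, and that its eigenvalues come out real and its eigenvectors orthonormal.

```python
import numpy as np

# Hypothetical 2x2 Hermitian matrix: it equals its own conjugate-transpose.
H = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])
assert np.allclose(H, H.conj().T)        # H equals its adjoint

# np.linalg.eigh is the routine for Hermitian matrices: the eigenvalues it
# returns are real, and the eigenvectors (the columns of U) are orthonormal.
eigenvalues, U = np.linalg.eigh(H)
print(eigenvalues)                               # real: [1. 4.]
print(np.allclose(U.conj().T @ U, np.eye(2)))    # True: orthonormal eigenvectors
```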
In the second video, we provide 3 proofs that H is diagonalizable. The first two proofs are repeated below the video. The third proof, by induction on the dimension of V, is found in the book. In the video we also show that any diagonalizable operator with real eigenvalues and an orthogonal basis of eigenvectors must be Hermitian.
First proof, using power vectors: If H isn't diagonalizable, then we can still find a basis of V consisting of power vectors. This means that there must be a power vector x of H of degree 2 that isn't an eigenvector. In other words, (H−λI)²x = 0 but (H−λI)x ≠ 0 for some eigenvalue λ of H. Since H is Hermitian, λ is real, so H−λI is also Hermitian, and the computation below forces (H−λI)x = 0, a contradiction.
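Written out, the computation behind that contradiction is the following (λ is real because H is Hermitian, so H−λI is again Hermitian and can be moved across the inner product):

```latex
0 = \langle x,\,(H-\lambda I)^{2}x\rangle
  = \langle (H-\lambda I)x,\,(H-\lambda I)x\rangle
  = \|(H-\lambda I)x\|^{2},
\quad\text{hence } (H-\lambda I)x = 0.
```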
Second proof, using perturbations: Suppose H is Hermitian and non-diagonalizable. We can perturb H into a matrix Hϵ that is still Hermitian, but whose characteristic polynomial has distinct roots. Hϵ is then diagonalizable, and since Hϵ is Hermitian, its eigenvectors are orthogonal (we may take them to be orthonormal). As ϵ→0, the eigenvalues and eigenvectors of Hϵ approach eigenvalues and eigenvectors of H. Because the eigenvectors of Hϵ are orthonormal for every ϵ, their limits are also orthonormal, and in particular no two of them coincide. (Strictly speaking, we need to invoke compactness and pass to subsequences to show that these limits exist.) So H has as many linearly independent eigenvectors as Hϵ, and so is diagonalizable, contradicting our assumption.
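To see this mechanism numerically (a sketch only; the 3×3 matrix H and the perturbation direction P below are hypothetical choices, and this does not replace the compactness argument), one can check that the eigenvectors of Hϵ stay orthonormal for every ϵ, so their limits are orthonormal too.

```python
import numpy as np

# Hypothetical Hermitian matrix with a repeated eigenvalue (1 occurs twice).
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# A fixed Hermitian perturbation direction; for small eps it splits the repeated eigenvalue.
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

for eps in [1e-1, 1e-3, 1e-6]:
    H_eps = H + eps * P              # still Hermitian
    w, U = np.linalg.eigh(H_eps)     # orthonormal eigenvectors in the columns of U
    # Orthonormality holds for every eps (up to round-off), so any limiting
    # vectors are also orthonormal, hence linearly independent.
    print(eps, w, np.allclose(U.conj().T @ U, np.eye(3)))
```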
This argument depends crucially on the fact that the eigenvectors of a Hermitian operator (with different eigenvalues) are orthogonal. If you try to apply this argument to a non-Hermitian matrix such as A = (0 1; 0 0) (rows separated by semicolons), it doesn't work. The perturbed matrix Aϵ = (0 1; ϵ² 0) has eigenvalues ±ϵ with eigenvectors (1, ±ϵ), which are almost parallel, and as ϵ→0 both eigenvectors approach the same limit (1, 0).
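The same kind of numerical check makes this failure visible (a sketch; np.linalg.eig returns the eigenvectors as the columns of V, and we simply measure how parallel they become):

```python
import numpy as np

for eps in [1e-1, 1e-2, 1e-3]:
    # Perturbation of the non-Hermitian matrix A = (0 1; 0 0).
    A_eps = np.array([[0.0,    1.0],
                      [eps**2, 0.0]])
    w, V = np.linalg.eig(A_eps)      # eigenvalues +eps and -eps; eigenvectors in the columns of V
    v1, v2 = V[:, 0], V[:, 1]
    # Cosine of the angle between the two eigenvectors: it tends to 1,
    # i.e. the eigenvectors become parallel and share the limit (1, 0).
    cos_angle = abs(np.vdot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    print(eps, w, cos_angle)
```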