Videos for Section 2.2


Let $\mathcal{B} = \{{\bf b}_1, {\bf b}_2, \ldots, {\bf b}_n\}$ be a collection of $n$ vectors in a vector space $V$. A linear combination of these vectors is any vector of the form $${\bf v} = a_1 {\bf b}_1 +a_2 {\bf b}_2 + \cdots + a_n {\bf b}_n.$$ The set of all linear combinations of the elements of $\mathcal{B}$ is called the span of $\mathcal{B}$, and is denoted $\mathrm{Span}(\mathcal{B})$. This is always a subspace of $V$. If $\mathrm{Span}(\mathcal{B})=V$, then we say that $\mathcal{B}$ spans $V$.
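For example, take $\mathcal{B} = \{(1,0,0), (0,1,0)\}$ in ${\bf R}^3$. Every linear combination has the form $$a_1 (1,0,0) + a_2 (0,1,0) = (a_1, a_2, 0),$$ so $\mathrm{Span}(\mathcal{B})$ is the $xy$-plane. This is a subspace of ${\bf R}^3$, but it is not all of ${\bf R}^3$, so this $\mathcal{B}$ does not span ${\bf R}^3$.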



We say that $\mathcal{B}$ is linearly independent if the only way to write the zero vector as a linear combination is as $0 {\bf b}_1 + 0 {\bf b}_2 + \cdots +0 {\bf b}_n $. In other words, $\mathcal{B}$ is linearly independent if $$0 = a_1 {\bf b}_1 +a_2 {\bf b}_2 + \cdots + a_n {\bf b}_n$$ implies that $a_1=a_2 = \cdots = a_n = 0$.
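As an illustrative check, consider $\{(1,0), (1,1)\}$ in ${\bf R}^2$. Setting a combination equal to zero gives $$0 = a_1 (1,0) + a_2 (1,1) = (a_1 + a_2,\ a_2).$$ The second coordinate forces $a_2 = 0$, and then the first coordinate forces $a_1 = 0$. Since the only solution is the trivial one, this collection is linearly independent.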

If $\mathcal{B}$ isn't linearly independent, we say that the collection is linearly dependent. As long as we have at least two vectors, this means that one of the vectors can be expressed as a linear combination of the others. (That's a theorem, not a self-evident fact. See the book for a proof.)
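For a concrete dependent example, take $\{(1,2), (2,4)\}$ in ${\bf R}^2$. Here $$2(1,2) - (2,4) = (0,0)$$ is a linear combination equal to zero with nonzero coefficients, so the collection is linearly dependent. Equivalently, one vector is a combination of the other: $(2,4) = 2(1,2)$.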



When $V={\bf R}^m$, we can view $\mathcal{B}$ as the columns of an $m \times n$ matrix $A$. The properties of the vectors are closely related to the rank of $A$: $\mathcal{B}$ is linearly independent exactly when the rank of $A$ is $n$, and $\mathcal{B}$ spans ${\bf R}^m$ exactly when the rank of $A$ is $m$.
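The rank test above can be sketched numerically. Here is a minimal NumPy example (the matrix is an illustrative choice, not from the notes): its columns are ${\bf b}_1=(1,0,0)$, ${\bf b}_2=(0,1,0)$, ${\bf b}_3=(1,1,0)$, which are dependent since ${\bf b}_3 = {\bf b}_1 + {\bf b}_2$, and which fail to span ${\bf R}^3$ since every combination has third coordinate $0$.

```python
import numpy as np

# Columns of A are the vectors b1, b2, b3 in R^3 (m = 3, n = 3).
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 0.]])

m, n = A.shape
r = np.linalg.matrix_rank(A)

print(r)        # 2
print(r == n)   # False: the columns are linearly dependent (b3 = b1 + b2)
print(r == m)   # False: the columns do not span R^3
```

Dropping the dependent third column and adding $(0,0,1)$ as a column would raise the rank to $3$, making the columns both independent and spanning.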