Section 19 Geometry in \(\mathbb{R}^n\)
19.1 Dot Product
We will look at the geometry of \(n\)-dimensional vectors. For example, here are three vectors in \(\mathbb{R}^6\).
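The code chunk that defined these vectors is not visible in this rendering. The definitions below are a reconstruction — the names u, v, and w are assumptions — but these entries reproduce every numerical output that follows in this section.

```r
# Three vectors in R^6 (names and entries reconstructed from the outputs below)
u <- c(1, 2, 3, 4, 5, 6)
v <- c(0, 2, 3, 4, 4, 5)
w <- c(0, 0, -2, 0, 0, 1)
```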
We can compute the dot product in two different ways. If you have loaded the pracma library, you can use the dot command.
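A sketch of the pracma approach, assuming the reconstructed vectors u, v, and w above:

```r
library(pracma)  # provides dot()

# reconstructed vectors (names assumed)
u <- c(1, 2, 3, 4, 5, 6)
v <- c(0, 2, 3, 4, 4, 5)
w <- c(0, 0, -2, 0, 0, 1)

dot(u, v)  # 79
dot(v, w)  # -1
dot(u, w)  # 0
```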
## [1] 79
## [1] -1
## [1] 0
We can also compute the dot product in native R as the matrix product \(u^T v\). It is important to remember that this works: the \(1 \times n\) matrix \(u^T\) times the \(n \times 1\) matrix \(v\) is a \(1 \times 1\) matrix whose single entry is \(u \cdot v\).
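In native R this looks like the following (again using the reconstructed vectors; note that each product is a \(1 \times 1\) matrix rather than a number):

```r
u <- c(1, 2, 3, 4, 5, 6)
v <- c(0, 2, 3, 4, 4, 5)
w <- c(0, 0, -2, 0, 0, 1)

t(u) %*% v  # 1x1 matrix containing 79
t(v) %*% w  # 1x1 matrix containing -1
t(u) %*% w  # 1x1 matrix containing 0
```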
##      [,1]
## [1,]   79
##      [,1]
## [1,]   -1
##      [,1]
## [1,]    0
19.2 Length, Distance, Angle
The length of a vector can be computed using \(\sqrt{u\cdot u}\) or as the built-in 2-norm of the vector. It is called the 2-norm because we square the entries (second power) and take the square root of their sum.
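A sketch of both computations, assuming the reconstructed vector u from above; base R's norm expects a matrix, so the vector is wrapped with as.matrix:

```r
u <- c(1, 2, 3, 4, 5, 6)

sqrt(t(u) %*% u)         # 1x1 matrix: 9.539392
norm(as.matrix(u), "2")  # built-in 2-norm: 9.539392
```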
##          [,1]
## [1,] 9.539392
## [1] 9.539392
The distance between two vectors is the length of the difference between them.
##          [,1]
## [1,] 1.732051
## [1] 1.732051
The angle between two vectors is given by the formula
\[
\theta  = \arccos\left(\frac{ v \cdot w } {||v|| ||w||} \right)
\]
which can be computed using the arccosine function acos.
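A sketch of the angle computation with the reconstructed u and v (the result is in radians):

```r
u <- c(1, 2, 3, 4, 5, 6)
v <- c(0, 2, 3, 4, 4, 5)

acos((t(u) %*% v) / (norm(as.matrix(u), "2") * norm(as.matrix(v), "2")))
# 0.1427914 radians
```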
##           [,1]
## [1,] 0.1427914
Sometimes we use the cosine of the angle between the two vectors (we will see an example of this in the homework) \[ \cos(\theta) = \frac{ v \cdot w } {||v|| ||w||} \] which is computed as
##           [,1]
## [1,] 0.9898226
This is a number between -1 and 1. Values close to 1 mean the vectors are closely aligned, values close to 0 mean they are nearly orthogonal, and values close to -1 mean they point in nearly opposite directions.
##      [,1]
## [1,]    0
##             [,1]
## [1,] -0.05345225
19.3 Orthogonal Complement
The orthogonal complement of a vector space \(W\) is \[ W^\perp = \left\{ v \in \mathbb{R}^n \mid v \cdot w = 0 \text{ for every } w \in W \right\}. \] The orthogonal complement is a subspace. Furthermore, to see that a vector \(v\) belongs to \(W^\perp\), it is enough to check that \(v\) is orthogonal to a basis of \(W\). That is, you don’t have to check every vector in \(W\); if \(v\) is orthogonal to each basis vector, then it is orthogonal to all of \(W\).
For example, if \[ W = \mathsf{span} \left\{ \begin{bmatrix} 1\\2\\3\\4\\5 \end{bmatrix}, \begin{bmatrix} 1\\1\\1\\1\\1 \end{bmatrix}, \begin{bmatrix} 1\\2\\2\\2\\1 \end{bmatrix}, \begin{bmatrix} 3\\5\\6\\7\\7 \end{bmatrix}, \begin{bmatrix} 0\\2\\1\\0\\-4\end{bmatrix} \right\}, \] then we can put the vectors of \(W\) into the rows of a matrix. So in this case, we make the matrix
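Building the matrix can be sketched with rbind, which stacks the spanning vectors as rows:

```r
# rows of A are the spanning vectors of W
A <- rbind(c(1, 2, 3, 4, 5),
           c(1, 1, 1, 1, 1),
           c(1, 2, 2, 2, 1),
           c(3, 5, 6, 7, 7),
           c(0, 2, 1, 0, -4))
A
```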
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    1    2    3    4    5
## [2,]    1    1    1    1    1
## [3,]    1    2    2    2    1
## [4,]    3    5    6    7    7
## [5,]    0    2    1    0   -4
Now, \(W\) is the row space of \(A\). That is, \(W = \mathsf{Row}(A)\). And the row space is orthogonal to the null space. Therefore \(W^\perp = \mathsf{Nul}(A)\), so we row reduce:
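The row reduction can be done with pracma's rref function, a sketch of which is:

```r
library(pracma)  # provides rref()

A <- rbind(c(1, 2, 3, 4, 5),
           c(1, 1, 1, 1, 1),
           c(1, 2, 2, 2, 1),
           c(3, 5, 6, 7, 7),
           c(0, 2, 1, 0, -4))

rref(A)  # reduced row echelon form of A
```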
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    1    0    0    0    1
## [2,]    0    1    0   -1   -4
## [3,]    0    0    1    2    4
## [4,]    0    0    0    0    0
## [5,]    0    0    0    0    0
There are 2 free variables, so the null space and, thus, \(W^\perp\) are 2-dimensional. We describe a basis of the null space: \[ W^\perp = \mathsf{Nul}(A) = \mathsf{span} \left\{ \begin{bmatrix} 0 \\ 1 \\ -2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 4 \\ -4 \\ 0 \\ 1 \end{bmatrix} \right\}. \] We can check that these vectors are orthogonal to \(W\) by multiplying
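The check can be sketched by putting the two basis vectors into the columns of a matrix and multiplying:

```r
A <- rbind(c(1, 2, 3, 4, 5),
           c(1, 1, 1, 1, 1),
           c(1, 2, 2, 2, 1),
           c(3, 5, 6, 7, 7),
           c(0, 2, 1, 0, -4))

# basis vectors of Nul(A) as columns
B <- cbind(c(0, 1, -2, 1, 0),
           c(-1, 4, -4, 0, 1))

A %*% B  # 5x2 zero matrix: every row of A is orthogonal to both basis vectors
```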
##      [,1] [,2]
## [1,]    0    0
## [2,]    0    0
## [3,]    0    0
## [4,]    0    0
## [5,]    0    0