Section 29 Quiz 3 Review

29.1 Overview

Our third quiz covers eigenvectors and orthogonality, drawing on the following sections:

  • 5.1: Eigenvectors and eigenspaces

  • 5.2: Eigenvalues

  • 5.3: Eigenbasis and diagonalization

  • 5.5: Complex eigenvalues: rotation and dilation

  • 5.6: Dynamical systems

  • 6.1: Orthogonality: length, angle, distance, dot product, and the orthogonal complement

  • 6.2: Orthogonal sets and orthogonal matrices

  • 6.3: Orthogonal projections

  • 6.5: Least-squares projections

The best way to study is to do practice problems. The Quiz will have calculation problems (like Edfinity) and more conceptual problems (like the problem sets). Here are some ways to practice:

  • Make sure that you have mastered the Vocabulary, Concepts and Skills listed below.
  • Look over the Edfinity homework assignments.
  • Redo the Jamboard problems.
  • Look at Jamboard problems that your group didn’t get to.
  • Try to re-solve the Problem Sets and compare your answers to the solutions.
  • Do the practice problems below. Compare your answers to the solutions.

29.1.1 Vocabulary, Concepts and Skills

See the Week 5-6 Learning Goals and the Week 7-8 Learning Goals for the list of vocabulary, concepts and skills.

29.2 Practice Problems

29.2.1

Consider the \(3 \times 3\) matrix \[ A = \left[ \begin{array}{rrr} 2 & -1 & 0 \\ 0 & 1 & 0 \\ -2 & 5 & -2 \\ \end{array} \right] \] with characteristic polynomial \[ p(\lambda) = -(\lambda -1)(\lambda -2)(\lambda +2). \] Find the eigenvalues and corresponding eigenvectors for \(A\).

29.2.2

Let \(A\) be a \(2 \times 2\) matrix. We view \(A\) as a linear transformation from \(\mathbb{R}^2\) to \(\mathbb{R}^2\). Describe the eigenvalues for each of the following types of matrices.

  1. \(A\) maps all of \(\mathbb{R}^2\) onto a line through the origin in \(\mathbb{R}^2\).
  2. \(A\) is a reflection of \(\mathbb{R}^2\) about a line through the origin.
  3. \(A\) is a reflection of \(\mathbb{R}^2\) through the origin.
  4. \(A\) is a horizontal shear of the form \[ \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x + a y \\ y \end{bmatrix} \]
  5. \(A\) is the orthogonal projection onto the line \(y = 5x\).

29.2.3

Below are the eigenvalues of four different \(5 \times 5\) matrices. For each, decide if the matrix is invertible and if it is diagonalizable. Answer Yes, No or “Not enough information to determine this.”

  1. \(A\) has eigenvalues \(\lambda = -4, -3,0,1, 2\)
  2. \(B\) has eigenvalues \(\lambda = -3, -1, 1, \sqrt{2}, 8.\)
  3. \(C\) has eigenvalues \(\lambda = 1, 2, 2, 7, 8.\)
  4. \(D\) has eigenvalues \(\lambda = -1, 0, 3,3, 10\)

29.2.4

Here is the diagonalization of a matrix: \[ \mathsf{A}=\left[ \begin{array}{ccc} 5 & 2 & -1 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \\ \end{array} \right] = \left[ \begin{array}{ccc} -5 & 0 & 1 \\ -2 & 1 & -2 \\ 1 & 2 & 1 \\ \end{array} \right] \left[ \begin{array}{ccc} 6 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right]\left[ \begin{array}{ccc} -\frac{1}{6} & -\frac{1}{15} & \frac{1}{30} \\ 0 & \frac{1}{5} & \frac{2}{5} \\ \frac{1}{6} & -\frac{1}{3} & \frac{1}{6} \\ \end{array} \right]. \]

  1. Is the matrix \(\mathsf{A}\) invertible?
  2. Find a nonzero vector in \(\mathrm{Nul}(\mathsf{A})\) if one exists.
  3. Find a steady-state vector \(\mathsf{v}\) such that \(\mathsf{A} \mathsf{v} = \mathsf{v}\) if one exists.
  4. Give the coordinates of \(\mathsf{v} = [1,2,3]^T\) in the eigenbasis without row reductions.
  5. Find a formula for \(\mathsf{A}^{2020} \mathsf{v}\) if \(\mathsf{v} = [1,2,3]^T\) in terms of the eigenbasis.

29.2.5

The eigensystem of the matrix \(A\) below has complex eigenvalues. By what angle does \(A\) rotate, and by what factor does it scale? \[ \begin{bmatrix} 3 & -5 \\ 1 & -1 \end{bmatrix}, \qquad \lambda = 1 \pm i, \qquad v = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \pm \begin{bmatrix} 1 \\ 0 \end{bmatrix} i. \]

29.2.6

Using the matrix \(B = \begin{bmatrix} .97 & -.71 \\ .71 & .97 \end{bmatrix}\) and the starting vector \(\mathsf{v} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\), I plotted the points \[\mathsf{v}, B \mathsf{v}, B^2\mathsf{v}, B^3 \mathsf{v}, \ldots.\] I saw that these points are, roughly, going around in a circle.

  1. How many multiplications by \(B\) does it take to get back around to the positive \(x\)-axis?

  2. When I come full circle, am I closer to the origin, farther from the origin, or the same distance from the origin?

29.2.7

For each matrix below, decide if it is diagonalizable. You do not need to diagonalize the matrix (though you can!), but you must give a reason for why the matrix is or is not diagonalizable.

  1. \(A = \begin{bmatrix} 0 & -4 & 2 \\ 2 & -4 & -1 \\ -6 & 4 & 7 \end{bmatrix}\) has eigenvalues \(4, -1, 0\).

  2. \(B = \begin{bmatrix} 3 & -1 & 2 \\ -1 & 3 & 2 \\ 2&2 & 0 \end{bmatrix}\) has eigenvalues \(4,4,-2\).

29.2.8

Consider the matrix with eigenvalues and eigenvectors \[ A = \begin{bmatrix} 0.7 & 0.2 \\ 0.3 & 0.8 \end{bmatrix} \qquad \begin{array}{cc} \lambda_1 = 1 & \lambda_2 = .5 \\ \mathsf{v}_1 = \begin{bmatrix} 2 \\ 3 \end{bmatrix} & \mathsf{v}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \end{array} \]

  1. Diagonalize \(A\).
  2. What can you say about \(\displaystyle{\lim_{n \to \infty}} A^n\)?
  3. Give a formula for \(A^n \mathsf{x}_0\) if \(\mathsf{x}_0 = \begin{bmatrix} 25 \\ 0 \end{bmatrix}\) in terms of the eigenbasis.
  4. What is \(\displaystyle{\lim_{n \to \infty}} A^n \begin{bmatrix} 25 \\ 0 \end{bmatrix}\)?

29.2.9

The matrix \(A\) below has the given eigenvalues and eigenvectors. \[ A = \left[ \begin{array}{cc} \frac{1}{2} & \frac{1}{5} \\ -\frac{2}{5} & \frac{9}{10} \\ \end{array} \right] \qquad \begin{array}{c} \lambda = .7 \pm .2 i \\ \mathsf{v} = \begin{bmatrix} \frac{1}{2} \\ 1 \end{bmatrix} \pm \begin{bmatrix} -\frac{1}{2} \\ 0 \end{bmatrix} i \end{array} \]

  1. Factor \(A=PCP^{-1}\) where \(C\) is a rotation-scaling matrix. You won’t have to do this part on the quiz.
  2. What is the angle of rotation?
  3. What is the factor of dilation?

29.2.10

In a 1962 study of rainfall in Tel Aviv, it was determined that if today is a wet day, then the probability that tomorrow will be wet is 0.662 and the probability that tomorrow will be dry is 0.338. If today is a dry day, then the probability that tomorrow will be wet is 0.250 and the probability that tomorrow will be dry is 0.750. From this I computed the following: \[ A = \begin{bmatrix} 0.662 & 0.25 \\ 0.338 & 0.75\end{bmatrix}; \qquad \begin{array}{cc} \lambda_1 = 1.0 & \lambda_2 = 0.412 \\ \mathsf{v}_1 = \begin{bmatrix}-0.595 \\ -0.804 \end{bmatrix} & \quad \mathsf{v}_2 = \begin{bmatrix}-0.707\\ 0.707 \end{bmatrix} \end{array} \]

  1. If Monday is a dry day, what is the probability that Wednesday will be wet?
  2. In the long-run, what is the distribution of wet and dry days?

29.2.11

A population of female bison is split into three groups: juveniles who are less than one year old; yearlings between one and two years old; and adults who are older than two years. Each year,

  • 80% of the juveniles survive to become yearlings.
  • 90% of the yearlings survive to become adults.
  • 80% of the adults survive.
  • 40% of the adults give birth to a juvenile.

Let \(\mathsf{x}_t = \begin{bmatrix} J_t \\ Y_t \\ A_t \end{bmatrix}\) be the state of the system in year \(t\).

  1. Find the Leslie matrix \(L\) such that \(\mathsf{x}_{t+1} = L \mathsf{x}_t\).
  2. Find the eigenvalues of \(L\).
  3. The matrix \(L\) has two complex eigenvalues and one real eigenvalue. How do the complex eigenvalues manifest in the trajectory of the population?
  4. What is the long-term behavior of the herd? Will the size of the herd grow, stabilize or shrink? What will be the proportions of juveniles, yearlings and adults in the herd?

29.2.12

Let \(A\) and \(B\) be \(n \times n\) matrices. Suppose that \(v\) is an eigenvector of \(A\) with eigenvalue \(\lambda\) and \(v\) is an eigenvector of \(B\) with eigenvalue \(\mu\) such that \(\lambda \not= \mu\). Is \(v\) an eigenvector of either of the matrices below? If so give its eigenvalue.

  1. \(A + B\)
  2. \(AB\)

29.2.13

Suppose that \(A\) is invertible.

  1. Show that if \(v\) is an eigenvector of \(A\) with eigenvalue \(\lambda\), then \(v\) is an eigenvector of \(A^{-1}\) with eigenvalue \(1/\lambda\).

  2. If \(A\) is diagonalizable with diagonalization \(A = P D P^{-1}\), then show that \(A^{-1}\) is diagonalizable and find its diagonalization from that of \(A\).

29.2.14

Suppose that \(A\) is an \(n \times n\) matrix with eigenvector \(\vec w\) of eigenvalue \(5\) and eigenvector \(\vec v\) of eigenvalue \(-3\).

  1. Is \(\vec v + \vec w\) an eigenvector of \(A\), and if so, what is its eigenvalue?

  2. Is \(2021 \vec v\) an eigenvector of \(A\), and if so what is its eigenvalue?

  3. Is \(\vec w\) an eigenvector of \(A^2\), and if so what is its eigenvalue?

  4. Is \(\vec v\) an eigenvector of \(A - 2021 I_n\) and if so, what is its eigenvalue?

29.2.15

Let \(\mathsf{v} = \begin{bmatrix}1 \\ -1 \\ 1 \end{bmatrix}\) and \(\mathsf{w}= \begin{bmatrix}5 \\ 2 \\ 3 \end{bmatrix}\).

  1. Find \(\| \mathsf{v} \|\) and \(\| \mathsf{w} \|\).

  2. Find the distance between \(\mathsf{v}\) and \(\mathsf{w}\).

  3. Find the cosine of the angle between \(\mathsf{v}\) and \(\mathsf{w}\).

  4. Find \(\mbox{proj}_{\mathsf{v}} \mathsf{w}\).

  5. Let \(W=\mbox{span} (\mathsf{v}, \mathsf{w})\). Use the residual from the previous projection to create an orthonormal basis \(\mathsf{u}_1, \mathsf{u}_2\) for \(W\) such that \(\mathsf{u}_1\) is a vector in the same direction as \(\mathsf{v}\).

29.2.16

Let \(\mathsf{u} \neq 0\) be a vector in \(\mathbb{R}^n\). Define the function \(T: \mathbb{R}^n \rightarrow \mathbb{R}^n\) by \(T(\mathsf{x}) = \mbox{proj}_{\mathsf{u}} \mathsf{x}\). Recall that the kernel of \(T\) is the subspace \(\mbox{ker}(T) = \{ \mathsf{x} \in \mathbb{R}^n \mid T(\mathsf{x}) = \mathbf{0} \}\). Describe \(\mbox{ker}(T)\) as explicitly as you can.

29.2.17

The vectors \(\mathsf{u}_1, \mathsf{u}_2\) form an orthonormal basis of a subspace \(W\) of \(\mathbb{R}^4\). Find the projection of \(\mathsf{v}\) onto \(W\) and determine how close \(\mathsf{v}\) is to \(W\). \[ \mathsf{u}_1 = \frac{1}{2}\begin{bmatrix} 1\\ -1\\ -1\\ 1 \end{bmatrix}, \quad \mathsf{u}_2 = \frac{1}{2}\begin{bmatrix} 1\\ -1\\ 1\\ -1 \end{bmatrix}, \quad \mathsf{v} = \begin{bmatrix} 2\\ 2\\ 4\\ 2 \end{bmatrix} \]

29.2.18

Consider vectors \(\mathsf{v}_1 = \begin{bmatrix} 1 \\ 1 \\-1 \end{bmatrix}\) and \(\mathsf{v}_2= \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\) in \(\mathbb{R}^3\). Let \(W=\mbox{span}(\mathsf{v}_1, \mathsf{v}_2)\).

  1. Show that \(\mathsf{v}_1\) and \(\mathsf{v}_2\) are orthogonal.

  2. Find a basis for \(W^{\perp}\).

  3. Use orthogonal projections to find the representation of \(\mathsf{y} = \begin{bmatrix} 8 \\ 0 \\ 2 \end{bmatrix}\) as \(\mathsf{y} = \hat{\mathsf{y}} + \mathsf{z}\) where \(\hat{\mathsf{y}} \in W\) and \(\mathsf{z} \in W^{\perp}\).

29.2.19

Let \(W\) be the span of the vectors \[ \begin{bmatrix} 1 \\ -2 \\ 1 \\ 0 \\1 \end{bmatrix}, \quad \begin{bmatrix} -1 \\ 3 \\ -1 \\ 1 \\ -1 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 0 \\ 1 \\ 3 \\1 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 2 \\ 0 \\ 0 \\4 \end{bmatrix} \]

  1. Find a basis for \(W\). What is the dimension of this subspace?
  2. Find a basis for \(W^{\perp}\)

29.2.20

Consider the system \(A \mathsf{x} = \mathsf{b}\) given by \[ \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & -1 \\ 1 & 1 & -1 \\ 1 & 2 & 1 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 4\\ 1 \\ -2 \\ -1 \end{bmatrix}. \]

  1. Show that this system is inconsistent.
  2. Find the projected value \(\hat{\mathsf{b}}\), and the residual \(\mathsf{z}\).
  3. How close is your approximate solution to the desired target vector?

29.2.21

Here is an inconsistent system of equations: \[ \begin{bmatrix} 1 & 2 \\ 1 & 2 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 6\\ 4 \\ -4 \end{bmatrix} \]

  1. State the normal equations for this problem (be sure to do all of the necessary matrix multiplications).

  2. Find the least squares solution to the problem.

  3. How close is your approximate solution to the desired target vector?

29.2.22 Watch this!

The answer to at least one question on Quiz 3 is contained in this video.

29.3 Solutions to Practice Problems

29.3.1

There are three eigenvalues: 1, 2, and \(-2\). We find an eigenvector for each of them.

  • Eigenvalue \(\lambda = 1\) \[ A - I = \left[ \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 0 & 0 \\ -2 & 5 & -3 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 3 & -3 \\ 0 & 0 & 0 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \\ \end{array} \right] \] So one eigenvector is \([1,1,1]^{\top}\).

  • Eigenvalue \(\lambda = 2\) \[ A - 2I = \left[ \begin{array}{rrr} 0 & -1 & 0 \\ 0 & -1 & 0 \\ -2 & 5 & -4 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} -2 & 5 & -4 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} -2 & 0 & -4 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right] \] So one eigenvector is \([-2,0,1]^{\top}\).

  • Eigenvalue \(\lambda = -2\) \[ A + 2I = \left[ \begin{array}{rrr} 4 & -1 & 0 \\ 0 & 3 & 0 \\ -2 & 5 & 0 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} 4 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 0 \\ \end{array} \right] \sim \left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right] \] So one eigenvector is \([0,0,1]^{\top}\).
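We can verify these eigenpairs in RStudio (a quick sketch; each product should be the claimed multiple of the vector):

A = cbind(c(2,0,-2), c(-1,1,5), c(0,0,-2))
A %*% c(1,1,1)    # equals  1 * (1,1,1)
A %*% c(-2,0,1)   # equals  2 * (-2,0,1)
A %*% c(0,0,1)    # equals -2 * (0,0,1)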

29.3.2

In this problem, we are to think about the geometry of a 2D transformation, and see if we can find any vectors which get re-scaled by the transformation. The direction of these vectors cannot change (other than to flip to the opposite direction).

  1. This maps all of \(\mathbb{R}^2\) to a line. Therefore it is not one-to-one, nor onto, and so it is not invertible. This means that \(\lambda = 0\) is an eigenvalue. Any vector that is already on the line must stay on the line, so it is an eigenvector, but we don’t know its eigenvalue. Thus, one eigenvalue is \(\lambda_1 = 0\); the other eigenvalue \(\lambda_2\) cannot be determined from the given information.

  2. There are two kinds of eigenvectors. Those vectors on the line are fixed, so they are eigenvectors of eigenvalue 1. Vectors that are perpendicular to the line get sent to their negatives, so they are eigenvectors of eigenvalue \(-1\). Thus, the eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2=-1\). Perhaps this is easiest to understand by looking at a particular example: a reflection across the \(y\)-axis, which means that \(A [ x, y]^{\top} = [-x, y]^{\top}\). In particular, we have \(A [ 1, 0]^{\top} = [ -1, 0]^{\top} = - [ 1, 0]^{\top}\), so \(-1\) is an eigenvalue. Meanwhile, we also have \(A [ 0, 1]^{\top} = [ 0, 1]^{\top}\), so \(1\) is an eigenvalue.

  3. In this transformation, every vector gets sent to its negative. \[ T\left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right) = \begin{bmatrix} -x_1 \\ -x_2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \] This means that every vector is an eigenvector of eigenvalue \(-1\). The eigenvalues are \(\lambda_1 = \lambda_2=-1\).

  4. A horizontal shear (we did not talk about these very much) has a matrix of the form \[ \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x + a y \\ y \end{bmatrix} \] It fixes the \(x\)-axis since \((x,0)^\top\) maps to \((x,0)^\top\), but no other directions are fixed. You can see by the fact that the matrix is upper triangular that the eigenvalues are on the diagonal and are \(\lambda_1 = \lambda_2 = 1\). Note: if you calculate, you find that the geometric multiplicity of \(\lambda = 1\) is 1 (only the \(x\)-axis), and this matrix is not diagonalizable. The only eigenspace is the \(x\)-axis.

  5. Vectors on the line \(y = 5x\) are fixed by the projection, so they are eigenvectors of eigenvalue 1. Vectors perpendicular to the line project to \(\mathbf{0}\), so they are eigenvectors of eigenvalue 0. Thus, the eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2 = 0\); see the check after this list.
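As a check on part 5 (a sketch; the direction vector \((1,5)^{\top}\) spans the line \(y = 5x\)), we can build the projection matrix and ask RStudio for its eigenvalues:

u = c(1, 5)
P = (u %*% t(u)) / sum(u * u)   # projection matrix onto span(u)
eigen(P)$values                 # 1 and 0, as predicted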

29.3.3

  1. \(A\) is not invertible because \(0\) is an eigenvalue. \(A\) is diagonalizable because it has 5 distinct eigenvalues.
  2. \(B\) is invertible because \(0\) is not an eigenvalue. \(B\) is diagonalizable because it has 5 distinct eigenvalues.
  3. \(C\) is invertible because \(0\) is not an eigenvalue. We cannot tell whether \(C\) is diagonalizable without more information. The eigenvalue \(\lambda=2\) has algebraic multiplicity 2. We need to know whether the geometric multiplicity is 1 or 2.
  4. \(D\) is not invertible because \(0\) is an eigenvalue. We cannot tell whether \(D\) is diagonalizable without more information. The eigenvalue \(\lambda=3\) has algebraic multiplicity 2. We need to know whether the geometric multiplicity is 1 or 2.

29.3.4

  1. No, \(A\) is not invertible because \(0\) is an eigenvalue.
  2. \(\mathsf{v} = [1, -2, 1]^{\top}\) is an eigenvector for \(\lambda=0\). Therefore \(\mathsf{v} \in \mbox{Nul}(A)\).
  3. The vector \(\mathsf{v} = [0,1,2]^{\top}\) is an eigenvector for \(\lambda=1\). So this is a steady-state vector. (However, the dynamical system will not converge to this steady state because \(\lambda=6\) is the dominant eigenvalue.)
  4. When \(A=P D P^{-1}\), we can find the coordinates of a vector with respect to the eigenbasis via multiplication by \(P^{-1}\).
Pinv = cbind(c(-1/6,0,1/6), c(-1/15,1/5,-1/3), c(1/30,2/5,1/6))
v = c(1,2,3)

Pinv %*% v
##      [,1]
## [1,] -0.2
## [2,]  1.6
## [3,]  0.0

So \([ \mathsf{v}]_{\mathcal{B}} = [-1/5, 8/5, 0]^{\top}\).

  5. \(\mathsf{A}^{2020} \mathsf{v} = -\frac{1}{5} \cdot 6^{2020} \cdot \begin{bmatrix} -5 \\ -2 \\ 1 \end{bmatrix} + \frac{8}{5} \cdot \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}\)

29.3.5

This system scales by \(\sqrt{1+1} = \sqrt{2}\) and it rotates by \(\tan^{-1} (1/1) = \pi/4\).
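Equivalently, the scaling factor is the modulus of \(\lambda = 1 + i\) and the rotation angle is its argument, which we can compute directly in RStudio (a quick sketch):

lambda = 1 + 1i
Mod(lambda)   # 1.414214 = sqrt(2), the scaling factor
Arg(lambda)   # 0.7853982 = pi/4, the rotation angle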

29.3.6

We have \[ \begin{bmatrix} a & -b \\ b & a \end{bmatrix} = \begin{bmatrix} .97 & -.71\\ .71 & .97 \end{bmatrix} \]

Let’s turn to RStudio.

a = .97
b = .71

scale = sqrt(a^2+b^2)
angle = atan(b/a)

scale
## [1] 1.202082
angle
## [1] 0.6318544
2 * pi / angle
## [1] 9.94404
  1. It takes 10 iterations to rotate past the \(x\)-axis.
  2. We are farther from the origin because \(| \lambda| \approx 1.2 > 1\).

29.3.7

  1. The matrix \(A\) is diagonalizable because it has 3 distinct eigenvalues
  2. We must see whether \(\lambda=4\) has geometric multiplicity 2 (to match its algebraic multiplicity).
rref( cbind(c(-1,-1,2), c(-1,-1,2), c(2,2,-4)))
##      [,1] [,2] [,3]
## [1,]    1    1   -2
## [2,]    0    0    0
## [3,]    0    0    0

We see that \(B - 4I\) has two free columns, so \(\dim ( \mbox{Nul}(B-4I))=2\). This means that \(\lambda=4\) has geometric multiplicity 2. Therefore \(B\) is diagonalizable.

29.3.8

  1. We set \(P = \begin{bmatrix} 2 & 1 \\ 3 & -1 \end{bmatrix}\). So \[ P^{-1} = - \frac{1}{5} \begin{bmatrix} -1 & -1 \\ -3 & 2 \end{bmatrix} = \begin{bmatrix} 0.2 & 0.2 \\ 0.6 & -0.4 \end{bmatrix} \] Or we can find this inverse using RStudio.
P = cbind(c(2,3),c(1,-1))
solve(P)
##      [,1] [,2]
## [1,]  0.2  0.2
## [2,]  0.6 -0.4

Therefore \[ A = \begin{bmatrix} 0.7 & 0.2 \\ 0.3 & 0.8 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 3 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0.5 \end{bmatrix} \begin{bmatrix} 0.2 & 0.2 \\ 0.6 & -0.4 \end{bmatrix} \]

  2. We compute this as follows:

\[ A^n = \begin{bmatrix} 2 & 1 \\ 3 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0.5 \end{bmatrix}^n \begin{bmatrix} 0.2 & 0.2 \\ 0.6 & -0.4 \end{bmatrix}= \begin{bmatrix} 2 & 1 \\ 3 & -1 \end{bmatrix} \begin{bmatrix} 1^n & 0 \\ 0 & 0.5^n \end{bmatrix} \begin{bmatrix} 0.2 & 0.2 \\ 0.6 & -0.4 \end{bmatrix} \] so \[ \lim_{n\to \infty} A^n = \begin{bmatrix} 2 & 1 \\ 3 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0.2 & 0.2 \\ 0.6 & -0.4 \end{bmatrix} =\begin{bmatrix} 0.4 & 0.4 \\ 0.6 & 0.6 \end{bmatrix} \]
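We can sanity-check this limit numerically (a sketch; \(n = 50\) is already close enough to the limit since \(0.5^{50}\) is tiny):

A = cbind(c(0.7,0.3), c(0.2,0.8))
An = diag(2)
for (i in 1:50) An = An %*% A   # compute A^50
An                              # approximately [0.4, 0.4; 0.6, 0.6]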

  3. We need to find the coefficients for \(x_0 = [25, 0]^{\top}\).
P = cbind(c(2,3), c(1,-1))
v = c(25,0)
solve(P,v)
## [1]  5 15

So the formula is \[ 5 \begin{bmatrix} 2 \\ 3 \end{bmatrix} + 15 \left( \frac{1}{2} \right)^n \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]

  4. This converges to \(5 \begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 10 \\ 15 \end{bmatrix}\).

29.3.9

  1. We have \[ A = \left[ \begin{array}{cc} \frac{1}{2} & \frac{1}{5} \\ -\frac{2}{5} & \frac{9}{10} \\ \end{array} \right] = \begin{bmatrix} -1/2 & 1/2 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0.7 & -0.2 \\ 0.2 & 0.7 \end{bmatrix} \begin{bmatrix} -2 & 1 \\ 0 & 1 \end{bmatrix} \] Here are some R calculations to check the answer for (a) and to find the values for (b) and (c).
A = cbind(c(1/2,-2/5), c(1/5,9/10))
A
##      [,1] [,2]
## [1,]  0.5  0.2
## [2,] -0.4  0.9
eigen(A)
## eigen() decomposition
## $values
## [1] 0.7+0.2i 0.7-0.2i
## 
## $vectors
##                      [,1]                 [,2]
## [1,] 0.4082483-0.4082483i 0.4082483+0.4082483i
## [2,] 0.8164966+0.0000000i 0.8164966+0.0000000i
P = cbind(c(-1/2,0),c(1/2,1))
C = cbind(c(.7,.2),c(-.2,.7))
Pinv = solve(P)

Pinv
##      [,1] [,2]
## [1,]   -2    1
## [2,]    0    1
P %*% C %*% Pinv
##      [,1] [,2]
## [1,]  0.5  0.2
## [2,] -0.4  0.9
atan(.2/.7)
## [1] 0.2782997
sqrt(.7^2 + .2^2)
## [1] 0.728011
  2. The angle of rotation is \(\tan^{-1} (.2/.7) = 0.278\) radians.
  3. The dilation factor is \(\sqrt{0.49 + 0.04} = \sqrt{0.53} = 0.728\).

29.3.10

Let’s use RStudio.

A = cbind(c(0.662, 0.338),c(0.25,  0.75))
A %*% A %*% c(0,1)
##       [,1]
## [1,] 0.353
## [2,] 0.647
v1 = c(-0.595, -0.804 )
v1/sum(v1)
## [1] 0.4253038 0.5746962
  1. If Monday is dry, then the probability of a wet Wednesday is \(0.353\). The easiest way to calculate this is \(A^2 \begin{bmatrix} 0 \\ 1 \end{bmatrix}\), since the second entry of the state vector corresponds to a dry day.
  2. In the long run, \(42.5\%\) of days are wet and \(57.5\%\) of days are dry.

29.3.11

  1. Here is the Leslie matrix, as well as some eigensystem computations.
L = cbind(c(0,.8,0),c(0,0,.9),c(.4,0,.8))
L
##      [,1] [,2] [,3]
## [1,]  0.0  0.0  0.4
## [2,]  0.8  0.0  0.0
## [3,]  0.0  0.9  0.8
(vals = eigen(L)$values)
## [1]  1.0575217+0.0000000i -0.1287609+0.5057227i -0.1287609-0.5057227i
Mod(vals)
## [1] 1.0575217 0.5218571 0.5218571
vecs = eigen(L)$vectors
v = vecs[,1]
Re(v/sum(v)) # get it to sum to 1 AND remove the 0 imaginary part
## [1] 0.2272578 0.1719172 0.6008250
  2. The eigenvalues are \(1.058\) and \(-0.129 \pm 0.506 i\). The complex eigenvalues have modulus 0.52, so their contribution shrinks away pretty quickly.
  3. If we start outside of the span of the dominant eigenvector, then the trajectory oscillates slightly until it settles into the direction of the dominant eigenvector, with an overall growth trend of \(1.058\), or \(5.8\%\) per year.
  4. The size of the herd grows. The proportions are \([0.227, 0.172, 0.601]\).
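To see parts 3 and 4 concretely, here is a short simulation sketch (the starting herd of 100 juveniles is an arbitrary choice):

L = cbind(c(0,.8,0), c(0,0,.9), c(.4,0,.8))
x = c(100, 0, 0)             # made-up start: 100 juveniles, nothing else
for (t in 1:20) x = L %*% x  # run the system for 20 years
x / sum(x)                   # close to the stable proportions (0.227, 0.172, 0.601)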

29.3.12

  1. \((A + B) v = A v + B v = \lambda v + \mu v = (\lambda + \mu) v\), so yes, \(v\) is an eigenvector of \(A+B\) of eigenvalue \(\lambda + \mu\).

  2. \(A B v = A (B v) = A (\mu v) = \mu (A v) = \mu \lambda v\), so yes, \(v\) is an eigenvector of \(AB\) of eigenvalue \(\lambda\mu\).

29.3.13

  1. We are given \(A v = \lambda v\). Thus,

\[ \begin{array}{cccl} A v & = & \lambda v & \text{given} \\ A^{-1} A v & = & \lambda A^{-1} v & \text{multiply on the left by $A^{-1}$} \\ v & = & \lambda A^{-1} v \\ \frac{1}{\lambda} v & = & A^{-1} v \\ \end{array} \] This shows that \(A^{-1} v = \frac{1}{\lambda} v\), so \(v\) is an eigenvector of \(A^{-1}\) with eigenvalue \(\frac{1}{\lambda}\).

  2. (method 1) If \(A\) is diagonalizable, then there is a basis \(\{v_1, v_2, \ldots, v_n\}\) of eigenvectors of \(A\) with eigenvalues \(\lambda_1, \lambda_2, \ldots, \lambda_n\). By the previous part, \(v_1, v_2, \ldots, v_n\) are eigenvectors of \(A^{-1}\) with eigenvalues \(1/\lambda_1, 1/\lambda_2, \ldots, 1/\lambda_n\). Thus \(A^{-1}\) has the same eigenbasis, and the diagonalization of \(A^{-1}\) is \[ A^{-1} = \underbrace{ \begin{bmatrix} \vert &\vert &&\vert \\ v_1 & v_2 & \cdots & v_n \\ \vert &\vert &&\vert \\ \end{bmatrix} }_P \begin{bmatrix} 1/\lambda_1 & & & \\ & 1/\lambda_2 & & \\ & & \ddots \\ & & & 1/\lambda_n \\ \end{bmatrix} P^{-1} \] (method 2) If \(A = P D P^{-1}\), then by the fact that the order reverses when computing inverses (the shoes-and-socks property), we have \(A^{-1} = (P D P^{-1})^{-1} = (P^{-1})^{-1} D^{-1} P^{-1} = P D^{-1} P^{-1}.\) Furthermore \(D^{-1}\) is a diagonal matrix such that \[ \text{if} \qquad D = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \\ \end{bmatrix} \qquad\text{then}\qquad D^{-1} = \begin{bmatrix} 1/\lambda_1 & & & \\ & 1/\lambda_2 & & \\ & & \ddots & \\ & & & 1/\lambda_n \\ \end{bmatrix} \] Since \(A\) is invertible, 0 is not an eigenvalue, so no \(1/\lambda_i\) involves division by zero.
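As a numerical illustration (a sketch reusing the matrix from 29.2.1, which is invertible because 0 is not one of its eigenvalues):

A = cbind(c(2,0,-2), c(-1,1,5), c(0,0,-2))
eigen(A)$values          # 2, 1, -2 in some order
eigen(solve(A))$values   # the reciprocals 1, 1/2, -1/2 in some order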

29.3.14

We are given that \(A w = 5 w\) and \(A v = -3 v\).

  1. \(A (v + w) = A v + A w = -3 v + 5 w \not = \lambda(v + w)\) for any \(\lambda\), so \(v + w\) is not an eigenvector of \(A\). Note: it would be if they had the same eigenvalue.

  2. \(A (2021 v) = 2021 A v = 2021 (-3) v = (-3) (2021 v)\) so \(2021 v\) is an eigenvector also of eigenvalue \(-3\).

  3. \(A^2 w = A (A w) = A (5 w) = 5 (A w) = 5 (5 w) = 25 w\), so \(w\) is an eigenvector of \(A^2\) of eigenvalue 25.

  4. \((A - 2021 I_n)v = A v - 2021 I_n v = -3 v - 2021 v = -2024 v\), so \(v\) is an eigenvector of \(A - 2021 I_n\) of eigenvalue \(-2024\).
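Here is a numerical illustration (a made-up diagonal matrix with the given eigenvalues, taking \(\vec w = e_1\) and \(\vec v = e_2\)):

A = diag(c(5, -3)); w = c(1, 0); v = c(0, 1)
A %*% (v + w)               # (5, -3): not a scalar multiple of v + w = (1, 1)
A %*% (2021 * v)            # -3 * (2021 v)
(A %*% A) %*% w             # 25 * w
(A - 2021 * diag(2)) %*% v  # -2024 * v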

29.3.15

  1. \[\begin{align} \| \mathsf{v} \| &= \sqrt{ \mathsf{v} \cdot \mathsf{v}} = \sqrt{1+1+1} = \sqrt{3} \\ \| \mathsf{w} \| &= \sqrt{ \mathsf{w} \cdot \mathsf{w}} = \sqrt{25+4+9} = \sqrt{38} \\ \end{align}\]

  2. We have \(\mathsf{v} - \mathsf{w} = \begin{bmatrix} -4 \\ -3 \\ -2 \end{bmatrix}\) and so \[ \| \mathsf{v} - \mathsf{w}\| = \sqrt{16+9+4} = \sqrt{29} \]

  3. \[ \cos \theta = \frac{\mathsf{v} \cdot \mathsf{w}}{\| \mathsf{v} \| \, \|\mathsf{w} \| } = \frac{5-2+3}{\sqrt{3} \, \sqrt{38} } = \frac{2\sqrt{3}}{\sqrt{38} } \]

  4. \[ \hat{\mathsf{w}} = \mbox{proj}_{\mathsf{v}} \mathsf{w} = \frac{\mathsf{v} \cdot \mathsf{w}}{ \mathsf{v} \cdot \mathsf{v} } \, \mathsf{v} = \frac{5-2+3}{1+1+1} \mathsf{v} = 2 \mathsf{v} = \begin{bmatrix} 2 \\ -2 \\ 2 \end{bmatrix} \]

  5. Using \(\hat{\mathsf{w}}\) from the previous problem, we know that \[ \mathsf{z} = \mathsf{w} - \hat{\mathsf{w}} = \begin{bmatrix} 5 \\ 2 \\ 3 \end{bmatrix} - \begin{bmatrix} 2 \\ -2 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix} \] is orthogonal to \(\mathsf{v}\). So an orthonormal basis is \[ \frac{1}{\sqrt{3}} \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} \quad \mbox{and} \quad \frac{1}{\sqrt{26}} \begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix} \]
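All five parts can be checked with a few lines in RStudio (a sketch using base vector arithmetic):

v = c(1, -1, 1); w = c(5, 2, 3)
sqrt(sum(v^2)); sqrt(sum(w^2))                  # ||v|| = sqrt(3), ||w|| = sqrt(38)
sqrt(sum((v - w)^2))                            # distance sqrt(29)
sum(v * w) / (sqrt(sum(v^2)) * sqrt(sum(w^2)))  # cosine of the angle
what = (sum(v * w) / sum(v^2)) * v              # projection (2, -2, 2)
z = w - what                                    # residual (3, 4, 1)
u1 = v / sqrt(sum(v^2)); u2 = z / sqrt(sum(z^2))  # the orthonormal basis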

29.3.16

Here are a few ways to describe \(\mbox{ker}(T)\).

  1. \(\mbox{ker}(T) = \{ \mathsf{x} \in \mathbb{R}^n \mid \mathsf{x} \cdot \mathsf{u} = 0 \}\).

  2. \(\mbox{ker}(T)\) is the set of vectors that are orthogonal to \(\mathsf{u}\).

  3. Let \(A\) be the \(1 \times n\) matrix \(\mathsf{u}^{\top}\). Then \(\mbox{ker}(T)= \mbox{Nul}(A)\).
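For a concrete example (a sketch with a made-up vector \(\mathsf{u} \in \mathbb{R}^3\)), every vector orthogonal to \(\mathsf{u}\) projects to \(\mathbf{0}\):

u = c(1, 2, 2)
proj = function(x) (sum(u * x) / sum(u * u)) * u  # proj_u(x)
proj(c(2, -1, 0))  # (0,0,0): orthogonal to u, so in ker(T)
proj(c(0, 1, -1))  # (0,0,0): also in ker(T)
proj(c(1, 0, 0))   # nonzero: not in ker(T)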

29.3.17

We have \(\mathsf{u}_1 \cdot \mathsf{v} = \frac{1}{2}(2-2-4+2)=-1\) and \(\mathsf{u}_2 \cdot \mathsf{v} = \frac{1}{2}(2-2+4-2)=1\), so \[ \hat{\mathsf{v}} = \mbox{proj}_W \mathsf{v} = - \mathsf{u}_1 + \mathsf{u}_2 = -\frac{1}{2} \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix} + \frac{1}{2} \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} \] with residual vector \[ \mathsf{z} = \mathsf{v} - \hat{\mathsf{v}} = \begin{bmatrix} 2 \\ 2 \\ 4 \\ 2 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \\ 3 \\ 3 \end{bmatrix} \] and the distance is \(\| \mathsf{z} \| = \sqrt{4+4+9+9} = \sqrt{26}\).
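In RStudio (a quick sketch), the projection onto an orthonormal basis is just two dot products:

u1 = c(1, -1, -1, 1) / 2; u2 = c(1, -1, 1, -1) / 2
v = c(2, 2, 4, 2)
vhat = sum(u1 * v) * u1 + sum(u2 * v) * u2  # projection (0, 0, 1, -1)
z = v - vhat                                # residual (2, 2, 3, 3)
sqrt(sum(z^2))                              # distance sqrt(26)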

29.3.18

  1. \(\mathsf{v}_1 \cdot \mathsf{v}_2 = 1 +2 - 3 =0\).

  2. We must find \(\mbox{Nul}(A)\) where \(A = \begin{bmatrix} \mathsf{v}_1^{\top} \\ \mathsf{v}_2^{\top}\end{bmatrix}\).

\[ \begin{bmatrix} 1 & 1 & -1 \\ 1 & 2 & 3 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 1 & -1 \\ 0 & 1 & 4 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 0 & -5 \\ 0 & 1 & 4 \end{bmatrix} \] so the vector \(\begin{bmatrix} 5 \\ -4 \\ 1 \end{bmatrix}\) is a basis for \(W^{\perp}\).

  3. We have \[\begin{align} \hat{\mathsf{y}} &= \frac{\mathsf{y} \cdot \mathsf{v_1}}{\mathsf{v_1} \cdot \mathsf{v_1}} \, \mathsf{v_1} + \frac{\mathsf{y} \cdot \mathsf{v_2}}{\mathsf{v_2} \cdot \mathsf{v_2}} \, \mathsf{v_2} = \frac{8-2}{1+1+1} \mathsf{v_1} + \frac{8+6}{1+4+9} \mathsf{v_2} \\ &= 2\mathsf{v_1} +\mathsf{v_2} = \begin{bmatrix} 2 \\ 2 \\ -2 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix} \end{align}\] and so \[ \mathsf{z} = \mathsf{y} - \hat{\mathsf{y}} = \begin{bmatrix} 8 \\ 0 \\ 2 \end{bmatrix} - \begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ -4 \\ 1 \end{bmatrix}. \]
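Here is the same computation as a sketch in RStudio:

v1 = c(1, 1, -1); v2 = c(1, 2, 3); y = c(8, 0, 2)
yhat = (sum(y * v1) / sum(v1^2)) * v1 + (sum(y * v2) / sum(v2^2)) * v2
yhat      # (3, 4, 1), the part of y in W
y - yhat  # (5, -4, 1), the part of y in W-perp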

29.3.19

  1. We will answer this one using RStudio.

A = cbind(c(1,-2,1,0,1), c(-1,3,-1,1,-1), c(0,0,1,3,1), c(0,2,0,0,4))
rref(A)
##      [,1] [,2] [,3] [,4]
## [1,]    1    0    0    0
## [2,]    0    1    0    0
## [3,]    0    0    1    0
## [4,]    0    0    0    1
## [5,]    0    0    0    0

All four columns are pivot columns, so the four given vectors are linearly independent: they form a basis for \(W\), and \(\dim W = 4\).

  2. We obtain a basis for \(W^{\perp}\) by finding \(\mbox{Nul}(A^{\top})\). So let’s row reduce \(A^{\top}\).
rref(t(A))
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    1    0    0    0   -2
## [2,]    0    1    0    0    2
## [3,]    0    0    1    0    7
## [4,]    0    0    0    1   -2

The vector \(\begin{bmatrix} 2 \\ -2 \\ -7 \\ 2 \\ 1\end{bmatrix}\) spans \(W^{\perp}\).

29.3.20

A = cbind(c(1,1,1,1), c(1,2,1,2),c(1,-1,-1,1))
b  = c(4,1,-2,-1)
rref(cbind(A,b))
##            b
## [1,] 1 0 0 0
## [2,] 0 1 0 0
## [3,] 0 0 1 0
## [4,] 0 0 0 1

There is a pivot in the last column of this augmented matrix, so this system is inconsistent.

Here is the least squares calculation.

#solve the normal equation
(xhat = solve(t(A) %*% A, t(A) %*% b))
##      [,1]
## [1,]    2
## [2,]   -1
## [3,]    1
# find the projection
(bhat = A %*% xhat)
##               [,1]
## [1,]  2.000000e+00
## [2,] -1.000000e+00
## [3,]  6.661338e-16
## [4,]  1.000000e+00
# find the residual vector
(z = b - bhat)
##      [,1]
## [1,]    2
## [2,]    2
## [3,]   -2
## [4,]   -2
# check that z is orthogonal to Col(A)
t(A) %*% z
##               [,1]
## [1,] -8.881784e-16
## [2,]  0.000000e+00
## [3,]  0.000000e+00
# measure the distance between bhat and b
sqrt( t(z) %*% z)
##      [,1]
## [1,]    4

The projection is \(\hat{\mathsf{b}} = [2,-1,0,1]^{\top}\). The residual is \(\mathsf{z} = [2,2,-2,-2]^{\top}\).

The distance between \(\mathsf{b}\) and \(\hat{\mathsf{b}}\) is \[ \| \mathsf{z} \| = \sqrt{4+4+4+4} = \sqrt{16} = 4. \]
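The same normal-equations workflow handles Problem 29.2.21; here is a sketch (by hand, \(A^{\top}A = \begin{bmatrix} 3 & 3 \\ 3 & 9 \end{bmatrix}\) and \(A^{\top}\mathsf{b} = \begin{bmatrix} 6 \\ 24 \end{bmatrix}\)):

A = cbind(c(1,1,1), c(2,2,-1))
b = c(6, 4, -4)
(xhat = solve(t(A) %*% A, t(A) %*% b))  # least squares solution (-1, 3)
(bhat = A %*% xhat)                     # projection (5, 5, -4)
z = b - bhat                            # residual (1, -1, 0)
sqrt(sum(z^2))                          # distance sqrt(2)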