Section 28 Quiz 2 Review
28.1 Overview
Our second quiz covers the following sections:
- Linear Transformations
  - 1.9: One-to-one and onto
  - 2.1: Matrix Multiplication
  - 2.2: Matrix Inverses
  - 2.3: The Invertible Matrix Theorem
- Subspaces
  - 4.1: Subspaces of \(\mathbb{R}^n\)
  - 4.2: Null Space and Column Space
  - 4.3: Bases
  - 4.4: Coordinates
This corresponds to Problem Sets 3 and 4.
The best way to study is to do practice problems. The Quiz will have calculation problems (like Edfinity) and more conceptual problems (like the problem sets). Here are some ways to practice:
- Make sure that you have mastered the Vocabulary, Skills and Concepts listed below.
- Look over the Edfinity homework assignments
- Do practice problems from the Edfinity Practice assignments. These allow you to “Practice Similar” by generating new variations of the same problem.
- Redo the Jamboard problems
- Try to re-solve the Problem Sets and compare your answers to the solutions.
- Do the practice problems below. Compare your answers to the solutions.
28.1.1 Vocabulary and Concepts
You should understand these concepts and be able to read and use these terms correctly:
- all of the Important Definitions found here.
- matrix multiplication
- matrix inverses
- one-to-one linear transformation
- onto linear transformation
- the Invertible Matrix Theorem
- subspaces
- null space and column space of a matrix
- kernel and image of a linear transformation
- basis (span and linearly independent)
- coordinate vector with respect to a basis \(\mathcal{B}\)
- change-of-coordinates matrix
- dimension
28.1.2 Skills
You should be able to perform these linear algebra tasks.
- solve matrix algebra equations
- find a matrix inverse
- show that a subset is a subspace or demonstrate that it is not a subspace
- describe the null space and the column space
- determine if a vector is in a null space or column space
- find a basis of a subspace
- answer questions about the connections between all these ideas
- write short proofs of basic statements using the Important Definitions
28.2 Practice Problems
28.2.1
Here are the row reductions of 4 matrices into reduced row echelon form. \[ \begin{array}{ll} A \longrightarrow \begin{bmatrix} 1 & 0 & 5 & -3 & 0\\ 0 & 1 & -2 & 8 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad & B \longrightarrow \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} \\ \\ C \longrightarrow \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} & D \longrightarrow \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix} \end{array} \]
In each case, if \(T_M\) is the linear transformation given by the matrix product \(T_M(x) = M x\), where \(M\) is the given matrix, then \(T_M: \mathbb{R}^n \to \mathbb{R}^m\) is a transformation from domain \(\mathbb{R}^n\) to codomain (aka target) \(\mathbb{R}^m\).
Determine the appropriate values for \(n\) and \(m\), and decide whether \(T_M\) is one-to-one and/or onto. Submit your answers in table form, as shown below. \[ \begin{array} {|c|c|c|c|c|} \hline \text{transformation} & n & m & \text{one-to-one?} & \text{onto?} \\ \hline T_A &\phantom{\Big\vert XX}&\phantom{\Big\vert XX}&& \\ \hline T_B &\phantom{\Big\vert XX}&&& \\ \hline T_C &\phantom{\Big\vert XX}&&& \\ \hline T_D &\phantom{\Big\vert XX}&&& \\ \hline \end{array} \hskip5in \]
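Before filling in a table like this, note that the domain and codomain can be read directly off the matrix shape: an \(m \times n\) matrix gives a transformation from \(\mathbb{R}^n\) to \(\mathbb{R}^m\). Here is a minimal sketch of that convention, in Python with numpy rather than the course's R, purely for illustration (the helper name is my own):

```python
import numpy as np

def domain_codomain(M):
    """An m x n matrix M gives T(x) = Mx from R^n (domain) to R^m (codomain)."""
    m, n = M.shape
    return n, m

# e.g. a 3 x 4 matrix maps R^4 into R^3
M = np.zeros((3, 4))
print(domain_codomain(M))  # (4, 3)
```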
28.2.2
Find the inverse of the matrix \[ \left[ \begin{array}{rrr} 1 & -2 & 2 \\ 1 & 0 & 0 \\ 2 &-4 & 5 \end{array} \right] \]
28.2.3
Suppose that a linear transformation \(T: \mathbb{R}^n \rightarrow \mathbb{R}^n\) has the property that \(T(\mathsf{u}) = T(\mathsf{v})\) for some pair of distinct vectors \(\mathsf{u}, \mathsf{v} \in \mathbb{R}^n\). Can \(T\) map \(\mathbb{R}^n\) onto \(\mathbb{R}^n\)? Why or why not?
28.2.4
Suppose that \(\mathsf{v}_1, \mathsf{v}_2, \mathsf{v}_3, \mathsf{v}_4\) are four vectors in \(\mathbb{R}^4\) and that there is a vector \(\mathsf{v} \in \mathbb{R}^4\) such that \(\mathsf{v}\) can be expressed in two different ways as a linear combination of these vectors: \[ \begin{array}{rrrrrrrrr} \mathsf{v} &=& 2 \mathsf{v}_1 &+& 7 \mathsf{v}_2 &+& 5 \mathsf{v}_3 &+& (-5) \mathsf{v}_4 \end{array} \] and \[ \begin{array}{rrrrrrrrr} \mathsf{v} &=& 3 \mathsf{v}_1 &+& 5 \mathsf{v}_2 &+& (-2) \mathsf{v}_3 &+& \mathsf{v}_4. \end{array} \] Use this information to show that \(\{\mathsf{v}_1,\mathsf{v}_2, \mathsf{v}_3, \mathsf{v}_4\}\) is a linearly dependent set by finding a dependence relation among these vectors.
28.2.5
Let \(U\) and \(W\) be subspaces of the vector space \(\mathbb{R}^n\). Prove or disprove each of the following statements. Prove a statement by showing that the conditions for being a subspace are satisfied; disprove it with a specific counterexample.
- \(U \cap W = \{ \mathsf{v} \in \mathbb{R}^n \mid \mathsf{v} \in U \mbox{ and } \mathsf{v} \in W \}\) is a subspace
- \(U \cup W = \{ \mathsf{v} \in \mathbb{R}^n \mid \mathsf{v} \in U \mbox{ or } \mathsf{v} \in W \}\) is a subspace
- \(U+W = \{\mathsf{u} + \mathsf{w} \mid \mathsf{u} \in U \mbox{ and } \mathsf{w} \in W \}\) is a subspace
28.2.6
Let \(T: \mathbb{R}^n \rightarrow \mathbb{R}^m\) be a linear transformation.
Suppose that \(T: \mathbb{R}^n \to \mathbb{R}^m\) is one-to-one. Prove that if \(\mathsf{v}_1, \mathsf{v}_2, \mathsf{v}_3\) are linearly independent, then \(T(\mathsf{v}_1), T(\mathsf{v}_2), T(\mathsf{v}_3)\) are linearly independent.
Suppose that \(T: \mathbb{R}^n \to \mathbb{R}^m\) is onto. Prove that if \(\mathsf{v}_1, \mathsf{v}_2, \mathsf{v}_3\) span \(\mathbb{R}^n\) then \(T(\mathsf{v}_1), T(\mathsf{v}_2), T(\mathsf{v}_3)\) span \(\mathbb{R}^m\).
28.2.7
I have performed some row operations below for you on a matrix \(A\). Find a basis for the column space and the null space of \(A\). \[ A= \left[ \begin{matrix} 1& 2& 0& 2& 0& -1 \\ 1& 2& 1& 1& 0& -2 \\ 2& 4& -2& 6& 1& 2 \\ 1& 2& 0& 2& -1& -3 \\ \end{matrix}\right] \longrightarrow \left[ \begin{matrix} 1& 2& 0& 2& 0& -1\\ 0& 0& 1& -1& 0& -1\\ 0& 0& 0& 0& 1& 2\\ 0& 0& 0& 0& 0& 0\\ \end{matrix}\right] \]
28.2.8
Consider the matrix \[ A = \left[ \begin{array}{cccc} 1 & 5 & 2 & -4 \\ 3 & 10 & 2 & 8 \\ 4 & 15 & 4 & 4 \end{array} \right] \] Find a basis for \(\mathrm{Col}(A)\). Find a basis for \(\mathrm{Nul}(A)\).
28.2.9
Are the vectors in \({\mathcal B}\) a basis of \(\mathbb{R}^3\)? If not, find a basis of \(\mathbb{R}^3\) that consists of as many of the vectors from \({\mathcal B}\) as is possible. Explain your reasoning. \[ \mathcal{B}=\left\{ \begin{bmatrix} 1 \\ -1 \\ -2 \end{bmatrix},\begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix},\begin{bmatrix} -1 \\ -1 \\ -8 \end{bmatrix} \right\} \]
28.2.10
Find the coordinates of \(\mathsf{w}\) in the standard basis and of \(\mathsf{v}\) in the \(\mathcal{B}\)-basis. \[ \mathcal{B} = \left\{ \mathsf{v}_1=\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \mathsf{v}_2=\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \mathsf{v}_3=\begin{bmatrix} 1 \\ 1 \\ 1 \\ 0 \end{bmatrix}, \mathsf{v}_4=\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \right\}, \] \[ \mathsf{w} = \begin{bmatrix} 3 \\ -2 \\ 0 \\ -1 \end{bmatrix}_{\mathcal{B}}, \qquad \mathsf{v} = \begin{bmatrix} 10 \\ 9 \\ 7 \\ 4 \end{bmatrix}_{\mathcal{S}} \]
28.2.11
The subspace \(S \subset \mathbb{R}^5\) is given by \[ \mathsf{S} = \mathsf{span} \left( \begin{bmatrix}1\\ 1\\ 0\\ -1\\ 2 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 1\\ 1\\ 1 \end{bmatrix}, \begin{bmatrix} 3\\ 1\\ -2\\ -5\\ 4 \end{bmatrix}, \begin{bmatrix} 1\\ 0\\ 1\\ 0\\ 1 \end{bmatrix}, \begin{bmatrix} 2\\ -1\\ -1\\ -3\\ 1 \end{bmatrix} \right)\]
Use the following matrix to find a basis for \(S\). What is the dimension of \(S\)? \[ A=\left[ \begin{array}{ccccc} 1 & 0 & 3 & 1 & 2 \\ 1 & 1 & 1 & 0 & -1 \\ 0 & 1 & -2 & 1 & -1 \\ -1 & 1 & -5 & 0 & -3 \\ 2 & 1 & 4 & 1 & 1 \\ \end{array} \right] \rightarrow \left[ \begin{array}{ccccc} 1 & 0 & 3 & 0 & 1 \\ 0 & 1 & -2 & 0 & -2 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \right] \]
Find a basis for \(\mathrm{Nul}(A)\). What is the dimension of this nullspace?
28.2.12
A \(6 \times 8\) matrix \(A\) contains 5 pivots. For each of \(\mathrm{Col}(A)\) and \(\mathrm{Nul}(A)\),
- Determine the dimension of the subspace,
- Indicate whether it is a subspace of \(\mathbb{R}^6\) or \(\mathbb{R}^8\), and
- Decide how you would find a basis of the subspace.
28.3 Solutions to Practice Problems
28.3.1 Properties of Linear Transformations
\[ \begin{array} {|c|c|c|c|c|} \hline \text{transformation} & n & m & \text{one-to-one?} & \text{onto?} \\ \hline T_A &5&5& No & No \\ \hline T_B &4&5&Yes& No\\ \hline T_C &4&4&Yes&Yes \\ \hline T_D &4&3&No&Yes\\ \hline \end{array} \hskip5in \]
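Entries like these reduce to counting pivots, i.e. the rank: \(T_M\) is one-to-one exactly when there is a pivot in every column (\(\operatorname{rank} = n\)) and onto exactly when there is a pivot in every row (\(\operatorname{rank} = m\)). A cross-check in Python with numpy rather than the course's R (the helper name is my own):

```python
import numpy as np

def one_to_one_onto(M):
    """For T(x) = Mx with M of shape (m, n): one-to-one iff rank == n
    (pivot in every column); onto iff rank == m (pivot in every row)."""
    m, n = M.shape
    r = np.linalg.matrix_rank(M)
    return {"n": n, "m": m, "one-to-one": r == n, "onto": r == m}

# Matrix D from the problem: its rref has a pivot in every row but not every column
D = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1]])
print(one_to_one_onto(D))  # n = 4, m = 3, not one-to-one, onto
```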
28.3.2
The inverse is \[ \begin{bmatrix} 0 & 1 & 0 \\ -5/2 & 1/2 &1 \\ -2 & 0 & 1 \end{bmatrix} \]
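As a sanity check (a Python/numpy analogue of the course's RStudio work), multiplying the matrix by the claimed inverse should give the identity in both orders:

```python
import numpy as np

M = np.array([[1., -2., 2.],
              [1.,  0., 0.],
              [2., -4., 5.]])
Minv = np.array([[ 0. , 1. , 0.],
                 [-2.5, 0.5, 1.],
                 [-2. , 0. , 1.]])

# Both products should be the 3 x 3 identity matrix
print(np.allclose(M @ Minv, np.eye(3)))   # True
print(np.allclose(Minv @ M, np.eye(3)))   # True
```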
28.3.3
No, \(T\) cannot map \(\mathbb{R}^n\) onto \(\mathbb{R}^n\). Since \(T(\mathsf{u}) = T(\mathsf{v})\) for distinct vectors \(\mathsf{u}\) and \(\mathsf{v}\), the transformation \(T\) is not one-to-one. Because \(T\) maps \(\mathbb{R}^n\) to \(\mathbb{R}^n\), the Invertible Matrix Theorem says that \(T\) is one-to-one if and only if it is onto, so \(T\) cannot be onto.
28.3.4
Hint: subtract the two representations of \(\mathsf{v}\).
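Carrying out the hint: subtracting the coefficient vectors of the two representations gives the coefficients of a dependence relation, even though the vectors \(\mathsf{v}_i\) themselves are unknown. A small sketch in Python/numpy (for illustration only):

```python
import numpy as np

c1 = np.array([2, 7, 5, -5])   # coefficients in the first representation
c2 = np.array([3, 5, -2, 1])   # coefficients in the second representation
d = c1 - c2                    # 0 = v - v = d[0]*v1 + d[1]*v2 + d[2]*v3 + d[3]*v4
print(d)                       # [-1  2  7 -6]
```

So \(-\mathsf{v}_1 + 2\mathsf{v}_2 + 7\mathsf{v}_3 - 6\mathsf{v}_4 = \mathbf{0}\) is a nontrivial dependence relation.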
28.3.5
True.
Since \(U\) and \(W\) are subspaces, we know that \(\mathbf{0} \in U\) and \(\mathbf{0} \in W\). Therefore \(\mathbf{0} \in U \cap W\).
Let \(\mathsf{v}_1 \in U \cap W\) and \(\mathsf{v}_2 \in U \cap W\).
We know that \(\mathsf{v}_1 \in U\) and \(\mathsf{v}_2 \in U\). Since \(U\) is a subspace, we have \(\mathsf{v}_1 + \mathsf{v}_2 \in U\).
We know that \(\mathsf{v}_1 \in W\) and \(\mathsf{v}_2 \in W\). Since \(W\) is a subspace, we have \(\mathsf{v}_1 + \mathsf{v}_2 \in W\). Therefore \(\mathsf{v}_1 + \mathsf{v}_2 \in U \cap W\).
Let \(\mathsf{v} \in U \cap W\) and \(c \in \mathbb{R}\).
We know that \(\mathsf{v} \in U\) and \(c \in \mathbb{R}\). Since \(U\) is a subspace, we have \(c \mathsf{v} \in U\).
We know that \(\mathsf{v} \in W\) and \(c \in \mathbb{R}\). Since \(W\) is a subspace, we have \(c \mathsf{v} \in W\).
Therefore \(c \mathsf{v} \in U \cap W\).
False. Here is an example that shows this is not always true. Let \(V= \mathbb{R}^2\), \(U = \{ { x \choose 0} \mid x \in \mathbb{R} \}\) and \(W= \{ { 0 \choose y} \mid y \in \mathbb{R} \}\). The set \(U \cup W\) is not closed under addition. For example, \({1 \choose 0} + {0 \choose 1} = { 1 \choose 1} \notin U \cup W\).
True.
Since \(U\) and \(W\) are subspaces, we know that \(\mathbf{0} \in U\) and \(\mathbf{0} \in W\). Therefore \(\mathbf{0} = \mathbf{0} + \mathbf{0} \in U + W\).
Let \(\mathsf{u}_1 + \mathsf{w}_1 \in U + W\) and \(\mathsf{u}_2 + \mathsf{w}_2 \in U + W\), where \(\mathsf{u}_1, \mathsf{u}_2 \in U\) and \(\mathsf{w}_1, \mathsf{w}_2 \in W\). Then \[ (\mathsf{u}_1 + \mathsf{w}_1) + (\mathsf{u}_2 + \mathsf{w}_2) = (\mathsf{u}_1 + \mathsf{u}_2) + (\mathsf{w}_1 + \mathsf{w}_2), \] where \(\mathsf{u}_3 = \mathsf{u}_1 + \mathsf{u}_2 \in U\) (because \(U\) is a subspace) and \(\mathsf{w}_3 = \mathsf{w}_1 + \mathsf{w}_2 \in W\) (because \(W\) is a subspace).
Therefore \((\mathsf{u}_1 + \mathsf{w}_1) + (\mathsf{u}_2 + \mathsf{w}_2) = \mathsf{u}_3 + \mathsf{w}_3 \in U + W\).
Let \(\mathsf{u} + \mathsf{w} \in U + W\) and \(c \in \mathbb{R}\). Then \(c(\mathsf{u} + \mathsf{w}) = c \mathsf{u} + c \mathsf{w}\). We know that \(c \mathsf{u} \in U\) (since \(U\) is a subspace) and \(c \mathsf{w} \in W\) (since \(W\) is a subspace). Therefore \(c(\mathsf{u} + \mathsf{w}) = c \mathsf{u} + c \mathsf{w} \in U+W\).
28.3.6
- Suppose that \(c_1 T(\mathsf{v}_1) + c_2 T(\mathsf{v}_2) + c_3 T(\mathsf{v}_3) = \mathbf{0}\). We must show that \(c_1 = c_2 = c_3 = 0\).
- Since \(T\) is a linear transformation, this means that \(T(c_1 \mathsf{v}_1+ c_2 \mathsf{v}_2 + c_3 \mathsf{v}_3 )= \mathbf{0}\).
- Since \(T\) is one-to-one and \(T(\mathbf{0}) = \mathbf{0}\), we must have \(c_1 \mathsf{v}_1+ c_2 \mathsf{v}_2 + c_3 \mathsf{v}_3 = \mathbf{0}\).
- Because \(\mathsf{v}_1, \mathsf{v}_2, \mathsf{v}_3\) are linearly independent, this means that \(c_1 = c_2 = c_3 = 0\).
This proves that \(T(\mathsf{v}_1), T(\mathsf{v}_2), T(\mathsf{v}_3)\) are linearly independent.
- Given \(\mathsf{w} \in \mathbb{R}^m\). We must show that there exist constants \(c_1, c_2, c_3\) such that \(\mathsf{w} = c_1 T(\mathsf{v}_1) + c_2 T(\mathsf{v}_2) + c_3 T(\mathsf{v}_3)\). Here we go!
- Since \(T\) is onto, we know that there exists \(\mathsf{v} \in \mathbb{R}^n\) such that \(T(\mathsf{v}) = \mathsf{w}\).
- Since \(\mathsf{v}_1, \mathsf{v}_2, \mathsf{v}_3\) span \(\mathbb{R}^n\), we know that there exist constants \(c_1, c_2, c_3\) such that \(\mathsf{v}= c_1 \mathsf{v}_1 + c_2 \mathsf{v}_2 + c_3 \mathsf{v}_3\).
- Therefore \[ \mathsf{w} = T(\mathsf{v})= T(c_1 \mathsf{v}_1 + c_2 \mathsf{v}_2 + c_3 \mathsf{v}_3) = c_1 T(\mathsf{v}_1) + c_2 T(\mathsf{v}_2) + c_3 T(\mathsf{v}_3) \] because \(T\) is a linear transformation. This proves that \(T(\mathsf{v}_1), T(\mathsf{v}_2), T(\mathsf{v}_3)\) span \(\mathbb{R}^m\).
28.3.7
A basis for \(\mathrm{Col}(A)\) is \[ \begin{bmatrix} 1 \\1 \\ 2 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 0 \\1 \\ -2 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 0 \\0 \\ 1 \\ -1 \end{bmatrix} \] and a basis for \(\mathrm{Nul}(A)\) is \[ \begin{bmatrix} -2 \\1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} -2 \\0 \\ 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 1 \\0 \\ 1\\ 0 \\ -2 \\ 1 \end{bmatrix}. \]
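Claimed bases like these are easy to verify: every null space basis vector should satisfy \(A\mathsf{x} = \mathbf{0}\), and the rank (number of pivots) should equal the number of column space basis vectors. A cross-check in Python with numpy (not the course's R; for illustration only):

```python
import numpy as np

A = np.array([[1, 2,  0, 2,  0, -1],
              [1, 2,  1, 1,  0, -2],
              [2, 4, -2, 6,  1,  2],
              [1, 2,  0, 2, -1, -3]])

# The three claimed null space basis vectors, as columns
null_basis = np.array([[-2, 1, 0, 0,  0, 0],
                       [-2, 0, 1, 1,  0, 0],
                       [ 1, 0, 1, 0, -2, 1]]).T

print(np.allclose(A @ null_basis, 0))   # True: each vector solves Ax = 0
print(np.linalg.matrix_rank(A))         # 3 pivots, so dim Col(A) = 3
```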
28.3.8
Using RStudio, we find:

```r
A
##      [,1] [,2] [,3] [,4]
## [1,]    1    5    2   -4
## [2,]    3   10    2    8
## [3,]    4   15    4    4
rref(A)
##      [,1] [,2] [,3] [,4]
## [1,]    1    0 -2.0   16
## [2,]    0    1  0.8   -4
## [3,]    0    0  0.0    0
```
A basis for \(\mathrm{Col}(A)\) is \[ \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}, \quad \begin{bmatrix} 5 \\ 10 \\ 15 \end{bmatrix}. \]
A basis for \(\mathrm{Nul}(A)\) is \[ \begin{bmatrix} 2 \\ -0.8 \\ 1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} -16 \\ 4 \\ 0 \\ 1 \end{bmatrix}. \]
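The same verification as before works here (a Python/numpy cross-check of the R computation, for illustration only): both null space vectors should satisfy \(A\mathsf{x} = \mathbf{0}\), and the rank should be 2, matching the two pivot columns used for \(\mathrm{Col}(A)\).

```python
import numpy as np

A = np.array([[1,  5, 2, -4],
              [3, 10, 2,  8],
              [4, 15, 4,  4]])

# The two claimed null space basis vectors, as columns
N = np.array([[  2, -0.8, 1, 0],
              [-16,  4,   0, 1]]).T

print(np.allclose(A @ N, 0))        # True: both vectors lie in Nul(A)
print(np.linalg.matrix_rank(A))     # 2: the first two columns of A span Col(A)
```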
28.3.9
```r
A = cbind(c(1,-1,-2),c(2,-1,1),c(-1,-1,-8))
A
##      [,1] [,2] [,3]
## [1,]    1    2   -1
## [2,]   -1   -1   -1
## [3,]   -2    1   -8
rref(A)
##      [,1] [,2] [,3]
## [1,]    1    0    3
## [2,]    0    1   -2
## [3,]    0    0    0
```
No, they are not a basis: the corresponding matrix only has two pivots. Let's append the three standard basis vectors to create a matrix \(B\) and then row reduce.
```r
B = cbind(A, c(1,0,0),c(0,1,0),c(0,0,1))
B
##      [,1] [,2] [,3] [,4] [,5] [,6]
## [1,]    1    2   -1    1    0    0
## [2,]   -1   -1   -1    0    1    0
## [3,]   -2    1   -8    0    0    1
rref(B)
##      [,1] [,2] [,3] [,4]       [,5]       [,6]
## [1,]    1    0    3    0 -0.3333333 -0.3333333
## [2,]    0    1   -2    0 -0.6666667  0.3333333
## [3,]    0    0    0    1  1.6666667 -0.3333333
```
From this matrix, we can see that the vectors \[ \begin{bmatrix} 1 \\ -1 \\ -2 \end{bmatrix}, \quad \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \] are linearly independent because they correspond to the pivot columns of \(B\). Three linearly independent vectors in \(\mathbb{R}^3\) form a basis of \(\mathbb{R}^3\).
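As a cross-check of this conclusion (in Python/numpy rather than the course's R, for illustration only): the matrix whose columns are the two retained vectors and \(\mathsf{e}_1\) has rank 3, so those three vectors are indeed a basis of \(\mathbb{R}^3\).

```python
import numpy as np

# Columns: the two pivot vectors from the original set, plus e1
C = np.array([[ 1,  2, 1],
              [-1, -1, 0],
              [-2,  1, 0]])
print(np.linalg.matrix_rank(C))   # 3: the columns form a basis of R^3
```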
28.3.10
We use the change of basis matrix. \[ P_{\cal B} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \] Then, the desired coordinate vectors are \[ \mathsf{w} = \begin{bmatrix} 0 \\ -3 \\ -1 \\ -1 \end{bmatrix}_{\mathcal{S}}, \qquad \mathsf{v} = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}_{\mathcal{B}} \] You can find these vectors by multiplying by \(P_\mathcal{B}\) and by augmenting and row reducing as seen here.
```r
A = cbind(c(1,0,0,0),c(1,1,0,0),c(1,1,1,0),c(1,1,1,1))
w = c(3,-2,0,-1)
v = c(10,9,7,4)
A %*% w
##      [,1]
## [1,]    0
## [2,]   -3
## [3,]   -1
## [4,]   -1
Av = cbind(A,v)
rref(Av)
##              v
## [1,] 1 0 0 0 1
## [2,] 0 1 0 0 2
## [3,] 0 0 1 0 3
## [4,] 0 0 0 1 4
```
Or we can use the inverse of \(P_\mathcal{B}\). \[ P_{\cal B}^{-1} = \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 &-1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
```r
Ainv = solve(A)
Ainv %*% v
##      [,1]
## [1,]    1
## [2,]    2
## [3,]    3
## [4,]    4
```
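The same two computations can be reproduced in Python with numpy (a cross-check of the R session above, for illustration only): multiplying by \(P_\mathcal{B}\) converts \(\mathcal{B}\)-coordinates to standard coordinates, and solving \(P_\mathcal{B}\, \mathsf{c} = \mathsf{v}\) converts the other way.

```python
import numpy as np

# Columns of P are the B-basis vectors v1, ..., v4
P = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
w_B = np.array([3, -2, 0, -1])    # B-coordinates of w
v_S = np.array([10, 9, 7, 4])     # standard coordinates of v

print(P @ w_B)                    # standard coordinates of w: [ 0 -3 -1 -1]
print(np.linalg.solve(P, v_S))    # B-coordinates of v: [1. 2. 3. 4.]
```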
28.3.11
\(\dim(S) = 3\) and a basis for \(S\) is \[ \begin{bmatrix} 1 \\ 1 \\ 0 \\ -1 \\2 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\1 \end{bmatrix}. \]
\(\dim(\mathrm{Nul}(A))=2\) and a basis is \[ \begin{bmatrix} -3 \\ 2 \\ 1 \\ 0 \\0\end{bmatrix}, \quad \begin{bmatrix} -1 \\ 2 \\ 0 \\ -1 \\1 \end{bmatrix}. \]
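Both answers can be verified as before (Python/numpy cross-check, for illustration only): each claimed null space vector should satisfy \(A\mathsf{x} = \mathbf{0}\), and the rank of \(A\) should be 3, so \(\dim(S) = 3\) and \(\dim(\mathrm{Nul}(A)) = 5 - 3 = 2\) by rank-nullity.

```python
import numpy as np

A = np.array([[ 1, 0,  3, 1,  2],
              [ 1, 1,  1, 0, -1],
              [ 0, 1, -2, 1, -1],
              [-1, 1, -5, 0, -3],
              [ 2, 1,  4, 1,  1]])

# The two claimed null space basis vectors, as columns
N = np.array([[-3, 2, 1,  0, 0],
              [-1, 2, 0, -1, 1]]).T

print(np.allclose(A @ N, 0))      # True: both vectors solve Ax = 0
print(np.linalg.matrix_rank(A))   # 3 = dim(S); 5 - 3 = 2 = dim(Nul(A))
```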
28.3.12
- \(\mathrm{Col}(A)\) has dimension 5, and it is a subspace of \(\mathbb{R}^6\). You would find a basis by taking the pivot columns of \(A\).
- \(\mathrm{Nul}(A)\) has dimension 3, and it is a subspace of \(\mathbb{R}^8\). You would find a basis by finding the parametric vector form of the solutions to \(A \mathsf{x}= \mathbf{0}\).