# Elementary Matrices

Definition: A square $n \times n$ matrix $E$ is an elementary matrix if it can be obtained by performing exactly one elementary row operation on the identity matrix $I_n$.

The following three matrices are all considered elementary matrices:

(1)
\begin{align} E_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0\\ 0 & 0 & 1 \end{bmatrix} \quad , \quad E_2 = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} \quad , \quad E_3 = \begin{bmatrix} 1 & 0 & 0\\ 2 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \end{align}
• $E_1$ is obtained by taking $I_3$ and multiplying row 2 by 4 ($4R_2 \to R_2$).
• $E_2$ is obtained by interchanging the two rows of $I_2$ ($R_1 \leftrightarrow R_2$).
• $E_3$ is obtained by taking the second row of $I_3$ and adding 2 times the first row ($R_2 + 2R_1 \to R_2$).

In each case, $E_1$, $E_2$, and $E_3$ were obtained by performing a single elementary row operation on an identity matrix.
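The three constructions above can be sketched in plain Python (a minimal illustration; the helper names `identity`, `scale_row`, `swap_rows`, and `add_multiple` are assumptions for this sketch, not standard library functions, and rows are 0-indexed):

```python
def identity(n):
    """The n x n identity matrix I_n as a list of lists."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def scale_row(M, a, k):
    """kR_a -> R_a: multiply row a by the constant k."""
    M = [row[:] for row in M]
    M[a] = [k * x for x in M[a]]
    return M

def swap_rows(M, a, b):
    """R_a <-> R_b: interchange rows a and b."""
    M = [row[:] for row in M]
    M[a], M[b] = M[b], M[a]
    return M

def add_multiple(M, a, k, b):
    """R_a + kR_b -> R_a: add k times row b to row a."""
    M = [row[:] for row in M]
    M[a] = [x + k * y for x, y in zip(M[a], M[b])]
    return M

E1 = scale_row(identity(3), 1, 4)        # 4R_2 -> R_2
E2 = swap_rows(identity(2), 0, 1)        # R_1 <-> R_2
E3 = add_multiple(identity(3), 1, 2, 0)  # R_2 + 2R_1 -> R_2
```

Each call applies exactly one row operation to an identity matrix, mirroring the definition.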

Theorem 1: Suppose that $E$ is an elementary matrix resulting from a single elementary row operation on $I$. For any matrix $A$ where the product $EA$ is defined, $EA$ will be the matrix that results when applying that same elementary row operation on $A$.

For example, consider the elementary matrix $E = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}$ that results from taking $I_2$ and interchanging rows $R_1 \leftrightarrow R_2$. Let's also consider the following matrix $A = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}$.

The resulting product is $EA = \begin{bmatrix} 3 & 4\\ 1 & 2 \end{bmatrix}$, which is exactly the matrix obtained by taking $A$ and interchanging its rows $R_1 \leftrightarrow R_2$, just as Theorem 1 predicts.
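This check can be carried out numerically (a sketch in plain Python; `matmul` is a hand-rolled helper, not a library function):

```python
def matmul(X, Y):
    """Standard matrix product of X (m x n) and Y (n x p)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[0, 1],
     [1, 0]]       # I_2 with R_1 <-> R_2 applied
A = [[1, 2],
     [3, 4]]

EA = matmul(E, A)  # -> [[3, 4], [1, 2]], i.e. A with its rows interchanged
```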

# Inverse Row Operations

Suppose that we create an elementary matrix $E$ by taking $I$ and performing a single row operation on it. If we then take $E$ and reverse that operation, we will obtain $I$ once again.

• If we create $E$ by multiplying some row by a constant $k \neq 0$ ($kR_a \to R_a$), then the inverse operation would be to take $E$ and multiply that row by $\frac{1}{k}$ ($\frac{1}{k} R_a \to R_a$).
• If we create $E$ by interchanging two rows ($R_a \leftrightarrow R_b$), then the inverse operation would be to take $E$ and interchange those same rows again ($R_b \leftrightarrow R_a$).
• If we create $E$ by adding a multiple of one row to another ($R_a + kR_b \to R_a$), then the inverse operation would be to take $E$ and subtract that same multiple back off ($R_a - kR_b \to R_a$).

For example, consider the elementary matrix $E = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}$ that resulted from taking the row operation $3R_2 \to R_2$ on $I$. If we then perform the operation $\frac{1}{3}R_2 \to R_2$ on $E$, we obtain $I$ again.
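The example above can be sketched in plain Python (the `scale_row` helper is an assumption for this illustration, not a standard function; rows are 0-indexed):

```python
def scale_row(M, a, k):
    """kR_a -> R_a: multiply row a by the constant k."""
    M = [row[:] for row in M]
    M[a] = [k * x for x in M[a]]
    return M

I2 = [[1, 0],
      [0, 1]]
E = scale_row(I2, 1, 3)      # 3R_2 -> R_2 gives [[1, 0], [0, 3]]
back = scale_row(E, 1, 1/3)  # (1/3)R_2 -> R_2 recovers I_2
```

Performing the operation and then its inverse lands back at the identity matrix.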

# Elementary Matrix Inverses

 Theorem 2: Every elementary matrix $E$ is invertible.
• Proof: To prove theorem 2, we will look at three cases, each pertaining to the elementary row operations.
• Case 1: Suppose that $E_{n \times n}$ is obtained by multiplying a row by a constant $k \in \mathbb{R}$, $k \neq 0$. Without loss of generality, suppose we multiply the second row of $I_n$ by $k$. Then $E = \begin{bmatrix} 1 & 0 & \cdots & 0\\ 0 & k & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{bmatrix}$. Let $E^{-1} = \begin{bmatrix} 1 & 0 & \cdots & 0\\ 0 & \frac{1}{k} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{bmatrix}$. Then $EE^{-1} = E^{-1}E = I_{n}$.
• Cases 2 and 3, where the elementary row operation performed on $I$ is adding a multiple of one row to another, or interchanging two rows, are left to the reader. $\blacksquare$
Theorem 3: If $E$ is an elementary matrix, then $E^{-1}$ is the elementary matrix obtained by performing the inverse of the row operation that produced $E$ on $I$.
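A numeric sketch of the scaling case from the proof of Theorem 2 (plain Python with a hand-rolled `matmul`; the names and the choice $k = 4$ are assumptions for illustration): $E$ scales row 2 of $I_3$ by $k$, $E^{-1}$ scales it by $\frac{1}{k}$, and the product is $I_3$ in both orders.

```python
def matmul(X, Y):
    """Standard matrix product of X (m x n) and Y (n x p)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

k = 4
E     = [[1, 0, 0], [0, k,     0], [0, 0, 1]]  # kR_2 -> R_2 applied to I_3
E_inv = [[1, 0, 0], [0, 1 / k, 0], [0, 0, 1]]  # (1/k)R_2 -> R_2 applied to I_3

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(E, E_inv) == I3 and matmul(E_inv, E) == I3
```

Note that $E^{-1}$ is itself an elementary matrix, since it comes from one row operation on $I_3$, exactly as Theorem 3 states.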