The Core Concepts of Linear Algebra
Linear algebra is the mathematics of data. Whether you are building 3D graphics engines, training machine learning models, or solving systems of differential equations, matrices and vectors are the foundational tools you will use. This cheat sheet covers the absolutely essential formulas and theorems you need to survive your final exam.
1. Matrix Multiplication
Unlike standard multiplication, matrix multiplication is generally not commutative. That means $AB \neq BA$ in most cases.
To multiply an $m \times n$ matrix $A$ by an $n \times p$ matrix $B$, the inner dimensions must match (both are $n$). The resulting matrix $C$ will have dimensions $m \times p$.
The element $c_{ij}$ in the resulting matrix is the dot product of the $i$-th row of $A$ and the $j$-th column of $B$:
$$ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} $$
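If you want to sanity-check the dimension rule and the failure of commutativity numerically, here is a minimal sketch using NumPy (the library and the specific matrices are my choices for illustration, not part of the formula above):

```python
import numpy as np

# A is 2x3, B is 3x2: inner dimensions match (both 3),
# so the product C = AB has the outer dimensions, 2x2.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])

C = A @ B            # matrix product, shape (2, 2)
print(C)

# Non-commutativity: even for square matrices, AB != BA in general.
X = np.array([[0, 1], [0, 0]])
Y = np.array([[0, 0], [1, 0]])
print(X @ Y)         # [[1, 0], [0, 0]]
print(Y @ X)         # [[0, 0], [0, 1]]
```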
2. Determinants
The determinant is a scalar value that encodes important geometric properties of a square matrix. Specifically, it is the factor by which the transformation scales area (in 2D) or volume (in higher dimensions); a negative determinant means the orientation of space is flipped. If $\det(A) = 0$, the transformation crushes space into a lower dimension, and the matrix is non-invertible (singular).
For a $2 \times 2$ matrix:
$$ \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc $$
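A quick NumPy check of the $2 \times 2$ formula, plus a singular example (the specific matrices are arbitrary illustrations):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

# ad - bc = 3*4 - 1*2 = 10
print(np.linalg.det(A))      # ~10.0

# A singular matrix: the second row is a multiple of the first,
# so the transformation collapses the plane onto a line.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(S))      # ~0.0
```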
3. Eigenvalues and Eigenvectors
This is arguably the most important concept in the entire course. Most vectors are knocked off their original span when transformed by a matrix. However, certain special vectors only get stretched or squished, keeping their original direction. These are the eigenvectors, and the stretch factor is the eigenvalue.
The defining equation is:
$$ A \mathbf{v} = \lambda \mathbf{v} $$
Where $A$ is a square matrix, $\mathbf{v}$ is the eigenvector (cannot be the zero vector), and $\lambda$ is the eigenvalue.
To find the eigenvalues, you must solve the characteristic equation:
$$ \det(A - \lambda I) = 0 $$
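For anything bigger than a $3 \times 3$ you would solve this numerically rather than by hand. A minimal NumPy sketch of the defining equation, using an arbitrary symmetric example matrix of my own choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues (in no guaranteed order)
# and a matrix whose columns are the corresponding unit eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)           # 3.0 and 1.0 for this matrix

# Verify the defining equation A v = lambda v for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```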
4. Matrix Diagonalization
If an $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors, it can be diagonalized. This makes computing large powers of the matrix (like $A^{100}$) incredibly easy.
The factorization is:
$$ A = P D P^{-1} $$
Where $P$ is a matrix whose columns are the eigenvectors of $A$, and $D$ is a diagonal matrix containing the corresponding eigenvalues along the main diagonal. Powers then follow immediately: $A^k = P D^k P^{-1}$, and $D^k$ just raises each diagonal entry to the $k$-th power.
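The following sketch builds $P$ and $D$ with NumPy and confirms that $A^{100}$ computed through the diagonalization matches direct matrix powering (the matrix is an arbitrary example whose eigenvalues happen to be 5 and 2):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are the eigenvectors; D holds the eigenvalues.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Reconstruct A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# A^100 via the diagonalization: only the diagonal entries get powered.
A_100 = P @ np.diag(eigenvalues ** 100) @ np.linalg.inv(P)
assert np.allclose(A_100, np.linalg.matrix_power(A, 100))
```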
5. Singular Value Decomposition (SVD)
While only square matrices with $n$ linearly independent eigenvectors can be diagonalized, every matrix (even a rectangular one) can be decomposed using SVD. This is the backbone of Principal Component Analysis (PCA) and modern data compression.
$$ A = U \Sigma V^T $$
Where $U$ ($m \times m$) and $V$ ($n \times n$) are orthogonal matrices, and $\Sigma$ is an $m \times n$ rectangular diagonal matrix holding the singular values of $A$ in decreasing order along its diagonal.
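A short NumPy illustration on an arbitrary rectangular matrix; `full_matrices=False` returns the compact form of the factorization, which is the one typically used in practice:

```python
import numpy as np

# SVD works for any matrix, including rectangular ones.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Compact form: U is 2x2, s has the 2 singular values, Vt is 2x3.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come back in decreasing order.
print(s)

# Reconstruct A = U * Sigma * V^T.
assert np.allclose(A, U @ np.diag(s) @ Vt)
```

Keeping only the largest singular values (and the matching columns of $U$ and $V$) gives the best low-rank approximation of $A$, which is exactly the trick PCA and compression rely on.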
Conclusion
Linear algebra can feel abstract when you are buried in row-reduction arithmetic. But remember that every matrix is just a transformation of space, and every formula describes how that space stretches, rotates, or collapses. Keep the geometric intuition in mind, and the algebra will follow.