Special Matrices #
Certain types of matrices have special structural properties that are widely used in linear algebra and ML.
1. Symmetric Matrix #
A matrix is symmetric if:
\[ A^T = A \]
- Symmetric about its main diagonal.
Used in:
- Covariance matrices
- Quadratic forms
- Optimisation
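As a quick illustration (a NumPy sketch with arbitrary random data), a covariance matrix computed from a data matrix is always symmetric:

```python
import numpy as np

# Covariance matrices are symmetric by construction
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))   # 100 samples, 3 features
cov = np.cov(X, rowvar=False)       # 3x3 covariance matrix

# Symmetry check: A^T == A (up to floating-point tolerance)
assert np.allclose(cov, cov.T)
```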
2. Skew-Symmetric Matrix #
A matrix is skew-symmetric if:
\[ A^T = -A \]
- All diagonal elements are zero.
This implies:
\[ a_{ii} = 0 \]

3. Upper Triangular Matrix #
- All elements below the main diagonal are zero
4. Lower Triangular Matrix #
- All elements above the main diagonal are zero
Determinant of Triangular Matrices ⭐ #
For both upper and lower triangular matrices:
\[ \det(A) = \prod_{i=1}^{n} a_{ii} \]
- The determinant equals the product of the diagonal entries.
This makes computing determinants extremely fast for triangular matrices.
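A quick NumPy check (the example matrix is arbitrary), comparing the product of the diagonal against the general determinant routine:

```python
import numpy as np

U = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 0.5]])   # upper triangular

fast = np.prod(np.diag(U))   # product of diagonal entries: 2 * 4 * 0.5 = 4
full = np.linalg.det(U)      # general determinant, for comparison

assert np.isclose(fast, full)
```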
Inverse of Triangular Matrices ⭐ #
A triangular matrix is invertible if and only if:
\[ a_{ii} \neq 0 \quad \forall i \]
If invertible:
\[ A^{-1} \text{ is also triangular of the same type} \]
- Upper triangular → inverse is upper triangular
- Lower triangular → inverse is lower triangular
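This is easy to verify numerically (a NumPy sketch with an arbitrary example matrix): the inverse of a lower triangular matrix has only zeros above the diagonal.

```python
import numpy as np

L = np.array([[1.0, 0.0, 0.0],
              [2.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])   # lower triangular, nonzero diagonal

L_inv = np.linalg.inv(L)

# The inverse is again lower triangular:
# every entry strictly above the diagonal is zero
assert np.allclose(np.triu(L_inv, k=1), 0.0)
```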
Strictly Lower Triangular Matrix #
All elements on and above the diagonal are zero:
\[ a_{ij} = 0 \quad \text{for } i \le j \]

Unit Lower Triangular Matrix #
Diagonal entries are 1:
\[ a_{ii} = 1 \]
Appears in LU decomposition.
Solving Systems with Triangular Matrices #
For lower triangular:
\[ A\mathbf{x} = \mathbf{b} \]
Solve using forward substitution.
For upper triangular:
Use back substitution.
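Forward substitution solves for each component in turn, using the components already found. A minimal sketch in NumPy (the function name and example system are illustrative):

```python
import numpy as np

def forward_substitution(A, b):
    """Solve A x = b for lower triangular A in O(n^2) operations."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # subtract the contribution of already-solved components,
        # then divide by the diagonal entry
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
b = np.array([2.0, 5.0, 32.0])

x = forward_substitution(A, b)
assert np.allclose(A @ x, b)
```

Back substitution is the mirror image: start from the last row of an upper triangular system and work upwards.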
5. Diagonal Matrix #
\[ a_{ij} = 0 \quad \text{for } i \ne j \]
- Non-zero elements only on its main diagonal.
6. Identity Matrix #
\[ a_{ij} = \begin{cases} 1, & i = j \\ 0, & \text{otherwise} \end{cases} \]
An identity matrix is a square matrix where:
- all diagonal entries are 1
- all other entries are 0
Example (3×3):
\[ I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]

Property #
\[ AI = IA = A \]

Determinant #
\[ \det(I) = 1 \]

7. Positive Definite Matrix ⭐ #
A symmetric matrix A is positive definite if:
\[ \mathbf{x}^T A \mathbf{x} > 0 \quad \forall \mathbf{x} \neq 0 \]

Properties #
A (The Matrix):
- Symmetry: must be symmetric, meaning \( A^T = A \)
- Eigenvalues: All eigenvalues are strictly positive \( \lambda_i > 0 \)
- Determinant: det(A) is positive, as it is the product of its eigenvalues \( \det(A) = \prod_i \lambda_i > 0 \)
- Invertibility: \( A \) is non-singular, so \( A^{-1} \) exists
- Diagonal Elements: All diagonal elements are positive \( a_{ii} > 0 \)
- Cholesky Decomposition: \( A = LL^T \) exists, with \( L \) lower triangular
x (The Vector):
- Arbitrary Non-zero Vector: \( \mathbf{x} \in \mathbb{R}^n,\quad \mathbf{x} \ne 0 \)
- Quadratic Form: The expression \( \mathbf{x}^T A \mathbf{x} \) is a scalar (a real number).
- Geometric Interpretation: since \( \mathbf{x}^T A \mathbf{x} > 0 \), the vectors \( \mathbf{x} \) and \( A\mathbf{x} \) always form an acute angle.
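A practical way to test positive definiteness (a NumPy sketch; it relies on the fact that the Cholesky factorisation succeeds exactly when a symmetric matrix is positive definite):

```python
import numpy as np

def is_positive_definite(A):
    """Check a symmetric matrix via Cholesky: it succeeds iff A is PD."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])   # eigenvalues 1 and 3, both positive

assert is_positive_definite(A)
assert not is_positive_definite(-A)

# Equivalent check: all eigenvalues are strictly positive
assert np.all(np.linalg.eigvalsh(A) > 0)
```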
8. Positive Semi-Definite #
A symmetric matrix A is positive semi-definite if:
\[ \mathbf{x}^T A \mathbf{x} \ge 0 \quad \forall \mathbf{x} \]

Why Positive Definite Matters in ML #
- Covariance matrices are positive semi-definite
- Hessian matrix in optimisation (positive definite at a critical point ⇒ local minimum)
- Guarantees convexity of quadratic functions
- Appears in Gaussian distributions
Example quadratic form:
\[ f(\mathbf{x}) = \mathbf{x}^T A \mathbf{x} \]
If \( A \) is positive definite, the function has a unique global minimum.
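This follows directly from the gradient and Hessian of the quadratic form (assuming \( A \) is symmetric positive definite):
\[ \nabla f(\mathbf{x}) = 2A\mathbf{x}, \qquad \nabla^2 f(\mathbf{x}) = 2A \]
Setting the gradient to zero gives \( A\mathbf{x} = 0 \); since \( A \) is invertible, the only critical point is \( \mathbf{x} = 0 \), and because the Hessian \( 2A \) is positive definite everywhere, that point is the unique global minimum.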
9. Sparse Matrix #
A matrix with mostly zero entries.
Used in:
- Recommender systems
- Graph representations
- NLP
- Large-scale ML
Reduces:
- Memory
- Computation time
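A minimal, dependency-free sketch of the idea behind sparse storage (coordinate/COO format, keeping only the nonzero entries as row–column–value triplets; in practice, libraries such as scipy.sparse provide production implementations):

```python
import numpy as np

# A mostly-zero matrix stored densely wastes memory:
# 1,000,000 floats for only 3 meaningful values.
dense = np.zeros((1000, 1000))
dense[3, 7] = 1.5
dense[500, 2] = -2.0
dense[999, 999] = 4.0

# Sparse (COO-style) representation: store only the nonzeros
rows, cols = np.nonzero(dense)
values = dense[rows, cols]

# 3 stored triplets instead of 1,000,000 entries
assert len(values) == 3
```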