Inverse Matrix #
The inverse of a matrix is a matrix that, when multiplied by the original matrix, produces the identity matrix.
A square matrix \(A\) is invertible if there exists a matrix \(A^{-1}\) such that:
\[ AA^{-1} = A^{-1}A = I \]
Here:
- \(A\) is the original matrix
- \(A^{-1}\) is its inverse
- \(I\) is the identity matrix
The inverse “undoes” the effect of the matrix transformation.
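The defining property \(AA^{-1} = A^{-1}A = I\) is easy to check numerically. A minimal sketch with NumPy, using a hypothetical 2×2 matrix chosen purely for illustration:

```python
import numpy as np

# Hypothetical example matrix (any invertible square matrix works).
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = np.linalg.inv(A)

# Both products should equal the 2x2 identity, up to floating-point error.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```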
When does an inverse exist? #
A square matrix has an inverse only if its determinant is non-zero:
\[ \det(A) \neq 0 \]
If \(\det(A) = 0\), the matrix is singular and not invertible.
Geometrically, a non-invertible matrix collapses space into a lower dimension.
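A quick sketch of the singular case: the matrix below has a second row that is twice the first, so it collapses the plane onto a line, its determinant is zero, and NumPy refuses to invert it.

```python
import numpy as np

# Singular (rank-deficient) matrix: row 2 = 2 * row 1, so det = 1*4 - 2*2 = 0.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(S))  # 0.0 (up to floating-point error)

try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)
```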
Inverse using adjoint (classical formula) #
If \(\det(A) \neq 0\), the inverse can be computed using the adjoint (adjugate):
\[ A^{-1} = \frac{1}{\det(A)}\operatorname{adj}(A) \]
where the adjoint is defined as:
\[ \operatorname{adj}(A) = C^T \]
and:
- \(C\) is the cofactor matrix
- \(C^T\) is its transpose
This formula is important for exams and theory, but it is not used in practice for large matrices.
Numerical methods such as row reduction or matrix decompositions are preferred.
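For a 2×2 matrix the adjugate has the familiar closed form \(\operatorname{adj}\!\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\), which makes the classical formula easy to verify against a numerical routine. A sketch (the function name and example matrix are illustrative, not a library API):

```python
import numpy as np

def inverse_via_adjugate_2x2(A):
    """Classical 2x2 inverse: A^{-1} = adj(A) / det(A)."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    # adj(A) is the transposed cofactor matrix; for 2x2 it is [[d, -b], [-c, a]].
    adj = np.array([[d, -b],
                    [-c, a]])
    return adj / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# The classical formula agrees with NumPy's numerical inverse.
print(np.allclose(inverse_via_adjugate_2x2(A), np.linalg.inv(A)))  # True
```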
Determinant of the inverse #
A key identity relating determinants and inverses:
\[ \det(A^{-1}) = \frac{1}{\det(A)} \]
This identity holds only if \(\det(A) \neq 0\).
If the determinant is small, the inverse may exist but be numerically unstable.
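The identity \(\det(A^{-1}) = 1/\det(A)\) can be confirmed numerically on the same illustrative matrix:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det_A = np.linalg.det(A)                       # 10.0 for this example
det_A_inv = np.linalg.det(np.linalg.inv(A))

print(np.isclose(det_A_inv, 1.0 / det_A))  # True
```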
Why inverse matrices matter in Machine Learning #
Inverse matrices appear in:
- Solving systems of linear equations
- Least squares solutions
- Closed-form solutions in linear regression
- Covariance matrices
- Gaussian distributions
Many ML algorithms avoid explicit matrix inversion due to numerical instability.
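In practice, a system \(Ax = b\) is solved with a dedicated solver rather than by forming \(A^{-1}\) explicitly; the solver is faster and more numerically stable. A sketch comparing the two approaches on a random well-conditioned system (the seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
b = rng.normal(size=5)

# Explicit inversion (avoided in practice):
x_inv = np.linalg.inv(A) @ b

# Preferred: solve the system directly via an LU factorization.
x_solve = np.linalg.solve(A, b)

print(np.allclose(x_inv, x_solve))  # True
```

The same principle applies to least squares: `np.linalg.lstsq` is preferred over forming the normal-equations inverse \((A^TA)^{-1}A^Tb\) by hand.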
An inverse matrix reverses a linear transformation and exists only when the determinant is non-zero.