Orthonormal Basis #

A basis is orthonormal if its vectors are:

  • mutually orthogonal
  • of unit length
\[ \langle \mathbf{e}_i, \mathbf{e}_j \rangle = \begin{cases} 1 & i=j \\ 0 & i\ne j \end{cases} \]

Key Idea: Orthonormal basis = perfectly independent + perfectly scaled. This makes projections and other computations extremely simple and numerically stable.
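As a quick numerical sketch (assuming NumPy; the two vectors below are just the standard basis of 2D), the definition can be checked by testing both conditions directly:

```python
import numpy as np

# Example basis: the standard basis vectors of R^2
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Orthogonal: inner product is 0 for i != j
print(np.dot(e1, e2))                       # 0.0

# Unit length: each vector has norm 1
print(np.linalg.norm(e1), np.linalg.norm(e2))  # 1.0 1.0
```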


Intuition (From Lectures) #

From analytic geometry and vector space lectures:

  • A basis gives a coordinate system
  • An orthonormal basis gives a perfect coordinate system

Why?

  • No overlap between directions (orthogonal)
  • No scaling distortion (unit length)

Properties of Orthonormal Basis #

For vectors:

\[ \{e_1, e_2, \dots, e_n\} \]

They satisfy:

Orthogonality #

\[ \langle e_i, e_j \rangle = 0 \quad (i \ne j) \]

Unit Norm #

\[ \|e_i\| = 1 \]

Matrix Form (Important) #

If we form a matrix:

\[ Q = [e_1 \; e_2 \; \cdots \; e_n] \]

Then:

\[ Q^T Q = I \]

This means:

  • columns are orthonormal
  • Q is an orthogonal matrix
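A minimal sketch of this identity with NumPy (the rotation matrix below is an illustrative choice; any matrix with orthonormal columns works as Q):

```python
import numpy as np

# A 2D rotation matrix has orthonormal columns, so it serves as an example Q
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q^T Q should equal the identity (up to floating-point error)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
```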

Why Orthonormal Basis Matters #

From lectures:

1. Easy Coordinates #

Any vector can be written as:

\[ x = \sum_{i=1}^{n} \langle x, e_i \rangle e_i \]

No linear system needs to be solved.
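The expansion above can be sketched in NumPy (the rotated basis below is an illustrative choice): each coordinate is a single inner product, with no system of equations involved.

```python
import numpy as np

# Orthonormal basis of R^2 (a rotated standard basis; illustrative choice)
e1 = np.array([np.cos(0.5),  np.sin(0.5)])
e2 = np.array([-np.sin(0.5), np.cos(0.5)])

x = np.array([2.0, -1.0])

# Coordinates are plain inner products <x, e_i>
c1, c2 = np.dot(x, e1), np.dot(x, e2)

# Reconstruct x from the expansion sum <x, e_i> e_i
x_rebuilt = c1 * e1 + c2 * e2
print(np.allclose(x_rebuilt, x))  # True
```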


2. Simple Projection #

Projection onto basis vector:

\[ \text{proj}_{e_i}(x) = \langle x, e_i \rangle e_i \]
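A small sketch of the projection formula (assuming NumPy; the formula needs no division by \( \|e_i\|^2 \) precisely because the basis vector has unit length):

```python
import numpy as np

e = np.array([1.0, 0.0])   # unit basis vector
x = np.array([3.0, 4.0])

# proj_e(x) = <x, e> e  (valid because e has unit length)
proj = np.dot(x, e) * e
print(proj)  # [3. 0.]
```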

3. Numerical Stability #

  • No redundancy
  • No scaling distortion
  • Stable computations

Connection to Gram-Schmidt (From Lecture) #

From lecture discussions:

  • We can convert any linearly independent set into an orthonormal basis
  • using the Gram-Schmidt process

Steps:

  1. Start with independent vectors
  2. Make them orthogonal
  3. Normalize them
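The three steps above can be sketched as a short NumPy function (a minimal implementation, assuming the input vectors are linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        # Step 2: subtract projections onto the basis built so far
        w = v - sum(np.dot(v, e) * e for e in basis)
        # Step 3: normalize (assumes the inputs are independent, so w != 0)
        basis.append(w / np.linalg.norm(w))
    return basis

# Independent but not orthogonal input
e1, e2 = gram_schmidt([np.array([1.0, 0.0]), np.array([1.0, 1.0])])
print(e1, e2)  # [1. 0.] [0. 1.]
```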

Geometric Interpretation #

  • Orthonormal basis = perpendicular axes
  • Like standard coordinate axes

Example:

\[ (1,0), (0,1) \]

These are orthonormal in 2D.


Connection to Matrix Decomposition #

From later lectures:

  • Eigendecomposition of a symmetric matrix uses orthonormal eigenvectors
  • SVD produces orthogonal matrices
\[ A = U \Sigma V^T \]

Where:

  • U and V are orthogonal matrices (their columns are orthonormal)
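This can be sketched with NumPy's SVD routine (the matrix A below is an arbitrary illustrative choice):

```python
import numpy as np

# Arbitrary example matrix
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

U, S, Vt = np.linalg.svd(A)

# U and V have orthonormal columns: U^T U = I and V^T V = I
print(np.allclose(U.T @ U, np.eye(2)))      # True
print(np.allclose(Vt @ Vt.T, np.eye(2)))    # True

# Reconstruction: A = U Sigma V^T
print(np.allclose(U @ np.diag(S) @ Vt, A))  # True
```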

Orthonormal vs Ordinary Basis #

| Type | Property |
| --- | --- |
| Basis | Independent + spanning |
| Orthonormal basis | Independent + spanning + orthogonal + unit norm |

Example #

Given vectors:

\[ v_1 = (1,0), \quad v_2 = (1,1) \]

These are not orthogonal.

After Gram-Schmidt:

\[ e_1 = (1,0), \quad e_2 = (0,1) \]

Now orthonormal.
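The arithmetic of this example can be checked directly (a NumPy sketch of the two Gram-Schmidt steps):

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])

# Not orthogonal: <v1, v2> = 1, not 0
print(np.dot(v1, v2))          # 1.0

# Gram-Schmidt by hand:
e1 = v1 / np.linalg.norm(v1)   # already unit length: (1, 0)
w2 = v2 - np.dot(v2, e1) * e1  # remove the e1 component: (0, 1)
e2 = w2 / np.linalg.norm(w2)

print(e1, e2)  # [1. 0.] [0. 1.]
```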


Applications in Machine Learning #

  • PCA gives orthonormal directions
  • SVD produces orthonormal matrices
  • Feature decorrelation
  • Dimensionality reduction
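As a sketch of the PCA point (assuming NumPy; the dataset below is synthetic), the principal directions come out of the SVD of the centered data and are automatically orthonormal:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # synthetic data: 100 samples, 3 features
X = X - X.mean(axis=0)          # center before PCA

# Principal directions are the right singular vectors of the centered data
_, _, Vt = np.linalg.svd(X, full_matrices=False)

# The directions form an orthonormal set: V^T V = I
print(np.allclose(Vt @ Vt.T, np.eye(3)))  # True
```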

Hidden Exam Pattern #

From lecture flow:

  • Orthonormal basis appears with:
    • orthogonality
    • inner product
    • projections
    • Gram-Schmidt

👉 It is rarely asked about on its own.


Common Mistakes #

  • Forgetting normalization
  • Assuming orthogonal = orthonormal
  • Not checking unit length
  • Mixing up row and column orthogonality

Strategy to Prepare #

  1. Practice Gram-Schmidt
  2. Verify orthogonality using dot product
  3. Normalize vectors correctly
  4. Understand projection formula

Quick Summary Table #

| Concept | Formula | Meaning |
| --- | --- | --- |
| Orthogonal | \( \langle e_i, e_j \rangle = 0 \) | Independent directions |
| Unit norm | \( \|e_i\| = 1 \) | No scaling distortion |
| Orthonormal | both above | Perfect basis |
| Matrix form | \( Q^T Q = I \) | Orthogonal matrix |
