Inner Products and Dot Product #

An inner product maps two vectors to a single scalar.

It allows us to measure:

  • similarity
  • vector length
  • projections
  • orthogonality

Definition #

For vectors
\( \mathbf{a}, \mathbf{b} \in \mathbb{R}^n \),

the inner product is written

\( \langle \mathbf{a}, \mathbf{b} \rangle. \)

In \( \mathbb{R}^n \), the standard inner product is the dot product.


Dot Product Formula #

Let

\( \mathbf{a} = (a_1, \dots, a_n) \)
\( \mathbf{b} = (b_1, \dots, b_n) \)

\[ \mathbf{a}\cdot\mathbf{b} = \sum_{i=1}^{n} a_i b_i \]
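The formula translates directly into a sum over paired components. A minimal pure-Python sketch (the function name `dot` is our own, not from any library):

```python
def dot(a, b):
    """Dot product of two equal-length vectors: the sum of a_i * b_i."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x * y for x, y in zip(a, b))

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```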

Key Properties #

Let
\( \mathbf{a},\mathbf{b},\mathbf{c}\in\mathbb{R}^n \)
and \( \lambda\in\mathbb{R} \).

1) Symmetry #

\[ \mathbf{a}\cdot\mathbf{b} = \mathbf{b}\cdot\mathbf{a} \]

2) Linearity #

\[ (\mathbf{a}+\mathbf{b})\cdot\mathbf{c} = \mathbf{a}\cdot\mathbf{c} + \mathbf{b}\cdot\mathbf{c} \]
\[ (\lambda\mathbf{a})\cdot\mathbf{b} = \lambda(\mathbf{a}\cdot\mathbf{b}) \]

3) Positivity #

\[ \mathbf{a}\cdot\mathbf{a} \ge 0 \]

Equality holds if and only if
\( \mathbf{a}=\mathbf{0} \).
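The three properties above can be checked numerically for sample vectors. A small sketch, where `dot` is our own helper implementing the summation formula:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b, c = [1, 2], [3, -1], [0, 4]
lam = 2.5

# 1) Symmetry: a . b == b . a
assert dot(a, b) == dot(b, a)

# 2) Linearity: (a + b) . c == a . c + b . c  and  (lam a) . b == lam (a . b)
a_plus_b = [x + y for x, y in zip(a, b)]
assert dot(a_plus_b, c) == dot(a, c) + dot(b, c)
lam_a = [lam * x for x in a]
assert dot(lam_a, b) == lam * dot(a, b)

# 3) Positivity: a . a >= 0, with equality only for the zero vector
assert dot(a, a) >= 0
assert dot([0, 0], [0, 0]) == 0
```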


Norm (Length of a Vector) #

\[ \|\mathbf{a}\| = \sqrt{\mathbf{a}\cdot\mathbf{a}} \]
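The norm follows immediately from the dot product. A minimal sketch using the standard library (`norm` is our own helper name):

```python
import math

def norm(a):
    """Euclidean length: sqrt(a . a)."""
    return math.sqrt(sum(x * x for x in a))

print(norm([3, 4]))  # sqrt(9 + 16) = 5.0
```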

Important Theoretical Result #

The Cauchy–Schwarz Inequality, \( |\mathbf{a}\cdot\mathbf{b}| \le \|\mathbf{a}\|\,\|\mathbf{b}\| \), bounds the dot product and guarantees that the angle formula \( \cos\theta = \frac{\mathbf{a}\cdot\mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|} \) is well-defined.

See: Cauchy–Schwarz Inequality
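A quick numerical check of the inequality, and of the angle formula it makes well-defined (a sketch with illustrative vectors; `dot` and `norm` are our own helpers):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

a, b = [1, 2, 2], [2, 0, 1]

# Cauchy-Schwarz: |a . b| <= ||a|| ||b||
assert abs(dot(a, b)) <= norm(a) * norm(b)

# The bound guarantees the ratio lies in [-1, 1], so acos is defined:
cos_theta = dot(a, b) / (norm(a) * norm(b))
theta = math.acos(cos_theta)  # angle between a and b, in radians
```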


Machine Learning Connection #

The dot product appears in:

  • Linear regression
\[ \hat{y} = \mathbf{w}\cdot\mathbf{x} + b \]
  • Neural networks
  • SVM linear kernel
  • Cosine similarity
  • Gradient-based optimisation
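Two of these uses are easy to sketch in plain Python; the values and helper names below are illustrative, not from any particular library:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (||a|| ||b||)."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

# Linear regression prediction: y_hat = w . x + b
w, x, bias = [0.5, -1.0], [2.0, 3.0], 0.1
y_hat = dot(w, x) + bias  # 0.5*2.0 + (-1.0)*3.0 + 0.1 = -1.9

print(cosine_similarity([1, 0], [1, 1]))  # ~0.7071
```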

Summary #

  • Inner product maps two vectors to a scalar
  • In \( \mathbb{R}^n \), it is the dot product
  • Defines vector length
  • Foundation for geometry and similarity
