Vapnik-Chervonenkis (VC) dimension

Definition. Let \({\cal C}\) be a set of classifiers on \({\cal X}\). A finite set of points in \({\cal X}\) is shattered by \({\cal C}\) if every possible \(\{0,1\}\)-labeling of the points is realized by some classifier in \({\cal C}\). \(VC({\cal C})\) is the maximal number of points in \({\cal X}\) that can be shattered (i.e., arbitrarily classified) by classifiers in \({\cal C}\).
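
Shattering can be checked by brute force when the pool of classifiers is finite. A minimal sketch (the helper `is_shattered` and the threshold pool are illustrative, not from the source):

```python
from itertools import product

def is_shattered(points, classifiers):
    """True iff every {0,1}-labeling of `points` is realized by some
    classifier in the finite pool `classifiers`."""
    realized = {tuple(h(x) for x in points) for h in classifiers}
    return all(y in realized for y in product([0, 1], repeat=len(points)))

# Threshold classifiers on the real line: eta_t(x) = I{x >= t}.
thresholds = [lambda x, t=t: int(x >= t) for t in [-1.0, 0.5, 2.0]]
print(is_shattered([0.0], thresholds))       # True: both labels are realized
print(is_shattered([0.0, 1.0], thresholds))  # False: labeling (1, 0) is not
```

This matches the definition: one point on the line is shattered by thresholds, two points are not, so the threshold family has VC dimension 1.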

Example: VC of linear classifiers. \({\cal C}=\{\eta({\bf x})=I\{\beta^T {\bf x} \geq 0\},\;\beta \in \mathbb{R}^d\}\)

\({\cal X}=\mathbb{R}^2, \;{\cal C}=\{\eta({\bf x})=I\{\beta_0+\beta_1 x_1 + \beta_2 x_2 \geq 0\}\}\;\;\;\;\;\) \(VC({\cal C})=3\;(=d,\ \text{since } \beta \in \mathbb{R}^3)\)

In general, \({\cal X}=\mathbb{R}^{d-1},\; \beta \in \mathbb{R}^d\) with the homogeneous coordinate \(x_0=1\) absorbing the intercept\(:\;\;\;\;\) \(VC({\cal C})=d\)
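
The lower bound \(VC({\cal C})\geq 3\) for \({\cal X}=\mathbb{R}^2\) can be checked directly: for three affinely independent points the homogeneous data matrix is invertible, so every \(\pm 1\) labeling is realized exactly by solving a linear system. A sketch (the specific points are an illustrative choice):

```python
import numpy as np
from itertools import product

# Three non-collinear points in R^2, in homogeneous coordinates
# (the leading 1 absorbs the intercept beta_0).
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

# X is invertible, so X beta = y is solvable for every labeling
# y in {-1, +1}^3, giving sign(beta^T x_i) = y_i exactly.
for y in product([-1.0, 1.0], repeat=3):
    beta = np.linalg.solve(X, np.array(y))
    assert np.array_equal(np.sign(X @ beta), np.array(y))
print("all 8 labelings of 3 points realized, so VC >= 3")
```

The matching upper bound follows from Radon's theorem: any 4 points in \(\mathbb{R}^2\) admit a labeling (e.g., the XOR pattern on a convex quadrilateral) that no half-plane realizes, so \(VC({\cal C})=3\).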

Example: VC of sine classifiers: \({\cal X}=\mathbb{R}\), \({\cal C}=\{\eta(x)=I\{x \geq \sin(\theta x)\},\; \theta>0\}\)

For every \(n\) there is a set of \(n\) points that \({\cal C}\) shatters, so \(VC({\cal C})=\infty\)
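
The standard constructive proof is usually stated for the closely related family \(I\{\sin(\theta x) > 0\}\): the points \(x_i = 10^{-i}\) are shattered because \(\theta = \pi\bigl(1 + \sum_i (1-y_i)10^i\bigr)\) realizes any labeling \(y\). A numerical check of that textbook variant (the construction is standard, but the code and its names are illustrative, not from the source):

```python
import math
from itertools import product

def check_sine_shattering(n=4):
    """Verify that x_i = 10^{-i}, i = 1..n, is shattered by
    h_theta(x) = I{sin(theta x) > 0}. For labels y, the choice
    theta = pi * (1 + sum_i (1 - y_i) * 10^i) works: terms with
    j > i shift the argument by even multiples of pi, and the
    j = i term flips the sign exactly when y_i = 0."""
    xs = [10.0 ** -i for i in range(1, n + 1)]
    for ys in product([0, 1], repeat=n):  # all 2^n labelings
        theta = math.pi * (1 + sum((1 - y) * 10 ** i
                                   for i, y in enumerate(ys, start=1)))
        preds = [int(math.sin(theta * x) > 0) for x in xs]
        assert preds == list(ys), (ys, preds)
    print(f"all {2 ** n} labelings of {n} points realized")

# theta grows like 10^n, so keep n modest for float64 precision.
check_sine_shattering()
```

Since the construction works for every \(n\), arbitrarily large point sets are shattered, which is exactly what \(VC({\cal C})=\infty\) asserts.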

Surrogate convex losses