6.2.2. Classification#

Classification is the supervised learning task of predicting a discrete class label from input features. Instead of estimating a continuous quantity, the model assigns each example to one of a fixed set of categories: spam or not spam, malignant or benign, one digit out of ten.

Formally, we learn a function \(f(X) \rightarrow y\) where \(y \in \{0, 1, \ldots, K-1\}\). For binary classification there are only two classes (\(K = 2\)); for multi-class classification there are \(K > 2\) classes.
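As a minimal sketch of this setup (assuming scikit-learn is available; the toy data and labeling rule here are illustrative, not from the text), we can fit a binary classifier and query it for discrete labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary problem: label is 1 when the two features sum to a positive value.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# f(X) -> y: the fitted model maps feature vectors to labels in {0, 1}.
clf = LogisticRegression().fit(X, y)
pred = clf.predict([[2.0, 1.0], [-2.0, -1.0]])
print(pred)  # discrete class labels, not continuous values
```

Note that `predict` returns hard labels; `predict_proba` would return the estimated class probabilities instead.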

Common examples:

  • Detecting whether a tumour is malignant or benign from clinical measurements

  • Filtering email into spam vs. legitimate categories

  • Classifying handwritten digits from pixel intensities
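The digits example above is small enough to run directly. As a sketch (assuming scikit-learn's bundled `load_digits` dataset and an SVM, one of many possible classifier choices):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 8x8 images of handwritten digits, flattened to 64 pixel intensities; K = 10 classes.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = SVC().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy
```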

Classifiers can be broadly grouped into two families:

  • Discriminative models (Logistic Regression, SVM, Decision Trees, k-NN) model the conditional distribution \(P(y|X)\), or the decision boundary itself, directly.

  • Generative models (Naïve Bayes) model the joint distribution \(P(X, y)\) and apply Bayes’ theorem to infer the class.
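The distinction above can be made concrete by training one model from each family on the same data. This is a sketch using scikit-learn on a synthetic dataset (the dataset parameters are arbitrary choices for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

disc = LogisticRegression().fit(X_tr, y_tr)  # discriminative: models P(y|X) directly
gen = GaussianNB().fit(X_tr, y_tr)           # generative: models P(X|y) and P(y), then applies Bayes' theorem
print(disc.score(X_te, y_te), gen.score(X_te, y_te))
```

Both expose the same `fit`/`predict` interface; the difference is internal, in what each model estimates from the training data.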