$k$-Nearest Neighbors Algorithm
$k$-Nearest Neighbor ($k$NN) is a supervised learning algorithm.
Advantages: high accuracy, insensitive to outliers, no assumptions about the input data
Disadvantages: high computational complexity, high space complexity
How It Works
Given a test sample, find the $k$ training samples in the training set that are closest to it under some distance metric, then make a prediction based on the information from these $k$ neighbors.
The general procedure is:
- Compute the distance between each point in the labeled dataset and the current point
- Sort the distances in ascending order
- Select the $k$ points closest to the current point
- Determine the frequency of each class among these $k$ points
- Return the most frequent class among these $k$ points as the predicted class of the current point
For classification, the most frequent class label among the $k$ samples can be taken as the prediction.
For regression, the average of the real-valued outputs of the $k$ samples can be taken as the prediction.
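The five steps above can be sketched as a single function. This is a minimal illustration, not the full implementation given later; the toy data and the helper name `knn_predict` are made up for the example:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict the class of a single point x by majority vote
    among its k nearest training samples (Euclidean distance)."""
    # 1. Compute distances from x to every training point
    distances = np.sqrt(np.sum((X_train - x) ** 2, axis=1))
    # 2-3. Sort by distance and take the indices of the k nearest
    nearest = np.argsort(distances)[:k]
    # 4. Count class frequencies among the k neighbors
    votes = Counter(y_train[i] for i in nearest)
    # 5. Return the most frequent class
    return votes.most_common(1)[0][0]

X_train = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
y_train = np.array(['A', 'A', 'B', 'B'])
print(knn_predict(X_train, y_train, np.array([0.9, 0.9]), k=3))  # 'A'
```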
Choosing $k$
Different values of $k$ can produce markedly different classification results.
If $k$ is too small: the feature space is partitioned into more subregions and the overall model becomes more complex; predictions become highly sensitive to the nearest points, so errors on noisy neighbors propagate directly, and the model is prone to overfitting.
If $k$ is too large: the neighborhood error grows, since distant points also influence the prediction, introducing large bias; the model is prone to underfitting.
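In practice, $k$ is usually chosen by measuring accuracy on held-out data. Below is a toy sketch of that idea on synthetic two-cluster data; the data, the split, and the helper names are all invented for illustration:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    distances = np.sqrt(np.sum((X_train - x) ** 2, axis=1))
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

def validation_accuracy(X_tr, y_tr, X_val, y_val, k):
    """Fraction of validation points classified correctly with a given k."""
    preds = [knn_predict(X_tr, y_tr, x, k) for x in X_val]
    return np.mean(np.array(preds) == y_val)

rng = np.random.default_rng(0)
# Two Gaussian clusters as toy data
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = rng.permutation(100)
X_tr, y_tr = X[idx[:70]], y[idx[:70]]
X_val, y_val = X[idx[70:]], y[idx[70:]]

for k in (1, 3, 5, 15):
    print(k, validation_accuracy(X_tr, y_tr, X_val, y_val, k))
```

One would pick the $k$ with the best validation accuracy (cross-validation is the more robust variant of the same idea).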
Distance Metrics
Different ways of computing distance can also lead to markedly different classification results.
For a function $\operatorname{dist}(\cdot, \cdot)$ to be a distance metric, it must satisfy some basic properties:
$$\text{Non-negativity: } \operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right) \geqslant 0$$
$$\text{Identity: } \operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=0 \text{ if and only if } \boldsymbol{x}_{i}=\boldsymbol{x}_{j}$$
$$\begin{aligned}&\text{Symmetry: } \operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\operatorname{dist}\left(\boldsymbol{x}_{j}, \boldsymbol{x}_{i}\right)\\&\text{Triangle inequality: } \operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right) \leqslant \operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{k}\right)+\operatorname{dist}\left(\boldsymbol{x}_{k}, \boldsymbol{x}_{j}\right)\end{aligned}$$
Given samples $\boldsymbol{x}_{i}=\left(x_{i 1} ; x_{i 2} ; \ldots ; x_{i n}\right)$ and $\boldsymbol{x}_{j}=\left(x_{j 1} ; x_{j 2} ; \ldots ; x_{j n}\right)$, the most commonly used metric is the Minkowski distance:
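These four axioms can be checked numerically for a concrete metric. A small sketch, using the Euclidean norm as the candidate metric and arbitrary random points:

```python
import numpy as np

rng = np.random.default_rng(42)
xi, xj, xk = rng.normal(size=(3, 5))  # three random 5-dimensional points

dist = lambda a, b: np.linalg.norm(a - b)  # candidate metric: Euclidean

assert dist(xi, xj) >= 0                             # non-negativity
assert dist(xi, xi) == 0                             # identity
assert np.isclose(dist(xi, xj), dist(xj, xi))        # symmetry
assert dist(xi, xj) <= dist(xi, xk) + dist(xk, xj)   # triangle inequality
print("all four metric axioms hold for these points")
```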
$$\operatorname{dist}_{\mathrm{mk}}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\left(\sum_{u=1}^{n}\left|x_{i u}-x_{j u}\right|^{p}\right)^{\frac{1}{p}}$$
When $p=2$, the Minkowski distance becomes the Euclidean distance (Euclidean distance): the straight-line distance between two points in Euclidean space, which is also the natural length of their difference vector:
$$\operatorname{dist}_{\mathrm{ed}}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\left\|\boldsymbol{x}_{i}-\boldsymbol{x}_{j}\right\|_{2}=\sqrt{\sum_{u=1}^{n}\left|x_{i u}-x_{j u}\right|^{2}}$$
When $p=1$, the Minkowski distance becomes the Manhattan distance (Manhattan distance), also called city-block distance: the sum of the lengths of the projections onto the coordinate axes of the segment joining the two points in a fixed Cartesian coordinate system:
$$\operatorname{dist}_{\operatorname{man}}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\left\|\boldsymbol{x}_{i}-\boldsymbol{x}_{j}\right\|_{1}=\sum_{u=1}^{n}\left|x_{i u}-x_{j u}\right|$$
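The Minkowski formula covers both special cases with a single parameter $p$. A minimal sketch (the function name and test vectors are made up for the example):

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance: (sum_u |x_u - y_u|^p)^(1/p)."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 3.0])
print(minkowski(x, y, 1))  # p=1, Manhattan: 3 + 4 + 0 = 7.0
print(minkowski(x, y, 2))  # p=2, Euclidean: sqrt(9 + 16) = 5.0
```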
Code Implementation
From IMOOC Machine Learning
```python
import numpy as np
from math import sqrt
from collections import Counter


class KNNClassifier:

    def __init__(self, k):
        """Initialize the kNN classifier."""
        assert k >= 1, "k must be valid"
        self.k = k
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        """Train the kNN classifier on the training set X_train, y_train."""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert self.k <= X_train.shape[0], \
            "the size of X_train must be at least k."
        self._X_train = X_train
        self._y_train = y_train
        return self

    def predict(self, X_predict):
        """Given a dataset X_predict, return the vector of predictions."""
        assert self._X_train is not None and self._y_train is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == self._X_train.shape[1], \
            "the feature number of X_predict must be equal to X_train"
        y_predict = [self._predict(x) for x in X_predict]
        return np.array(y_predict)

    def _predict(self, x):
        """Return the prediction for a single sample x."""
        assert x.shape[0] == self._X_train.shape[1], \
            "the feature number of x must be equal to X_train"
        distances = [sqrt(np.sum((x_train - x) ** 2))
                     for x_train in self._X_train]
        nearest = np.argsort(distances)
        topK_y = [self._y_train[i] for i in nearest[:self.k]]
        votes = Counter(topK_y)
        return votes.most_common(1)[0][0]

    def score(self, X_test, y_test):
        """Compute the accuracy of the current model on X_test and y_test."""
        y_predict = self.predict(X_test)
        # accuracy_score is a module-level function, not a method
        return accuracy_score(y_test, y_predict)

    def __repr__(self):
        return "KNN(k=%d)" % self.k


def accuracy_score(y_true, y_predict):
    """Compute the accuracy between y_true and y_predict."""
    assert len(y_true) == len(y_predict), \
        "the size of y_true must be equal to the size of y_predict"
    return np.sum(y_true == y_predict) / len(y_true)
```
References
Lp distance, Euclidean distance, Manhattan distance, Chebyshev distance
What is the KNN algorithm?
IMOOC Machine Learning
Machine Learning by Zhou Zhihua
Machine Learning in Action by Peter Harrington