A Quick Review of Soft Voting and Hard Voting in Ensemble Methods
Ensemble methods combine the results of two or more separate machine learning algorithms in an attempt to produce results that are more accurate than any single algorithm on its own.
In soft voting, the probabilities for each class are averaged to produce the result. For example, if algorithm 1 predicts that an object is a rock with 40% probability and algorithm 2 predicts it is a rock with 80% probability, the ensemble predicts that the object is a rock with a (80 + 40) / 2 = 60% probability.
In hard voting, each algorithm's prediction counts as one vote, and the class with the most votes is chosen. For example, if three algorithms predict the colour of a particular wine as "red", "red" and "white", the ensemble predicts "red".
The simplest way to put it: soft voting is an ensemble of probabilities, hard voting is an ensemble of predicted labels.
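A minimal sketch of the two ideas, reusing the made-up rock and wine examples above (this snippet is for illustration only and is not part of the original notebook):

import numpy as np
from statistics import mode

# Soft voting: average the two algorithms' predicted probabilities of "rock"
rock_probabilities = [0.4, 0.8]
print(np.mean(rock_probabilities))    # 0.6 -> the ensemble says 60% rock

# Hard voting: take the most common predicted label from the three algorithms
wine_predictions = ["red", "red", "white"]
print(mode(wine_predictions))         # "red"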
Generating Test Data
Let's start writing the code. First, import the required libraries and set up some simple configuration:
import pandas as pd
import numpy as np
import copy as cp
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from typing import Tuple
from statistics import mode
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from lightgbm import LGBMClassifier

RANDOM_STATE : int = 42
N_SAMPLES : int = 10000
N_FEATURES : int = 25
N_CLASSES : int = 3
N_CLUSTERS_PER_CLASS : int = 2
FEATURE_NAME_PREFIX : str = "Feature"
TARGET_NAME : str = "Target"
N_SPLITS : int = 5

np.set_printoptions(suppress=True)
We also need some data to use as input to the classification. The make_classification_dataframe function creates test data containing the features and the target.

Here the number of classes is set to 3, so that the soft and hard voting implementations work for multi-class problems (anything above 2 classes). The same code also works for binary classification.
def make_classification_dataframe(n_samples : int = 10000, n_features : int = 25, n_classes : int = 2, n_clusters_per_class : int = 2, feature_name_prefix : str = "Feature", target_name : str = "Target", random_state : int = 42) -> pd.DataFrame:
    X, y = make_classification(n_samples=n_samples, n_features=n_features, n_classes=n_classes, n_informative=n_classes*n_clusters_per_class, random_state=random_state)
    feature_names = [feature_name_prefix + " " + str(v) for v in np.arange(1, n_features+1)]
    return pd.concat([pd.DataFrame(X, columns=feature_names), pd.DataFrame(y, columns=[target_name])], axis=1)

df_data = make_classification_dataframe(n_samples=N_SAMPLES, n_features=N_FEATURES, n_classes=N_CLASSES, n_clusters_per_class=N_CLUSTERS_PER_CLASS, feature_name_prefix=FEATURE_NAME_PREFIX, target_name=TARGET_NAME, random_state=RANDOM_STATE)

X = df_data.drop([TARGET_NAME], axis=1).to_numpy()
y = df_data[TARGET_NAME].to_numpy()

df_data.head()
The generated data looks like this:
Cross-Validation
Cross-validation is used instead of train_test_split because it gives a more robust assessment of algorithm performance.

The cross_val_predict helper function provides the code to do this:
def cross_val_predict(model, kfold : KFold, X : np.array, y : np.array) -> Tuple[np.array, np.array, np.array]:
    model_ = cp.deepcopy(model)
    no_classes = len(np.unique(y))
    actual_classes = np.empty([0], dtype=int)
    predicted_classes = np.empty([0], dtype=int)
    predicted_proba = np.empty([0, no_classes])

    for train_ndx, test_ndx in kfold.split(X):
        train_X, train_y, test_X, test_y = X[train_ndx], y[train_ndx], X[test_ndx], y[test_ndx]
        actual_classes = np.append(actual_classes, test_y)
        model_.fit(train_X, train_y)
        predicted_classes = np.append(predicted_classes, model_.predict(test_X))
        try:
            predicted_proba = np.append(predicted_proba, model_.predict_proba(test_X), axis=0)
        except:
            predicted_proba = np.append(predicted_proba, np.zeros((len(test_X), no_classes), dtype=float), axis=0)

    return actual_classes, predicted_classes, predicted_proba
A try/except is wrapped around predict_proba because not all algorithms support probabilities, and there is no consistent warning or error type that can be caught explicitly.
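For instance (an illustrative snippet reusing the X and y created above, not part of the original notebook), scikit-learn's SVC only exposes probability estimates when constructed with probability=True; with the default setting, calling predict_proba raises an AttributeError, which the fallback above catches and replaces with zeros:

from sklearn.svm import SVC

svc_no_proba = SVC(probability=False).fit(X[:200], y[:200])  # default: no probability estimates
try:
    svc_no_proba.predict_proba(X[:5])
except Exception as e:
    print(type(e).__name__, ":", e)   # AttributeError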
Before getting started, let's take a quick look at cross_val_predict for a single algorithm:
lr = LogisticRegression(random_state=RANDOM_STATE)
kfold = KFold(n_splits=N_SPLITS, random_state=RANDOM_STATE, shuffle=True)

%time actual, lr_predicted, lr_predicted_proba = cross_val_predict(lr, kfold, X, y)
print(f"Accuracy of Logistic Regression: {accuracy_score(actual, lr_predicted)}")
lr_predicted

Wall time: 309 ms
Accuracy of Logistic Regression: 0.6821
array([0, 0, 1, ..., 0, 2, 1])
The cross_val_predict function has returned both probabilities and predicted classes, and the predicted classes are shown in the cell output.

The first data points are predicted to belong to classes 0, 0, 1 and so on.
Making Predictions with Multiple Classifiers
The next step is to generate a set of predictions and probabilities for several classifiers; the algorithms chosen here are random forest, XGBoost and extremely randomised trees.
def cross_val_predict_all_classifiers(classifiers : dict) -> Tuple[np.array, np.array]:
    predictions = [None] * len(classifiers)
    predicted_probas = [None] * len(classifiers)
    for i, (name, classifier) in enumerate(classifiers.items()):
        %time actual, predictions[i], predicted_probas[i] = cross_val_predict(classifier, kfold, X, y)
        print(f"Accuracy of {name}: {accuracy_score(actual, predictions[i])}")
    return actual, predictions, predicted_probas

classifiers = dict()
classifiers["Random Forest"] = RandomForestClassifier(random_state=RANDOM_STATE)
classifiers["XG Boost"] = XGBClassifier(use_label_encoder=False, eval_metric='logloss', random_state=RANDOM_STATE)
classifiers["Extra Random Trees"] = ExtraTreesClassifier(random_state=RANDOM_STATE)

actual, predictions, predicted_probas = cross_val_predict_all_classifiers(classifiers)

Wall time: 17.1 s
Accuracy of Random Forest: 0.8742
Wall time: 24.6 s
Accuracy of XG Boost: 0.8838
Wall time: 6.2 s
Accuracy of Extra Random Trees: 0.8754
The predictions variable is a list containing one set of predicted classes per algorithm:
[array([2, 0, 0, ..., 0, 2, 1]),
 array([2, 0, 2, ..., 0, 2, 1], dtype=int64),
 array([2, 0, 0, ..., 0, 2, 1])]
predicted_probas is also a list, but it contains the probabilities for each prediction. Each array has shape (10000, 3), where:

- 10,000 is the number of data points in the sample dataset; each array has one row per data point
- 3 is the number of classes in our non-binary classifier (our target has 3 classes)
[array([[0.17, 0.02, 0.81],
        [0.58, 0.07, 0.35],
        [0.54, 0.1 , 0.36],
        ...,
        [0.46, 0.08, 0.46],
        [0.15, 0.  , 0.85],
        [0.01, 0.97, 0.02]]),
 array([[0.05611309, 0.00085733, 0.94302952],
        [0.95303732, 0.00187497, 0.04508775],
        [0.4653917 , 0.01353438, 0.52107394],
        ...,
        [0.75208634, 0.0398241 , 0.20808953],
        [0.02066649, 0.00156501, 0.97776848],
        [0.00079027, 0.99868006, 0.00052966]]),
 array([[0.33, 0.02, 0.65],
        [0.54, 0.14, 0.32],
        [0.51, 0.17, 0.32],
        ...,
        [0.52, 0.06, 0.42],
        [0.1 , 0.03, 0.87],
        [0.05, 0.93, 0.02]])]
The first row of the output above reads as follows: the first algorithm's prediction for the first data point (i.e. the first row of the DataFrame) is a 17% probability of belonging to class 0, a 2% probability of belonging to class 1 and an 81% probability of belonging to class 2 (the three probabilities add up to 100%).
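As a quick sanity check (a small snippet reusing the predicted_probas list from above, not part of the original notebook):

print(predicted_probas[0][0])         # first algorithm, first data point, e.g. [0.17 0.02 0.81]
print(predicted_probas[0][0].sum())   # the class probabilities sum to ~1.0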
Soft Voting and Hard Voting
Now we get to the subject of this article. Soft voting and hard voting can each be implemented in just a few lines of Python:
def soft_voting(predicted_probas : list) -> np.array:
    sv_predicted_proba = np.mean(predicted_probas, axis=0)
    sv_predicted_proba[:,-1] = 1 - np.sum(sv_predicted_proba[:,:-1], axis=1)
    return sv_predicted_proba, sv_predicted_proba.argmax(axis=1)

def hard_voting(predictions : list) -> np.array:
    return [mode(v) for v in np.array(predictions).T]

sv_predicted_proba, sv_predictions = soft_voting(predicted_probas)
hv_predictions = hard_voting(predictions)

for i, (name, classifier) in enumerate(classifiers.items()):
    print(f"Accuracy of {name}: {accuracy_score(actual, predictions[i])}")

print(f"Accuracy of Soft Voting: {accuracy_score(actual, sv_predictions)}")
print(f"Accuracy of Hard Voting: {accuracy_score(actual, hv_predictions)}")

Accuracy of Random Forest: 0.8742
Accuracy of XG Boost: 0.8838
Accuracy of Extra Random Trees: 0.8754
Accuracy of Soft Voting: 0.8868
Accuracy of Hard Voting: 0.881
As the output above shows, soft voting beats the best-performing individual algorithm by 0.3% (88.68% vs 88.38%), while hard voting actually drops below it (88.10% vs 88.38%). The next sections explain both mechanisms in detail.
Voting Algorithm Implementation
Soft Voting
sv_predicted_proba = np.mean(predicted_probas, axis=0)
sv_predicted_proba

array([[0.18537103, 0.01361911, 0.80100984],
       [0.69101244, 0.07062499, 0.23836258],
       [0.50513057, 0.09451146, 0.40035798],
       ...,
       [0.57736211, 0.05994137, 0.36269651],
       [0.09022216, 0.01052167, 0.89925616],
       [0.02026342, 0.96622669, 0.01350989]])
The numpy mean function takes the average along axis 0, i.e. element-wise across the three classifiers' probability arrays. In theory that is all there is to soft voting, since this has already produced the mean of each of the three sets of outputs, and it looks correct.
print(np.sum(sv_predicted_proba[0]))
sv_predicted_proba[0]

0.9999999826153119
array([0.18537103, 0.01361911, 0.80100984])
However, because of rounding errors, the values in a row do not always add up to exactly 1, even though each data point belongs to one of three classes whose probabilities should sum to 1.

If all we do is take the highest-probability class as the predicted label, this error has no effect. But sometimes further processing requires the probabilities to sum to exactly 1, and then a simple fix is needed: set the last column to 1 minus the sum of the other columns.
sv_predicted_proba[:,-1] = 1 - np.sum(sv_predicted_proba[:,:-1], axis=1)

1.0
array([0.18537103, 0.01361911, 0.80100986])
Now the data is consistent: the probabilities in each row add up to 1, as they should.

The next step is to use numpy's argmax function to pick the class with the highest probability as the prediction (i.e. for each row, whether soft voting predicts class 0, 1 or 2):
sv_predicted_proba.argmax(axis=1)

array([2, 0, 0, ..., 0, 2, 1], dtype=int64)
The argmax function selects the index of the largest value in the array along the axis given in the axis parameter, so it picks 2 for the first row, 0 for the second row, 0 for the third row, and so on.
Hard Voting
hv_predicted = [mode(v) for v in np.array(predictions).T]
The np.array(predictions).T expression simply transposes the array, turning the (3, 10000) array into (10000, 3), so that each row holds the three algorithms' predictions for one data point.
print(np.array(predictions).shape)
np.array(predictions).T

(3, 10000)
array([[2, 2, 2],
       [0, 0, 0],
       [0, 2, 0],
       ...,
       [0, 0, 0],
       [2, 2, 2],
       [1, 1, 1]], dtype=int64)
The list comprehension then takes each element (row) and applies statistics.mode to it, picking the class that received the most votes from the algorithms:
np.array(hv_predicted)

array([2, 0, 0, ..., 0, 2, 1], dtype=int64)
Using Scikit-Learn
Writing the code from scratch helps us understand how the algorithms work, but there is no need to reinvent the wheel: scikit-learn's VotingClassifier can do all of this for us.
estimators = list(classifiers.items())

vc_sv = VotingClassifier(estimators=estimators, voting="soft")
vc_hv = VotingClassifier(estimators=estimators, voting="hard")

%time actual, vc_sv_predicted, vc_sv_predicted_proba = cross_val_predict(vc_sv, kfold, X, y)
%time actual, vc_hv_predicted, _ = cross_val_predict(vc_hv, kfold, X, y)

print(f"Accuracy of SciKit-Learn Soft Voting: {accuracy_score(actual, vc_sv_predicted)}")
print(f"Accuracy of SciKit-Learn Hard Voting: {accuracy_score(actual, vc_hv_predicted)}")

Wall time: 1min 4s
Wall time: 55.3 s
Accuracy of SciKit-Learn Soft Voting: 0.8868
Accuracy of SciKit-Learn Hard Voting: 0.881
The scikit-learn implementation produces exactly the same results as our hand-written algorithms: 88.68% accuracy for soft voting and 88.1% for hard voting.
Why does hard voting perform worse?
Consider the following situation, taking binary classification with five models as an example:

The probabilities for class B are 0.01, 0.51, 0.6, 0.1 and 0.7, so
the probabilities for class A are 0.99, 0.49, 0.4, 0.9 and 0.3.

With hard voting:
B gets 3 votes, A gets 2 votes, so the result is B.

With soft voting:
B: (0.01 + 0.51 + 0.6 + 0.1 + 0.7) / 5 = 0.384
A: (0.99 + 0.49 + 0.4 + 0.9 + 0.3) / 5 = 0.616

The result is A.
The hard-voting result ends up being decided by the models with relatively low confidence (0.51, 0.6, 0.7), whereas soft voting is driven by the highly confident models (0.99, 0.9). Soft voting effectively gives more weight to confident models, which is why it tends to perform better than hard voting.
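The same toy example can be checked with a few lines of numpy (a small sketch using the five hypothetical probability pairs above):

import numpy as np
from statistics import mode

# Each row is one model's [P(A), P(B)] from the example above
proba = np.array([[0.99, 0.01],
                  [0.49, 0.51],
                  [0.40, 0.60],
                  [0.90, 0.10],
                  [0.30, 0.70]])

hard_vote = mode(proba.argmax(axis=1))      # majority vote over predicted labels -> 1 (class B)
soft_vote = proba.mean(axis=0).argmax()     # average probabilities, then argmax  -> 0 (class A)
print(hard_vote, soft_vote)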
How Much Improvement Can Ensembling Actually Give?
So how much improvement in the accuracy metric can ensembling actually achieve? Let's use six common algorithms and see how much performance we can squeeze out of the ensemble...
classifiers = dict()
classifiers["Random Forest"] = RandomForestClassifier(random_state=RANDOM_STATE)
classifiers["XG Boost"] = XGBClassifier(use_label_encoder=False, eval_metric='logloss', random_state=RANDOM_STATE)
classifiers["Extra Random Trees"] = ExtraTreesClassifier(random_state=RANDOM_STATE)
classifiers['Neural Network'] = MLPClassifier(max_iter=1000, random_state=RANDOM_STATE)
classifiers['Support Vector Machine'] = SVC(probability=True, random_state=RANDOM_STATE)
classifiers['Light GBM'] = LGBMClassifier(random_state=RANDOM_STATE)

estimators = list(classifiers.items())
Method 1: Using the Hand-Written Code
%time actual, predictions, predicted_probas = cross_val_predict_all_classifiers(classifiers)  # Get a collection of predictions and probabilities from the selected algorithms

sv_predicted_proba, sv_predictions = soft_voting(predicted_probas)  # Combine those collections into a single set of predictions
hv_predictions = hard_voting(predictions)

print(f"Accuracy of Soft Voting: {accuracy_score(actual, sv_predictions)}")
print(f"Accuracy of Hard Voting: {accuracy_score(actual, hv_predictions)}")

Wall time: 14.9 s
Accuracy of Random Forest: 0.8742
Wall time: 32.8 s
Accuracy of XG Boost: 0.8838
Wall time: 5.78 s
Accuracy of Extra Random Trees: 0.8754
Wall time: 3min 2s
Accuracy of Neural Network: 0.8612
Wall time: 36.2 s
Accuracy of Support Vector Machine: 0.8674
Wall time: 1.65 s
Accuracy of Light GBM: 0.8828
Accuracy of Soft Voting: 0.8914
Accuracy of Hard Voting: 0.8851
Wall time: 4min 34s
Method 2: Using SciKit-Learn and cross_val_predict
%%time
vc_sv = VotingClassifier(estimators=estimators, voting="soft")
vc_hv = VotingClassifier(estimators=estimators, voting="hard")

%time actual, vc_sv_predicted, vc_sv_predicted_proba = cross_val_predict(vc_sv, kfold, X, y)
%time actual, vc_hv_predicted, _ = cross_val_predict(vc_hv, kfold, X, y)

print(f"Accuracy of SciKit-Learn Soft Voting: {accuracy_score(actual, vc_sv_predicted)}")
print(f"Accuracy of SciKit-Learn Hard Voting: {accuracy_score(actual, vc_hv_predicted)}")

Wall time: 4min 11s
Wall time: 4min 41s
Accuracy of SciKit-Learn Soft Voting: 0.8914
Accuracy of SciKit-Learn Hard Voting: 0.8859
Wall time: 8min 52s
Method 3: Using SciKit-Learn and cross_val_score
%time print(f"Accuracy of SciKit-Learn Soft Voting using cross_val_score: {np.mean(cross_val_score(vc_sv, X, y, cv=kfold))}")

Accuracy of SciKit-Learn Soft Voting using cross_val_score: 0.8914
Wall time: 4min 46s
All three approaches agree on the soft-voting accuracy score, which again confirms that our hand-written implementation is correct.
Addendum: A Limitation of Scikit-Learn Hard Voting
from catboost import CatBoostClassifier
from sklearn.model_selection import cross_val_score

classifiers["Cat Boost"] = CatBoostClassifier(silent=True, random_state=RANDOM_STATE)
estimators = list(classifiers.items())

vc_hv = VotingClassifier(estimators=estimators, voting="hard")
print(cross_val_score(vc_hv, X, y, cv=kfold))
Running this (adding CatBoost to the ensemble and scoring the hard-voting classifier with cross_val_score) produces an error:
ValueError: could not broadcast input array from shape (2000,1) into shape (2000)
This is straightforward to work around: CatBoost's predictions come out with shape (2000, 1) while shape (2000,) is expected, so the second dimension can simply be removed with np.squeeze.
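One way to apply that fix is sketched below. This wrapper is a hypothetical illustration rather than code from the original article, and it assumes that VotingClassifier's estimator cloning works with a CatBoostClassifier subclass:

import numpy as np
from catboost import CatBoostClassifier

class SqueezedCatBoostClassifier(CatBoostClassifier):
    # CatBoost's predict returns shape (n, 1); squeeze it to (n,) so that
    # VotingClassifier's hard voting can stack it with the other estimators.
    def predict(self, X, **kwargs):
        return np.squeeze(super().predict(X, **kwargs))

classifiers["Cat Boost"] = SqueezedCatBoostClassifier(silent=True, random_state=RANDOM_STATE)
estimators = list(classifiers.items())
vc_hv = VotingClassifier(estimators=estimators, voting="hard")
print(cross_val_score(vc_hv, X, y, cv=kfold))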
Summary
By adding the neural network, support vector machine and LightGBM to the mix, soft voting's accuracy improved by 0.46%, from 88.68% to 89.14%, and the new soft-voting accuracy is 0.76% higher than the best individual algorithm (XG Boost at 88.38%). Adding Light GBM, whose individual accuracy is lower than XG Boost's, still lifted the ensemble by 0.1%: a member does not have to be the best model to improve soft-voting performance. In a Kaggle competition, 0.76% can be a huge margin and could move you a long way up the leaderboard.
https://www.overfit.cn/post/a4a0f83dafad46f3b4135b5faf1ca85a
Author: Graham Harrison