1 Introduction
A categorical feature whose values take on very many distinct levels is called a high-cardinality categorical feature. In deep learning, categorical features are usually handled with embeddings: the category values are encoded into vectors that are either pre-trained or learned during training. In classical machine learning, ordinal categorical features can be encoded with LabelEncoder, and low-cardinality nominal features (in LightGBM, by default, categorical features with no more than 4 distinct values) can be encoded with OneHotEncoder. For high-cardinality nominal features, however, applying OneHotEncoder directly makes the features extremely sparse in today's best-performing tree models (GBDT, XGBoost, LightGBM) and brings on the curse of dimensionality. Clustering the category values into groups before one-hot encoding does reduce the dimensionality, but the grouping step requires strong domain knowledge. This article introduces a preprocessing method that is very effective for high-cardinality nominal categorical features: mean encoding. In many data-mining competitions, contestants have achieved excellent results with this method.
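To see the dimensionality problem concretely, the short sketch below (an illustration added here, not from the original article) one-hot encodes a hypothetical high-cardinality column with pandas; 10,000 distinct ids turn a single column into 10,000 mostly-zero columns:

import pandas as pd

# A hypothetical user-id column with 10,000 distinct values.
cat = pd.Series([f"user_{i}" for i in range(10_000)], name="user_id")
onehot = pd.get_dummies(cat)
print(onehot.shape)  # (10000, 10000): one extremely sparse column per category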
2 Principle
Mean encoding, sometimes also called target encoding, is a supervised encoding method based on target statistics. Following a Bayesian idea, it uses the weighted average of the prior probability and the posterior probability as the encoded value of each category, and it applies to both classification and regression. The mean-encoding formula is:

$$\text{encoding}(k) = \lambda(n) \cdot prior + (1 - \lambda(n)) \cdot posterior$$
where:
1. $prior$ is the prior probability; in classification it is the probability that a sample belongs to a given class $y_i$:

$$prior = P(y = y_i) = \frac{n_{y_i}}{n_y}$$

where $n_{y_i}$ is the number of samples with $y = y_i$ and $n_y$ is the total number of samples. In regression, the prior is the mean of the target variable:

$$prior = \bar{y} = \frac{1}{n_y} \sum_{j=1}^{n_y} y_j$$
2. $posterior$ is the posterior probability; in classification it is the probability that a sample belongs to class $y_i$ given that the categorical feature takes the value $k$:

$$posterior = P(y = y_i \mid x = k) = \frac{n_{k,\,y_i}}{n_k}$$

where $n_k$ is the number of samples whose feature value is $k$ and $n_{k,\,y_i}$ is the number of those samples with $y = y_i$. In regression, the posterior is the mean of the target variable over the samples whose feature value is $k$.
3. $\lambda$ is the weight function. The formula used in this article is a transformed version of the one in the original paper and is monotonically decreasing:

$$\lambda(n) = \frac{1}{1 + e^{(n - k)/f}}$$

Its input $n$ is the number of times the feature category appears in the training set, and it has two parameters:
① k: the minimum threshold. When n = k, λ = 0.5 and the prior and posterior are weighted equally; when n < k, λ > 0.5 and the prior carries more weight.
② f: the smoothing factor, which controls the slope of the weight function at its inflection point; the larger f is, the gentler the curve. Below is the effect of different values of f on the weight function when k = 1:
As the figure shows, the larger f is, the flatter the S-shaped weight curve and the stronger the regularization effect.
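The figure is easy to reproduce with a few lines of matplotlib; the values of f below are illustrative choices, not ones prescribed by the article:

import numpy as np
import matplotlib.pyplot as plt

n = np.linspace(0, 10, 200)
for f in [0.5, 1, 2, 4]:
    lam = 1 / (1 + np.exp((n - 1) / f))  # weight function lambda(n) with k = 1
    plt.plot(n, lam, label=f"f = {f}")
plt.xlabel("n (category count in training set)")
plt.ylabel("lambda(n)")
plt.legend()
plt.show()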
For classification, when computing the posterior, a target variable with C classes yields C posterior probabilities, which satisfy

$$\sum_{i=1}^{C} P(y = y_i \mid x = k) = 1$$

so the probability for any one $y_i$ is necessarily linearly related to the others. To avoid multicollinearity, mean encoding therefore adds C - 1 new feature columns to the dataset. For regression, it adds a single new column.
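To make the formula concrete before turning to the library, here is a minimal from-scratch sketch for a binary target; mean_encode is a helper written for this illustration (not part of the original article or any library), taking a pandas Series of categories x and a 0/1 target Series y:

import numpy as np
import pandas as pd

def mean_encode(x: pd.Series, y: pd.Series, k: int = 2, f: float = 1.0) -> pd.Series:
    prior = y.mean()                                  # prior: P(y = 1)
    stats = y.groupby(x).agg(['mean', 'count'])       # posterior and n per category
    lam = 1 / (1 + np.exp((stats['count'] - k) / f))  # weight function lambda(n)
    encoding = lam * prior + (1 - lam) * stats['mean']
    return x.map(encoding)                            # look up each sample's category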
3 Practice
Mean encoding can encode not only a single categorical feature but also categorical features with a hierarchical structure. Take region features as an example: a country contains provinces, a province contains cities, and a city contains blocks. At the block level, each value covers so few samples that every block's encoded value would end up close to the prior. Mean encoding addresses this by incorporating prior information from the different levels of the hierarchy. Both scenarios are worked through below on a classification problem:
1. Encoding a single categorical feature:
In practice you can use the category_encoders package. The code is as follows:
import pandas as pd
from category_encoders import TargetEncoder

df = pd.DataFrame({'cat': ['a', 'b', 'a', 'b', 'a', 'a', 'b', 'c', 'c', 'd'],
                   'target': [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]})
te = TargetEncoder(cols=["cat"], min_samples_leaf=2, smoothing=1)
te.fit(df[["cat"]], df["target"])  # the encoder must be fitted before transform
df["cat_encode"] = te.transform(df[["cat"]])["cat"]
print(df)
# The result is as follows:
#   cat  target  cat_encode
0    a       1    0.279801
1    b       0    0.621843
2    a       0    0.279801
3    b       1    0.621843
4    a       0    0.279801
5    a       0    0.279801
6    b       1    0.621843
7    c       1    0.500000
8    c       0    0.500000
9    d       1    0.634471
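As a sanity check, the value for category a follows from the formula above: a appears n = 4 times, prior = 5/10 = 0.5, posterior = 1/4 = 0.25, and λ = 1/(1 + e^{(4 - 2)/1}) ≈ 0.1192, so the encoding is 0.1192 × 0.5 + 0.8808 × 0.25 ≈ 0.2798, which matches the cat_encode column (category_encoders parameterizes the same S-curve internally, with min_samples_leaf playing the role of k and smoothing the role of f).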
2. Encoding a categorical feature with a hierarchical structure:
For the dataset below, the compass-direction feature has the hierarchy {'N': ('N', 'NE'), 'S': ('S', 'SE'), 'W': 'W'}. Take category NE in compass as an example and compute its encoded value for $y_i = 1$, k = 2, f = 2. The prior $p_1$ is the encoded value of the parent category N in HIER_compass_1, computed as in the single-feature case: N covers 5 samples with posterior = 4/5 = 0.8, the global prior is 8/16 = 0.5, and λ = 1/(1 + e^{(5 - 2)/2}) ≈ 0.18243, giving $p_1$ = 0.18243 × 0.5 + 0.81757 × 0.8 ≈ 0.74527. For NE itself, posterior = 3/3 = 1 and λ = 1/(1 + e^{(3 - 2)/2}) ≈ 0.37754, so the encoded value of NE is 0.37754 × 0.74527 + (1 - 0.37754) × 1 ≈ 0.90383.
The code is as follows:
from category_encoders import TargetEncoder
from category_encoders.datasets import load_compass

X, y = load_compass()
# The hierarchy parameter accepts either a dict or a dataframe.
# Dict form:
hierarchical_map = {'compass': {'N': ('N', 'NE'), 'S': ('S', 'SE'), 'W': 'W'}}
te = TargetEncoder(verbose=2, hierarchy=hierarchical_map, cols=['compass'], smoothing=2, min_samples_leaf=2)
# Dataframe form: the HIER_ columns are ordered from the top of the hierarchy down
HIER_cols = ['HIER_compass_1']
te = TargetEncoder(verbose=2, hierarchy=X[HIER_cols], cols=['compass'], smoothing=2, min_samples_leaf=2)
te.fit(X.loc[:, ['compass']], y)
X["compass_encode"] = te.transform(X.loc[:, ['compass']])
X["label"] = y
print(X)
# The result is as follows; compass_encode is the encoded column:
#     index compass HIER_compass_1  compass_encode  label
0         1       N              N        0.622636      1
1         2       N              N        0.622636      0
2         3      NE              N        0.903830      1
3         4      NE              N        0.903830      1
4         5      NE              N        0.903830      1
5         6      SE              S        0.176600      0
6         7      SE              S        0.176600      0
7         8       S              S        0.460520      1
8         9       S              S        0.460520      0
9        10       S              S        0.460520      1
10       11       S              S        0.460520      0
11       12       W              W        0.403328      1
12       13       W              W        0.403328      0
13       14       W              W        0.403328      0
14       15       W              W        0.403328      0
15       16       W              W        0.403328      1
Notes:
Mean encoding can easily lead to overfitting. The following methods help prevent it:
- Increase the regularization strength f
- Use k-fold cross-validation
Below is a self-implemented k-fold cross-validation version of mean encoding. It can be used in binary classification, multi-class classification, and regression scenarios to encode either a single categorical feature or one with a hierarchical structure. In this version, unknown categories and missing values are encoded with the prior.
import numpy as np
import pandas as pd
from itertools import product
from category_encoders import TargetEncoder
from sklearn.model_selection import StratifiedKFold, KFold


class MeanEncoder:
    def __init__(self, categorical_features, n_splits=5, target_type='classification',
                 min_samples_leaf=2, smoothing=1, hierarchy=None, verbose=0, shuffle=False,
                 random_state=None):
        """
        Parameters
        ----------
        categorical_features: list of str
            the name of the categorical columns to encode.
        n_splits: int
            the number of splits used in mean encoding.
        target_type: str
            'regression' or 'classification'.
        min_samples_leaf: int
            For regularization the weighted average between category mean and global mean is taken. The weight is
            an S-shaped curve between 0 and 1 with the number of samples for a category on the x-axis.
            The curve reaches 0.5 at min_samples_leaf. (parameter k in the original paper)
        smoothing: float
            smoothing effect to balance categorical average vs prior. Higher value means stronger regularization.
            The value must be strictly bigger than 0. Higher values mean a flatter S-curve (see min_samples_leaf).
        hierarchy: dict or dataframe
            A dictionary or a dataframe to define the hierarchy for mapping.
            If a dictionary, this contains a dict of columns to map into hierarchies. Dictionary key(s) should be the column name from X
            which requires mapping. For multiple hierarchical maps, this should be a dictionary of dictionaries.
            If dataframe: a dataframe defining columns to be used for the hierarchies. Column names must take the form:
            HIER_colA_1, ... HIER_colA_N, HIER_colB_1, ... HIER_colB_M, ...
            where [colA, colB, ...] are given columns in cols list.
            1:N and 1:M define the hierarchy for each column where 1 is the highest hierarchy (top of the tree). A single column or multiple
            can be used, as relevant.
        verbose: int
            integer indicating verbosity of the output. 0 for none.
        shuffle : bool, default=False
        random_state : int or RandomState instance, default=None
            When `shuffle` is True, `random_state` affects the ordering of the
            indices, which controls the randomness of each fold for each class.
            Otherwise, leave `random_state` as `None`.
            Pass an int for reproducible output across multiple function calls.
        """
        self.categorical_features = categorical_features
        self.n_splits = n_splits
        self.learned_stats = {}
        self.min_samples_leaf = min_samples_leaf
        self.smoothing = smoothing
        self.hierarchy = hierarchy
        self.verbose = verbose
        self.shuffle = shuffle
        self.random_state = random_state
        if target_type == 'classification':
            self.target_type = target_type
            self.target_values = []
        else:
            self.target_type = 'regression'
            self.target_values = None

    def mean_encode_subroutine(self, X_train, y_train, X_test, variable, target):
        # Encode one variable on one fold: fit a TargetEncoder on the large split
        # and apply the learned per-category statistics to the small split.
        X_train = X_train[[variable]].copy()
        X_test = X_test[[variable]].copy()
        if target is not None:
            nf_name = '{}_pred_{}'.format(variable, target)
            X_train['pred_temp'] = (y_train == target).astype(int)  # classification
        else:
            nf_name = '{}_pred'.format(variable)
            X_train['pred_temp'] = y_train  # regression
        prior = X_train['pred_temp'].mean()
        te = TargetEncoder(verbose=self.verbose, hierarchy=self.hierarchy,
                           cols=[variable], smoothing=self.smoothing,
                           min_samples_leaf=self.min_samples_leaf)
        te.fit(X_train[[variable]], X_train['pred_temp'])
        # Recover the category -> encoding lookup table from the fitted encoder:
        # tmp_l maps raw category values to ordinal codes, tmp_r maps ordinal
        # codes to encoded values; merging the two yields the full mapping.
        tmp_l = te.ordinal_encoder.mapping[0]["mapping"].reset_index()
        tmp_l.rename(columns={"index": variable, 0: "encode"}, inplace=True)
        tmp_l.dropna(inplace=True)
        tmp_r = te.mapping[variable].reset_index()
        if self.hierarchy is None:
            tmp_r.rename(columns={variable: "encode", 0: nf_name}, inplace=True)
        else:
            tmp_r.rename(columns={"index": "encode", 0: nf_name}, inplace=True)
        col_avg_y = pd.merge(tmp_l, tmp_r, how="left", on=["encode"])
        col_avg_y.drop(columns=["encode"], inplace=True)
        col_avg_y.set_index(variable, inplace=True)
        nf_train = X_train.join(col_avg_y, on=variable)[nf_name].values
        # Categories unseen in the large split fall back to the prior.
        nf_test = X_test.join(col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name].values
        return nf_train, nf_test, prior, col_avg_y

    def fit(self, X, y):
        """
        :param X: pandas DataFrame, n_samples * n_features
        :param y: pandas Series or numpy array, n_samples
        :return X_new: the transformed pandas DataFrame containing mean-encoded categorical features
        """
        X_new = X.copy()
        if self.target_type == 'classification':
            skf = StratifiedKFold(self.n_splits, shuffle=self.shuffle, random_state=self.random_state)
        else:
            skf = KFold(self.n_splits, shuffle=self.shuffle, random_state=self.random_state)
        if self.target_type == 'classification':
            self.target_values = sorted(set(y))
            self.learned_stats = {'{}_pred_{}'.format(variable, target): [] for variable, target in
                                  product(self.categorical_features, self.target_values)}
            for variable, target in product(self.categorical_features, self.target_values):
                nf_name = '{}_pred_{}'.format(variable, target)
                X_new.loc[:, nf_name] = np.nan
                # Each sample receives its out-of-fold encoding.
                for large_ind, small_ind in skf.split(y, y):
                    nf_large, nf_small, prior, col_avg_y = self.mean_encode_subroutine(
                        X_new.iloc[large_ind], y.iloc[large_ind], X_new.iloc[small_ind], variable, target)
                    X_new.iloc[small_ind, -1] = nf_small
                    self.learned_stats[nf_name].append((prior, col_avg_y))
        else:
            self.learned_stats = {'{}_pred'.format(variable): [] for variable in self.categorical_features}
            for variable in self.categorical_features:
                nf_name = '{}_pred'.format(variable)
                X_new.loc[:, nf_name] = np.nan
                for large_ind, small_ind in skf.split(y, y):
                    nf_large, nf_small, prior, col_avg_y = self.mean_encode_subroutine(
                        X_new.iloc[large_ind], y.iloc[large_ind], X_new.iloc[small_ind], variable, None)
                    X_new.iloc[small_ind, -1] = nf_small
                    self.learned_stats[nf_name].append((prior, col_avg_y))
        return X_new

    def transform(self, X):
        """
        :param X: pandas DataFrame, n_samples * n_features
        :return X_new: the transformed pandas DataFrame containing mean-encoded categorical features
        """
        X_new = X.copy()
        if self.target_type == 'classification':
            for variable, target in product(self.categorical_features, self.target_values):
                nf_name = '{}_pred_{}'.format(variable, target)
                X_new[nf_name] = 0
                # Average the per-fold statistics learned during fit.
                for prior, col_avg_y in self.learned_stats[nf_name]:
                    X_new[nf_name] += X_new[[variable]].join(col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name]
                X_new[nf_name] /= self.n_splits
        else:
            for variable in self.categorical_features:
                nf_name = '{}_pred'.format(variable)
                X_new[nf_name] = 0
                for prior, col_avg_y in self.learned_stats[nf_name]:
                    X_new[nf_name] += X_new[[variable]].join(col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name]
                X_new[nf_name] /= self.n_splits
        return X_new
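A minimal usage sketch (added here for illustration, not from the original article), reusing the toy frame from the single-feature example; n_splits=2 is chosen only so that each stratified fold stays populated on 10 rows:

df = pd.DataFrame({'cat': ['a', 'b', 'a', 'b', 'a', 'a', 'b', 'c', 'c', 'd'],
                   'target': [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]})
me = MeanEncoder(categorical_features=['cat'], n_splits=2,
                 target_type='classification', min_samples_leaf=2, smoothing=1)
train_encoded = me.fit(df[['cat']], df['target'])  # out-of-fold encodings for the training data
test_encoded = me.transform(df[['cat']])           # fold-averaged statistics for new data
print(train_encoded)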
4 Summary
This article introduced mean encoding, a highly effective encoding method for high-cardinality categorical features. It explained the principle behind the method and how to effectively prevent overfitting in real-world engineering, and it provided a ready-to-use implementation.
Author: Zhao Fenglong, JD Insurance
Source: JD Cloud developer community. Please credit the source when reposting.