1. Online installation (network required)

      Enter the container directly and install the plugin from inside it.

# Enter the container to run this, or execute elasticsearch-plugin from the bin directory of your Elasticsearch install
docker exec -it elasticsearch bash
# Install the IK analyzer plugin (from the GitHub releases page)
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip
# Or download through a third-party mirror for speed
elasticsearch-plugin install https://github.91chifun.workers.dev//https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip
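      As a quick sanity check (not part of the original steps), you can ask the plugin CLI to list what is installed; a minimal sketch, assuming you are still inside the container shell:

# Still inside the container: list installed plugins
elasticsearch-plugin list
# The output should include the IK plugin, typically named analysis-ik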

      Wait for the download to finish, then cd in and check whether the IK analyzer is there.

cd plugins/
ls

      If the ik directory is listed, the installation is complete; restart Elasticsearch and access it again (see the sketch below).
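      A minimal restart-and-verify sketch, assuming the container is named elasticsearch and port 9200 is published on localhost:

# Restart the container so Elasticsearch picks up the new plugin
docker restart elasticsearch
# Once the node is back up, confirm the plugin is loaded
curl localhost:9200/_cat/plugins
# Expected output includes a line like: <node-name> analysis-ik 6.7.0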

2. Offline installation

      First download the archive, then unzip it. If Elasticsearch is not installed in a container, just unzip it straight into the plugins directory (a sketch for that case follows the commands below).

# Download
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip
# Faster download via a third-party mirror
wget https://github.91chifun.workers.dev//https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip
# Unzip
mkdir ./ik
unzip elasticsearch-analysis-ik-6.7.0.zip -d ./ik/
# Copy into the container
docker cp ik elasticsearch:/usr/share/elasticsearch/plugins/
# Restart the ES node
docker restart elasticsearch
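      For the non-container case mentioned above, a minimal sketch, assuming Elasticsearch lives under /usr/share/elasticsearch (adjust the path to your install):

# Non-container install: unzip straight into the plugins directory
mkdir /usr/share/elasticsearch/plugins/ik
unzip elasticsearch-analysis-ik-6.7.0.zip -d /usr/share/elasticsearch/plugins/ik/
# Then restart Elasticsearch however your system manages it, e.g.:
systemctl restart elasticsearch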

3. Testing

      We test with Kibana, or by sending the request directly (a curl version follows the queries below).

# ik_max_word (finest-grained segmentation; yields more tokens, more precise search)
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}
# ik_smart (splits the text into phrases; fewer tokens)
GET _analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}
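      Without Kibana, the same request can be sent over HTTP; a sketch assuming the node listens on localhost:9200:

curl -X GET "localhost:9200/_analyze" -H 'Content-Type: application/json' -d'
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}'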

      If the response looks like the following, the installation succeeded.

{  "tokens" : [    {      "token" : "我",      "start_offset" : 0,      "end_offset" : 1,      "type" : "CN_CHAR",      "position" : 0    },    {      "token" : "是",      "start_offset" : 1,      "end_offset" : 2,      "type" : "CN_CHAR",      "position" : 1    },    {      "token" : "中国人",      "start_offset" : 2,      "end_offset" : 5,      "type" : "CN_WORD",      "position" : 2    },    {      "token" : "中国",      "start_offset" : 2,      "end_offset" : 4,      "type" : "CN_WORD",      "position" : 3    },    {      "token" : "国人",      "start_offset" : 3,      "end_offset" : 5,      "type" : "CN_WORD",      "position" : 4    }  ]}