Trying Out Elasticsearch ILM

Create a lifecycle management policy

The example from the official docs:

PUT _ilm/policy/datastream_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50GB",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

I threw together one of my own; all of this can also be done through the Kibana UI~

GET _ilm/policy

{
  "ljktest_policy" : {
    "version" : 1,
    "modified_date" : "2020-08-17T04:41:55.293Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "1mb",
              "max_age" : "1h",
              "max_docs" : 1
            }
          }
        },
        "delete" : {
          "min_age" : "4h",
          "actions" : {
            "delete" : { }
          }
        }
      }
    }
  }
}

Create an index template that uses the policy

The official example:

PUT _template/datastream_template
{
  "index_patterns": ["datastream-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "datastream_policy",
    "index.lifecycle.rollover_alias": "datastream"
  }
}

My own version:

PUT _template/ljktest_template
{
  "index_patterns": ["ljktest-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "ljktest_policy",
    "index.lifecycle.rollover_alias": "ljktest"
  }
}

Write documents to the index to test

PUT ljktest-000001
{
  "aliases": {
    "ljktest": {
      "is_write_index": true
    }
  }
}

POST ljktest/origin
{
  "name": "ljktest004"
}
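After writing a few documents, the ILM explain API can be used to check which phase each backing index is in and whether the rollover conditions have been evaluated yet (this check is my addition, not part of the original walkthrough):

GET ljktest-*/_ilm/explain

The response lists, per index, the current phase ("hot" or "delete" here), the current action, and the age of the index, which is helpful when rollover does not happen as expected.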

Some questions and points of confusion from testing

  1. The policy is set to roll over at 1 document, but after inserting several documents in a row, no new index was generated; ljktest-000001 already had 7 documents. Later on, by some rule I don't understand, it did generate an ljktest-000002 index.
  2. This approach works well when document IDs are unpredictable, i.e., for log files. But in a scenario where my data needs to be updated continually, and I also want to delete records older than 90 days based on some time field, this ILM policy doesn't seem to fit. For example, the two documents below are returned together by the same query, which isn't really reasonable~
     {
       "_index" : "ljktest-000001",
       "_type" : "origin",
       "_id" : "001",
       "_score" : 1.0,
       "_source" : {
         "name" : "ljktest006"
       }
     },
     {
       "_index" : "ljktest-000002",
       "_type" : "origin",
       "_id" : "001",
       "_score" : 1.0,
       "_source" : {
         "name" : "ljktest006"
       }
     },
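A likely explanation for the first question: ILM does not evaluate rollover conditions on every write. It polls them on a fixed schedule, controlled by the cluster setting indices.lifecycle.poll_interval, which defaults to 10 minutes, so an index can accumulate many documents past the threshold before the next check triggers the rollover. For experimentation the interval can be shortened; this is a test-only sketch, and such a low value should not be used in production:

PUT _cluster/settings
{
  "transient": {
    "indices.lifecycle.poll_interval": "1m"
  }
}

As for the second question, ILM rollover indeed assumes append-only data: once documents with the same _id land in different backing indices, duplicates like the two hits above are expected. A common workaround for update-heavy data with time-based expiry is to keep a single index and run a scheduled delete-by-query on the time field instead of relying on ILM.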