Elasticsearch Learning Notes 30: Relevance Scoring and the TF/IDF Algorithm


Elasticsearch's relevance score algorithm is based on term frequency/inverse document frequency, abbreviated as the TF/IDF algorithm.

Algorithm Overview

Put simply, the relevance score algorithm computes how closely the text stored in an index matches the search text.

The TF/IDF algorithm has two parts: TF and IDF.

Term Frequency (TF): how many times each term of the search text appears in a field's text. The more often a term appears, the more relevant the document.

For example:
Search request: hello world
doc1: hello you, and world is very good
doc2: hello, how are you
By the TF measure, doc1 is more relevant than doc2, because it contains both query terms.
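The TF intuition above can be sketched as a toy term counter. This is a simplified illustration (naive whitespace/punctuation tokenization), not Elasticsearch's actual analyzer:

```python
import re

def tf_score(query: str, doc: str) -> int:
    """Toy TF score: total occurrences of each query term in the document."""
    tokens = re.findall(r"\w+", doc.lower())
    return sum(tokens.count(term) for term in query.lower().split())

doc1 = "hello you, and world is very good"
doc2 = "hello, how are you"

# doc1 matches both "hello" and "world"; doc2 matches only "hello"
print(tf_score("hello world", doc1))  # 2
print(tf_score("hello world", doc2))  # 1
```

With more query-term occurrences, doc1 scores higher, matching the ranking described above.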

Inverse Document Frequency (IDF): how many times each term of the search text appears across all documents in the index. The more often a term appears index-wide, the less discriminative, and therefore less relevant, it is.

Search request: hello world
doc1: hello, today is very good.
doc2: hi world, how are you.
Suppose the index contains 10,000 documents, the word hello appears 1,000 times across all documents, and the word world appears 100 times. By the IDF measure, doc2 is then more relevant than doc1, because it matches the rarer term.
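Using the hypothetical document frequencies above (10,000 documents; hello in 1,000 of them, world in 100), the IDF intuition can be sketched with the textbook log(N/df) weighting. Note this is the classic formulation for illustration, not the exact Lucene formula shown later in the explain output:

```python
import math

N = 10_000                 # total documents in the index (assumed)
df = {"hello": 1_000,      # documents containing "hello"
      "world": 100}        # documents containing "world"

def idf(term: str) -> float:
    """Classic textbook IDF: rarer terms get a higher weight."""
    return math.log(N / df[term])

# doc1 matches only "hello"; doc2 matches only "world"
score_doc1 = idf("hello")  # log(10)  ~= 2.30
score_doc2 = idf("world")  # log(100) ~= 4.61
# The rarer term "world" is more discriminative, so doc2 ranks higher.
```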

Elasticsearch also applies a field-length norm.

Field-length norm: the longer the field, the weaker the relevance.
Search request: hello world
doc1: {"title": "hello article", "content": "10,000 words"}
doc2: {"title": "my article", "content": "10,000 words, hi world"}
Here hello and world each occur the same number of times in the index, but by the field-length norm doc1 ranks above doc2, because its match occurs in the much shorter title field.
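The field-length effect is visible directly in the explain output later in this article: using the tf formula it reports (with freq = 1, k1 = 1.2, b = 0.75, avgdl = 5.5), a field of length 4 scores higher than a field of length 7:

```python
def bm25_tf(freq: float, dl: float, avgdl: float,
            k1: float = 1.2, b: float = 0.75) -> float:
    """tf component from the explain output: shorter fields score higher."""
    return freq / (freq + k1 * (1 - b + b * dl / avgdl))

# values taken from the _explanation blocks of the two matching documents
print(bm25_tf(freq=1.0, dl=4.0, avgdl=5.5))  # ~0.5116279
print(bm25_tf(freq=1.0, dl=7.0, avgdl=5.5))  # ~0.40892193
```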

How _score is computed

GET /test_index/_search?explain=true
{
  "query": {
    "match": {"test_field": "hello"}
  }
}

{
  "took" : 9,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 2,
      "relation" : "eq"
    },
    "max_score" : 0.20521778,
    "hits" : [
      {"_shard" : "[test_index][0]",
        "_node" : "P-b-TEvyQOylMyEcMEhApQ",
        "_index" : "test_index",
        "_type" : "_doc",
        "_id" : "2",
        "_score" : 0.20521778,
        "_source" : {"test_field" : "hello, how are you"},
        "_explanation" : {
          "value" : 0.20521778,
          "description" : "weight(test_field:hello in 0) [PerFieldSimilarity], result of:",
          "details" : [
            {
              "value" : 0.20521778,
              "description" : "score(freq=1.0), product of:",
              "details" : [
                {
                  "value" : 2.2,
                  "description" : "boost",
                  "details" : []},
                {
                  "value" : 0.18232156,
                  "description" : "idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
                  "details" : [
                    {
                      "value" : 2,
                      "description" : "n, number of documents containing term",
                      "details" : []},
                    {
                      "value" : 2,
                      "description" : "N, total number of documents with field",
                      "details" : []}
                  ]
                },
                {
                  "value" : 0.5116279,
                  "description" : "tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
                  "details" : [
                    {
                      "value" : 1.0,
                      "description" : "freq, occurrences of term within document",
                      "details" : []},
                    {
                      "value" : 1.2,
                      "description" : "k1, term saturation parameter",
                      "details" : []},
                    {
                      "value" : 0.75,
                      "description" : "b, length normalization parameter",
                      "details" : []},
                    {
                      "value" : 4.0,
                      "description" : "dl, length of field",
                      "details" : []},
                    {
                      "value" : 5.5,
                      "description" : "avgdl, average length of field",
                      "details" : []}
                  ]
                }
              ]
            }
          ]
        }
      },
      {"_shard" : "[test_index][0]",
        "_node" : "P-b-TEvyQOylMyEcMEhApQ",
        "_index" : "test_index",
        "_type" : "_doc",
        "_id" : "1",
        "_score" : 0.16402164,
        "_source" : {"test_field" : "hello you, and world is very good"},
        "_explanation" : {
          "value" : 0.16402164,
          "description" : "weight(test_field:hello in 0) [PerFieldSimilarity], result of:",
          "details" : [
            {
              "value" : 0.16402164,
              "description" : "score(freq=1.0), product of:",
              "details" : [
                {
                  "value" : 2.2,
                  "description" : "boost",
                  "details" : []},
                {
                  "value" : 0.18232156,
                  "description" : "idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
                  "details" : [
                    {
                      "value" : 2,
                      "description" : "n, number of documents containing term",
                      "details" : []},
                    {
                      "value" : 2,
                      "description" : "N, total number of documents with field",
                      "details" : []}
                  ]
                },
                {
                  "value" : 0.40892193,
                  "description" : "tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
                  "details" : [
                    {
                      "value" : 1.0,
                      "description" : "freq, occurrences of term within document",
                      "details" : []},
                    {
                      "value" : 1.2,
                      "description" : "k1, term saturation parameter",
                      "details" : []},
                    {
                      "value" : 0.75,
                      "description" : "b, length normalization parameter",
                      "details" : []},
                    {
                      "value" : 7.0,
                      "description" : "dl, length of field",
                      "details" : []},
                    {
                      "value" : 5.5,
                      "description" : "avgdl, average length of field",
                      "details" : []}
                  ]
                }
              ]
            }
          ]
        }
      }
    ]
  }
}

Two documents matched. Below, a single document is used to work out the formulas Elasticsearch applies.
As shown above, the first document's relevance score is 0.20521778.

      {"_shard" : "[test_index][0]",
        "_node" : "P-b-TEvyQOylMyEcMEhApQ",
        "_index" : "test_index",
        "_type" : "_doc",
        "_id" : "2",
        "_score" : 0.20521778,
        "_source" : {"test_field" : "hello, how are you"},
        "_explanation" : {
          "value" : 0.20521778,
          "description" : "weight(test_field:hello in 0) [PerFieldSimilarity], result of:",
          "details" : [
            {
              "value" : 0.20521778,
              "description" : "score(freq=1.0), product of:",
              "details" : [
                {
                  "value" : 2.2,
                  "description" : "boost",
                  "details" : []},
                {
                  "value" : 0.18232156,
                  "description" : "idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
                  "details" : [
                    {
                      "value" : 2,
                      "description" : "n, number of documents containing term",
                      "details" : []},
                    {
                      "value" : 2,
                      "description" : "N, total number of documents with field",
                      "details" : []}
                  ]
                },
                {
                  "value" : 0.5116279,
                  "description" : "tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
                  "details" : [
                    {
                      "value" : 1.0,
                      "description" : "freq, occurrences of term within document",
                      "details" : []},
                    {
                      "value" : 1.2,
                      "description" : "k1, term saturation parameter",
                      "details" : []},
                    {
                      "value" : 0.75,
                      "description" : "b, length normalization parameter",
                      "details" : []},
                    {
                      "value" : 4.0,
                      "description" : "dl, length of field",
                      "details" : []},
                    {
                      "value" : 5.5,
                      "description" : "avgdl, average length of field",
                      "details" : []}
                  ]
                }
              ]
            }
          ]
        }
      }
      
From the explanation we can see that
_score = boost * idf * tf
Here boost = 2.2, idf = 0.18232156, and tf = 0.5116279. (Note: the formula shown, with the k1 and b parameters, is BM25, the default similarity in Elasticsearch since 5.0; BM25 is a refinement of classic TF/IDF.)

idf = log(1 + (N - n + 0.5) / (n + 0.5))
where n = 2 (the number of documents containing the term) and N = 2 (the total number of documents with the field).

tf = freq / (freq + k1 * (1 - b + b * dl / avgdl))
where freq = 1 (occurrences of the term within the document), k1 = 1.2 (term saturation parameter), b = 0.75 (length normalization parameter), dl = 4 (length of the field), and avgdl = 5.5 (average length of the field).
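Plugging these numbers into the formulas reproduces the reported _score (up to floating-point rounding):

```python
import math

# values taken from the _explanation block of document _id 2
boost, n, N = 2.2, 2, 2
freq, k1, b, dl, avgdl = 1.0, 1.2, 0.75, 4.0, 5.5

idf = math.log(1 + (N - n + 0.5) / (n + 0.5))       # ~0.18232156
tf = freq / (freq + k1 * (1 - b + b * dl / avgdl))  # ~0.5116279
score = boost * idf * tf                            # ~0.20521778

print(idf, tf, score)
```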
