Whether Redis uses the LRU or the LFU algorithm, when it actually deletes the evicted data it consults the server's lazyfree-lazy-eviction configuration option to decide whether to apply Lazy Free, i.e. lazy deletion.

(1) What is lazy free

Lazy free (lazy deletion) is a feature introduced in Redis 4.0: it uses a background thread to perform the actual work of deleting data.

(2) Why use lazy free

It avoids blocking the main thread with delete operations.

https://github.com/redis/redi...

(3) How to use lazy free

(3.1) Lazy free configuration

When a Redis server needs to enable lazy free, the related options must be set in the redis.conf configuration file.
There are four such options, corresponding to the following four scenarios.
lazyfree-lazy-eviction: deleting data evicted under the maxmemory policy (cache eviction).
lazyfree-lazy-expire: deleting expired keys.
lazyfree-lazy-server-del: server commands that implicitly perform a delete.
replica-lazy-flush: a replica deleting its old dataset after completing a full resynchronization.

All four options default to no. So, to enable lazy free during cache eviction, lazyfree-lazy-eviction must be set to yes.
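
For example, to apply lazy free in all four scenarios, the corresponding lines in redis.conf would look like the following (a minimal sketch based on the option names listed above; in Redis 4.x the last option was still called slave-lazy-flush, so check the redis.conf shipped with your version):

# redis.conf -- all four lazy-free options default to "no"
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes

On recent Redis versions these options can also be changed at runtime, e.g. CONFIG SET lazyfree-lazy-eviction yes, and the number of objects still waiting to be freed in the background is reported as lazyfree_pending_objects in the output of INFO memory (both worth verifying against the version you run).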

(4) How lazy free works

(4.1) How evicted data is deleted

The freeMemoryIfNeeded function (in evict.c) is responsible for the data-eviction flow.
After it has selected the key-value pairs to evict, it starts deleting them; this deletion happens in two main steps.

Step 1: freeMemoryIfNeeded creates a string object (robj) for the evicted key and then calls propagateExpire.
Step 2: depending on whether the server has lazy free enabled, freeMemoryIfNeeded performs either an asynchronous delete (dbAsyncDelete) or a synchronous delete (dbSyncDelete).

// file: src/evict.c

/* This function is periodically called to see if there is memory to free
 * according to the current "maxmemory" settings. In case we are over the
 * memory limit, the function will try to free some memory to return back
 * under the limit.
 *
 * The function returns C_OK if we are under the memory limit or if we
 * were over the limit, but the attempt to free memory was successful.
 * Otherwise if we are over the memory limit, but not enough memory
 * was freed to return back under the limit, the function returns C_ERR. */
int freeMemoryIfNeeded(void) {
    int keys_freed = 0;
    // ... some code omitted ...

    // memory freed so far < memory we planned to free
    while (mem_freed < mem_tofree) {
        int j, k, i;
        // the key selected for eviction (sds string)
        sds bestkey = NULL;
        // ... some code omitted ...

        // finally remove the key selected for eviction
        if (bestkey) {
            // select the corresponding db
            db = server.db+bestdbid;
            // create a redisObject for the key
            robj *keyobj = createStringObject(bestkey,sdslen(bestkey));
            // propagate the deletion (DEL/UNLINK) to replicas and the AOF
            propagateExpire(db,keyobj,server.lazyfree_lazy_eviction);
            /* We compute the amount of memory freed by db*Delete() alone.
             * It is possible that actually the memory needed to propagate
             * the DEL in AOF and replication link is greater than the one
             * we are freeing removing the key, but we can't account for
             * that otherwise we would never exit the loop.
             *
             * Same for CSC invalidation messages generated by signalModifiedKey.
             *
             * AOF and Output buffer memory will be freed eventually so
             * we only care about memory used by the key space. */
            delta = (long long) zmalloc_used_memory();
            latencyStartMonitor(eviction_latency);
            // lazy free enabled?
            if (server.lazyfree_lazy_eviction)
                dbAsyncDelete(db,keyobj);  // asynchronous delete
            else
                dbSyncDelete(db,keyobj);   // synchronous delete
            latencyEndMonitor(eviction_latency);
            latencyAddSampleIfNeeded("eviction-del",eviction_latency);
            delta -= (long long) zmalloc_used_memory();
            mem_freed += delta;
            server.stat_evictedkeys++;
            signalModifiedKey(NULL,db,keyobj);
            notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted",
                keyobj, db->id);
            decrRefCount(keyobj);
            keys_freed++;

            /* When the memory to free starts to be big enough, we may
             * start spending so much time here that is impossible to
             * deliver data to the slaves fast enough, so we force the
             * transmission here inside the loop. */
            if (slaves) flushSlavesOutputBuffers();

            /* Normally our stop condition is the ability to release
             * a fixed, pre-computed amount of memory. However when we
             * are deleting objects in another thread, it's better to
             * check, from time to time, if we already reached our target
             * memory, since the "mem_freed" amount is computed only
             * across the dbAsyncDelete() call, while the thread can
             * release the memory all the time. */
            if (server.lazyfree_lazy_eviction && !(keys_freed % 16)) {
                if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {
                    /* Let's satisfy our stop condition. */
                    mem_freed = mem_tofree;
                }
            }
        } else {
            goto cant_free; /* nothing to free... */
        }
    }
    result = C_OK;

cant_free:
    /* We are here if we are not able to reclaim memory. There is only one
     * last thing we can try: check if the lazyfree thread has jobs in queue
     * and wait... */
    if (result != C_OK) {
        latencyStartMonitor(lazyfree_latency);
        while(bioPendingJobsOfType(BIO_LAZY_FREE)) {
            if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {
                result = C_OK;
                break;
            }
            usleep(1000);
        }
        latencyEndMonitor(lazyfree_latency);
        latencyAddSampleIfNeeded("eviction-lazyfree",lazyfree_latency);
    }
    latencyEndMonitor(latency);
    latencyAddSampleIfNeeded("eviction-cycle",latency);
    return result;
}

(4.1.1) Propagating the expired key - propagateExpire

// file: src/db.c

/*
 * Propagate expired keys to replicas and the AOF file.
 * When a key expires on the master, a DEL operation for this key is sent
 * to all replicas and to the AOF file (if AOF is enabled).
 *
 * This way key expiration is centralized in one place, and since both the
 * AOF and the master->replica link guarantee operation ordering, everything
 * stays consistent even if write operations against expiring keys are allowed.
 *
 * @param *db   redisDb
 * @param *key  the expired key (as a redisObject)
 * @param lazy  whether lazy free is enabled
 */
void propagateExpire(redisDb *db, robj *key, int lazy) {
    robj *argv[2];

    // propagate UNLINK when lazy free is enabled, otherwise DEL
    argv[0] = lazy ? shared.unlink : shared.del;
    argv[1] = key;
    // refcount +1
    incrRefCount(argv[0]);
    // refcount +1
    incrRefCount(argv[1]);

    // if AOF is enabled
    if (server.aof_state != AOF_OFF)
        feedAppendOnlyFile(server.delCommand,db->id,argv,2);  // append the delete command to the AOF buffer

    // propagate the delete to replicas
    replicationFeedSlaves(server.slaves,db->id,argv,2);

    // refcount -1
    decrRefCount(argv[0]);
    // refcount -1
    decrRefCount(argv[1]);
}
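
So when lazyfree-lazy-eviction is enabled, what replicas and the AOF receive is an UNLINK rather than a DEL. As a rough illustration only (assuming the evicted key is named mykey and no SELECT needs to be emitted first), the RESP-encoded command appended to the AOF buffer and sent to replicas would look roughly like:

*2
$6
UNLINK
$5
mykey

With lazy free disabled, the same position would carry DEL instead; either way, replicas and the AOF stay consistent with the master's eviction decision.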

(4.1.2) Lazy deletion - dbAsyncDelete

// file: src/evict.c

int freeMemoryIfNeeded(void) {
    // ... some code omitted ...

            // lazy free enabled?
            if (server.lazyfree_lazy_eviction)
                dbAsyncDelete(db,keyobj);  // asynchronous delete
            else
                dbSyncDelete(db,keyobj);   // synchronous delete

    // ... some code omitted ...
}


(4.2) Asynchronous data deletion - dbAsyncDelete

// file: src/lazyfree.c

/* Delete a key, its value, and its associated expire entry (if any) from
 * the database. If there are enough allocations to free, the value object
 * may be put into the lazy free list instead of being freed synchronously.
 * The lazy free list will be reclaimed in a different bio.c thread. */
#define LAZYFREE_THRESHOLD 64

/*
 * Asynchronous delete
 *
 * @param *db
 * @param *key
 */
int dbAsyncDelete(redisDb *db, robj *key) {
    /* Deleting the entry from the expires dict will not free the key's SDS,
     * because it is shared with the main dictionary.
     * (The key is removed in two steps: first the expires entry (dictEntry),
     * then the entry in the main dict.) */
    if (dictSize(db->expires) > 0) dictDelete(db->expires,key->ptr);

    /* If the value is composed of a few allocations, freeing it in a lazy
     * way is actually just slower... so under a certain limit we just free
     * the object synchronously. */
    // unlink the entry from the main dict without freeing it yet
    dictEntry *de = dictUnlink(db->dict,key->ptr);
    // if the entry exists
    if (de) {
        // get the entry's value
        robj *val = dictGetVal(de);
        // estimate how much work freeing this value would take
        size_t free_effort = lazyfreeGetFreeEffort(val);

        /* If releasing the object is too much work, do it in the background
         * by adding the object to the lazy free list.
         * Note that if the object is shared, to reclaim it now it is not
         * possible. This rarely happens, however sometimes the implementation
         * of parts of the Redis core may call incrRefCount() to protect
         * objects, and then call dbDelete(). In this case we'll fall
         * through and reach the dictFreeUnlinkedEntry() call, that will be
         * equivalent to just calling decrRefCount(). */
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects,1);
            bioCreateBackgroundJob(BIO_LAZY_FREE,val,NULL,NULL);
            dictSetVal(db->dict,de,NULL);
        }
    }

    /* Release the key-val pair, or just the key if we set the val
     * field to NULL in order to lazy free it later. */
    if (de) {
        dictFreeUnlinkedEntry(db->dict,de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    } else {
        return 0;
    }
}
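
When the object crosses the threshold, dbAsyncDelete only detaches it from the keyspace and hands it to a background thread via bioCreateBackgroundJob (implemented in bio.c, which is not shown in this article). To make the hand-off idea concrete, here is a minimal standalone sketch using plain pthreads -- an illustration of the idea only, not Redis's actual bio.c -- where the caller just queues the value and a worker thread pays the cost of free():

// lazyfree_sketch.c -- illustration only, not Redis source.
// Build: gcc lazyfree_sketch.c -o sketch -lpthread
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct job { void *val; struct job *next; } job;

static job *head = NULL, *tail = NULL;          // FIFO queue of values to free
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* Called on the "main thread": O(1), just enqueue the value and return. */
static void lazy_free(void *val) {
    job *j = malloc(sizeof(*j));
    j->val = val;
    j->next = NULL;
    pthread_mutex_lock(&lock);
    if (tail) tail->next = j; else head = j;
    tail = j;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

/* Background worker: pops jobs and performs the (possibly expensive) free. */
static void *bio_worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == NULL) pthread_cond_wait(&cond, &lock);
        job *j = head;
        head = j->next;
        if (head == NULL) tail = NULL;
        pthread_mutex_unlock(&lock);

        if (j->val == NULL) { free(j); break; }  // NULL value = shutdown sentinel
        free(j->val);                            // the actual release happens off the main thread
        free(j);
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, bio_worker, NULL);

    lazy_free(malloc(100 * 1024 * 1024));        // caller is not blocked by this free
    printf("main thread: free handed off to the background worker\n");

    lazy_free(NULL);                             // ask the worker to stop
    pthread_join(tid, NULL);
    return 0;
}

In Redis the equivalent queue lives in bio.c with a dedicated BIO_LAZY_FREE worker; the sketch only shows why the caller's latency no longer depends on the size of the value being freed.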
(4.2.1) Estimating the free cost - lazyfreeGetFreeEffort

// file: src/lazyfree.c

/* Return the amount of work needed in order to free an object.
 * The return value is not always the actual number of allocations the
 * object is composed of, but a number proportional to it.
 *
 * For strings the function always returns 1.
 *
 * For aggregated objects represented by hash tables or other data structures
 * the function just returns the number of elements the object is composed of.
 *
 * Objects composed of single allocations are always reported as having a
 * single item even if they are actually logical composed of multiple
 * elements.
 *
 * For lists the function returns the number of elements in the quicklist
 * representing the list. */
size_t lazyfreeGetFreeEffort(robj *obj) {
    if (obj->type == OBJ_LIST) {
        quicklist *ql = obj->ptr;
        return ql->len;
    } else if (obj->type == OBJ_SET && obj->encoding == OBJ_ENCODING_HT) {
        dict *ht = obj->ptr;
        return dictSize(ht);
    } else if (obj->type == OBJ_ZSET && obj->encoding == OBJ_ENCODING_SKIPLIST){
        zset *zs = obj->ptr;
        return zs->zsl->length;
    } else if (obj->type == OBJ_HASH && obj->encoding == OBJ_ENCODING_HT) {
        dict *ht = obj->ptr;
        return dictSize(ht);
    } else if (obj->type == OBJ_STREAM) {
        size_t effort = 0;
        stream *s = obj->ptr;

        /* Make a best effort estimate to maintain constant runtime. Every macro
         * node in the Stream is one allocation. */
        effort += s->rax->numnodes;

        /* Every consumer group is an allocation and so are the entries in its
         * PEL. We use size of the first group's PEL as an estimate for all
         * others. */
        if (s->cgroups && raxSize(s->cgroups)) {
            raxIterator ri;
            streamCG *cg;
            raxStart(&ri,s->cgroups);
            raxSeek(&ri,"^",NULL,0);
            /* There must be at least one group so the following should always
             * work. */
            serverAssert(raxNext(&ri));
            cg = ri.data;
            effort += raxSize(s->cgroups)*(1+raxSize(cg->pel));
            raxStop(&ri);
        }
        return effort;
    } else {
        return 1; /* Everything else is a single allocation. */
    }
}
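
Putting dbAsyncDelete and lazyfreeGetFreeEffort together: only objects whose estimated effort exceeds LAZYFREE_THRESHOLD (64) and that are not shared (refcount == 1) are handed to the background thread; everything else is freed synchronously. A tiny standalone illustration of that decision (not Redis source; the effort values in the comments follow the rules of lazyfreeGetFreeEffort above):

// threshold_sketch.c -- illustration of the async/sync decision only
#include <stdio.h>
#include <stddef.h>

#define LAZYFREE_THRESHOLD 64   // same constant as in lazyfree.c

/* Mirrors the condition used in dbAsyncDelete(): big enough and not shared. */
static const char *free_mode(size_t free_effort, int refcount) {
    return (free_effort > LAZYFREE_THRESHOLD && refcount == 1)
        ? "async (background thread)" : "sync (main thread)";
}

int main(void) {
    // a plain string: effort is always 1
    printf("string:                  %s\n", free_mode(1, 1));
    // a hashtable-encoded hash with 1000 fields: effort = number of fields
    printf("hash, 1000 fields:       %s\n", free_mode(1000, 1));
    // a list whose quicklist has 3 nodes: effort = 3
    printf("list, 3 quicklist nodes: %s\n", free_mode(3, 1));
    // a big but shared object (refcount > 1) is still freed synchronously
    printf("big but shared object:   %s\n", free_mode(1000, 2));
    return 0;
}

So strings and small collections are always released synchronously; lazy free only kicks in for large aggregate objects, which is exactly where a synchronous free would stall the main thread.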

(5) References

https://weikeqin.com/tags/redis/

Redis源码分析与实战 (study notes), Day 17: Does Lazy Free affect cache replacement?
https://time.geekbang.org/col...