Redis's execution model describes the processes, child processes, and threads Redis uses at runtime, and the work each of them is responsible for.
A question that comes up again and again: is Redis actually a single-threaded program?
Let's first look at the processes involved when the Redis server starts.
(1) Redis process creation
When starting a Redis instance, running
./redis-server ../redis.conf
actually invokes the fork system call; the resulting new process then enters the Redis server's main function.
After running the Redis server, we can see its startup log printed to the terminal:
[weikeqin@bogon src]$ ./redis-server
77405:C 27 Jan 2023 22:11:02.194 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
77405:C 27 Jan 2023 22:11:02.195 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=77405, just started
77405:C 27 Jan 2023 22:11:02.195 # Warning: no config file specified, using the default config. In order to specify a config file use ./redis-server /path/to/redis.conf
77405:M 27 Jan 2023 22:11:02.197 * Increased maximum number of open files to 10032 (it was originally set to 256).
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 6.0.9 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
(' , .-` | `,) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 77405
`-._ `-._ `-./ _.-'_.-'
|`-._`-._ `-.__.-'_.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'`-._ `-.__.-' _.-'`-._ _.-'
`-.__.-'
77405:M 27 Jan 2023 22:11:02.203 # Server initialized
77405:M 27 Jan 2023 22:11:02.203 * Loading RDB produced by version 6.0.9
77405:M 27 Jan 2023 22:11:02.203 * RDB age 284987 seconds
77405:M 27 Jan 2023 22:11:02.203 * RDB memory usage when created 0.96 Mb
77405:M 27 Jan 2023 22:11:02.204 * DB loaded from disk: 0.001 seconds
77405:M 27 Jan 2023 22:11:02.204 * Ready to accept connections
Once the Redis process has been created, it begins executing from the main function.
(2) Daemonization
After main finishes parsing command-line arguments, it uses two configuration parameters, daemonize and supervised, to set the variable background. Their meanings are as follows:
The daemonize parameter indicates whether Redis should run as a daemon process;
The supervised parameter indicates whether Redis is managed by one of the two process supervisors, upstart or systemd.
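Both settings come from the configuration file; a minimal sketch of the relevant redis.conf lines (using the stock option names) would be:

```
daemonize yes
supervised no
```

With daemonize set to yes and no supervisor configured, background evaluates to 1 and daemonize() is called.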
int main(int argc, char **argv) {
    /* ... */
    server.supervised = redisIsSupervised(server.supervised_mode);
    int background = server.daemonize && !server.supervised;
    /* If background is 1 (true), call daemonize(). */
    if (background) daemonize();
    serverLog(LL_WARNING, "oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo");
    serverLog(LL_WARNING,
        "Redis version=%s, bits=%d, commit=%s, modified=%d, pid=%d, just started",
        REDIS_VERSION,
        (sizeof(long) == 8) ? 64 : 32,
        redisGitSHA1(),
        strtol(redisGitDirty(),NULL,10) > 0,
        (int)getpid());
    /* ... */
}
void daemonize(void) {
    int fd;

    /* If fork() returns nonzero (we are the parent, or fork failed),
     * exit; only the child process continues. */
    if (fork() != 0) exit(0); /* parent exits */
    setsid(); /* create a new session */

    /* Every output goes to /dev/null. If Redis is daemonized but
     * the 'logfile' is set to 'stdout' in the configuration file
     * it will not log at all. */
    if ((fd = open("/dev/null", O_RDWR, 0)) != -1) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > STDERR_FILENO) close(fd);
    }
}
(3) Redis background threads
The last step of initialization in main is a call to the InitServerLast function.
InitServerLast in turn calls bioInit to create background threads, so that Redis can hand part of its work off to them.
int main(int argc, char **argv) {
    /* ... */
    InitServerLast();
}
/* Some steps in server initialization need to be done last (after modules
* are loaded).
* Specifically, creation of threads due to a race bug in ld.so, in which
* Thread Local Storage initialization collides with dlopen call.
* see: https://sourceware.org/bugzilla/show_bug.cgi?id=19329 */
void InitServerLast() {
    bioInit();
    initThreadedIO();
    set_jemalloc_bg_thread(server.jemalloc_bg_thread);
    server.initial_memory_usage = zmalloc_used_memory();
}
/* Initialize the background system, spawning the thread. */
void bioInit(void) {
    pthread_attr_t attr;
    pthread_t thread;
    size_t stacksize;
    int j;

    /* Initialization of state vars and objects */
    for (j = 0; j < BIO_NUM_OPS; j++) {
        /* Initialize the array of mutexes */
        pthread_mutex_init(&bio_mutex[j],NULL);
        /* Initialize the arrays of condition variables */
        pthread_cond_init(&bio_newjob_cond[j],NULL);
        pthread_cond_init(&bio_step_cond[j],NULL);
        /* bio_jobs holds, per job type, the list of pending background jobs */
        bio_jobs[j] = listCreate();
        /* Number of jobs of each type waiting to be processed */
        bio_pending[j] = 0;
    }

    /* Set the stack size as by default it may be small in some system */
    pthread_attr_init(&attr);
    pthread_attr_getstacksize(&attr,&stacksize);
    if (!stacksize) stacksize = 1; /* The world is full of Solaris Fixes */
    while (stacksize < REDIS_THREAD_STACK_SIZE) stacksize *= 2;
    pthread_attr_setstacksize(&attr, stacksize);

    /* Ready to spawn our threads. We use the single argument the thread
     * function accepts in order to pass the job ID the thread is
     * responsible of. */
    for (j = 0; j < BIO_NUM_OPS; j++) {
        void *arg = (void*)(unsigned long) j;
        if (pthread_create(&thread,&attr,bioProcessBackgroundJobs,arg) != 0) {
            serverLog(LL_WARNING,"Fatal: Can't initialize Background Jobs.");
            exit(1);
        }
        bio_threads[j] = thread;
    }
}
(3.1) Processing background jobs
The bioProcessBackgroundJobs function:
void *bioProcessBackgroundJobs(void *arg) {
    struct bio_job *job;
    unsigned long type = (unsigned long) arg;
    sigset_t sigset;

    /* Check that the type is within the right interval. */
    if (type >= BIO_NUM_OPS) {
        serverLog(LL_WARNING,
            "Warning: bio thread started with wrong type %lu",type);
        return NULL;
    }

    switch (type) {
    case BIO_CLOSE_FILE:
        redis_set_thread_title("bio_close_file");
        break;
    case BIO_AOF_FSYNC:
        redis_set_thread_title("bio_aof_fsync");
        break;
    case BIO_LAZY_FREE:
        redis_set_thread_title("bio_lazy_free");
        break;
    }

    redisSetCpuAffinity(server.bio_cpulist);
    makeThreadKillable();
    pthread_mutex_lock(&bio_mutex[type]);
    /* Block SIGALRM so we are sure that only the main thread will
     * receive the watchdog signal. */
    sigemptyset(&sigset);
    sigaddset(&sigset, SIGALRM);
    if (pthread_sigmask(SIG_BLOCK, &sigset, NULL))
        serverLog(LL_WARNING,
            "Warning: can't mask SIGALRM in bio.c thread: %s", strerror(errno));

    while(1) {
        listNode *ln;

        /* The loop always starts with the lock hold. */
        if (listLength(bio_jobs[type]) == 0) {
            pthread_cond_wait(&bio_newjob_cond[type],&bio_mutex[type]);
            continue;
        }
        /* Pop the first job from the queue. */
        ln = listFirst(bio_jobs[type]);
        job = ln->value;
        /* It is now possible to unlock the background system as we know have
         * a stand alone job structure to process.*/
        pthread_mutex_unlock(&bio_mutex[type]);

        /* Process the job according to its type. */
        if (type == BIO_CLOSE_FILE) {        /* close-file job */
            close((long)job->arg1);
        } else if (type == BIO_AOF_FSYNC) {  /* AOF fsync job */
            redis_fsync((long)job->arg1);
        } else if (type == BIO_LAZY_FREE) {  /* lazy-free job */
            /* Call a different lazy-free function depending on the
             * job's arguments.
             * What we free changes depending on what arguments are set:
             * arg1 -> free the object at pointer.
             * arg2 & arg3 -> free two dictionaries (a Redis DB).
             * only arg3 -> free the radix tree. */
            if (job->arg1)
                lazyfreeFreeObjectFromBioThread(job->arg1);
            else if (job->arg2 && job->arg3)
                lazyfreeFreeDatabaseFromBioThread(job->arg2,job->arg3);
            else if (job->arg3)
                lazyfreeFreeSlotsMapFromBioThread(job->arg3);
        } else {
            serverPanic("Wrong job type in bioProcessBackgroundJobs().");
        }
        zfree(job);

        /* Lock again before reiterating the loop, if there are no longer
         * jobs to process we'll block again in pthread_cond_wait(). */
        pthread_mutex_lock(&bio_mutex[type]);
        /* The job is done: remove it from the job queue. */
        listDelNode(bio_jobs[type],ln);
        /* Decrement the count of pending jobs of this type. */
        bio_pending[type]--;

        /* Unblock threads blocked on bioWaitStepOfType() if any. */
        pthread_cond_broadcast(&bio_step_cond[type]);
    }
}
Redis thus starts three background threads, handling file closing, AOF fsync, and lazy freeing respectively.
(3.2) Creating background jobs
void bioCreateBackgroundJob(int type, void *arg1, void *arg2, void *arg3) {
    /* Allocate the job structure */
    struct bio_job *job = zmalloc(sizeof(*job));
    /* Fill in the job's fields */
    job->time = time(NULL);
    job->arg1 = arg1;
    job->arg2 = arg2;
    job->arg3 = arg3;
    /* Take the per-type mutex */
    pthread_mutex_lock(&bio_mutex[type]);
    /* Append the job to the tail of that type's queue */
    listAddNodeTail(bio_jobs[type],job);
    /* Increment the count of pending jobs */
    bio_pending[type]++;
    pthread_cond_signal(&bio_newjob_cond[type]);
    pthread_mutex_unlock(&bio_mutex[type]);
}
Whenever the Redis process wants to start a background job, it only needs to call bioCreateBackgroundJob with the job's type and arguments; bioCreateBackgroundJob then places the prepared job structure onto the queue for that job type.
The threads that bioInit created at Redis server startup block on the background job queues; as soon as a job is available, they take it off the queue and execute it.
This design is a classic producer-consumer model: bioCreateBackgroundJob is the producer, adding jobs to each job-type queue,
while bioProcessBackgroundJobs is the consumer, taking jobs off each queue and executing them.
Study notes for "Redis Source Code Analysis and Practice", Day 12: "Is Redis really single-threaded?" https://time.geekbang.org/col…