Binder: bindService

A client that needs to communicate over Binder usually starts the target Service with bindService():

bindService(it, mServiceConnection, Service.BIND_AUTO_CREATE);

bindService() is exposed through ContextWrapper, which forwards the call to ContextImpl:

frameworks/base/core/java/android/app/ContextImpl.java

@Override
public boolean bindService(Intent service, ServiceConnection conn, int flags) {
    // Warn if this is called from the system process
    warnIfCallingFromSystemProcess();
    return bindServiceCommon(service, conn, flags, Process.myUserHandle());
}
......
private boolean bindServiceCommon(Intent service, ServiceConnection conn, int flags, UserHandle user) {
    IServiceConnection sd;
    ......
    if (mPackageInfo != null) {
        // Obtain the IServiceConnection object
        sd = mPackageInfo.getServiceDispatcher(conn, getOuterContext(),
                mMainThread.getHandler(), flags);
    } else {
        throw new RuntimeException("Not supported in system context");
    }
    // Validate the service intent
    validateServiceIntent(service);
    try {
        ......
        // Prepare to leave the application process
        service.prepareToLeaveProcess();
        // Call bindService() on ActivityManagerProxy
        int res = ActivityManagerNative.getDefault().bindService(
                mMainThread.getApplicationThread(), getActivityToken(), service,
                service.resolveTypeIfNeeded(getContentResolver()),
                sd, flags, user.getIdentifier());
        ......
    }
}

Here getServiceDispatcher() returns an IServiceConnection object; it is a Binder entity and is responsible for communicating with the ServiceConnection. ...
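For context, a minimal sketch of the client side that this excerpt assumes. The class LedClient, the service MyRemoteService and the AIDL interface IMyService are hypothetical names, not code from the article; the point is only to show where bindService() and the ServiceConnection sit in application code.

import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.IBinder;

// Hypothetical client; MyRemoteService and IMyService are placeholders.
class LedClient {
    private IMyService mService;

    private final ServiceConnection mServiceConnection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder binder) {
            // The IBinder delivered here is the proxy that later Binder calls travel through.
            mService = IMyService.Stub.asInterface(binder);
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            mService = null;
        }
    };

    void bind(Context context) {
        Intent it = new Intent(context, MyRemoteService.class);
        context.bindService(it, mServiceConnection, Context.BIND_AUTO_CREATE);
    }
}

When the connection is established, onServiceConnected() is invoked with the IBinder that the framework obtained through the path the excerpt walks through.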

October 17, 2019 · 5 min · jiezi

Binder Driver: Death Notification

Once Binder communication is established, the client may need to know whether the server is still alive. When the server dies, the client has to clean up the data and behavior tied to that communication, and this cleanup is driven by the Binder death notification mechanism.

Registering a death notification

The application layer registers a death notification by calling BpBinder::linkToDeath(). Native Binder users can call this interface directly; Java users reach it through JNI.

frameworks/base/core/jni/android_util_Binder.cpp

static void android_os_BinderProxy_linkToDeath(JNIEnv* env, jobject obj,
        jobject recipient, jint flags) // throws RemoteException
{
    ......
    // Obtain the BpBinder object
    IBinder* target = (IBinder*) env->GetLongField(obj, gBinderProxyOffsets.mObject);
    ......
    // Only remote transports need a death notification
    if (!target->localBinder()) {
        // Obtain the death-recipient list
        DeathRecipientList* list = (DeathRecipientList*)
                env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
        // Create a JavaDeathRecipient and add it to the death-recipient list
        sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
        // Call BpBinder::linkToDeath() to register the death notification
        status_t err = target->linkToDeath(jdr, NULL, flags);
        ......
    }
}

The death notification is handled by JavaDeathRecipient, which inherits from IBinder::DeathRecipient; when a death notification arrives, its binderDied() is called back.

frameworks/base/core/jni/android_util_Binder.cpp

class JavaDeathRecipient : public IBinder::DeathRecipient
{
public:
    JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
        : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
          mObjectWeak(NULL), mList(list)
    {
        ......
        // Add this recipient to the list
        list->add(this);
        android_atomic_inc(&gNumDeathRefs);
        // Bump the created-object count; when it reaches 200, a GC is forced
        incRefsCreated(env);
    }

    void binderDied(const wp<IBinder>& who)
    {
        if (mObject != NULL) {
            JNIEnv* env = javavm_to_jnienv(mVM);
            // Call the Java sendDeathNotice method
            env->CallStaticVoidMethod(gBinderProxyOffsets.mClass,
                    gBinderProxyOffsets.mSendDeathNotice, mObject);
            ......
            // Release the global reference and keep a weak one so GC can reclaim the object
            mObjectWeak = env->NewWeakGlobalRef(mObject);
            env->DeleteGlobalRef(mObject);
            mObject = NULL;
        }
    }

The Java registration path ultimately also ends up in BpBinder. Unregistering a death notification lives there as well; we will look at both together. ...
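Before diving into the JNI layer, a minimal sketch (assumed for illustration, not from the article) of the Java-side call that triggers this path. DeathWatcher is a hypothetical helper; the IBinder passed in is assumed to be a proxy (BinderProxy) for a remote service.

import android.os.IBinder;
import android.os.RemoteException;

// Hypothetical caller-side registration that eventually reaches android_os_BinderProxy_linkToDeath().
class DeathWatcher {
    private final IBinder.DeathRecipient mRecipient = new IBinder.DeathRecipient() {
        @Override
        public void binderDied() {
            // The server process died: clean up client-side state tied to this proxy here.
        }
    };

    void watch(IBinder binder) throws RemoteException {
        // linkToDeath() throws RemoteException if the remote side is already dead,
        // so callers usually treat that exception the same as binderDied().
        binder.linkToDeath(mRecipient, 0 /* flags */);
    }

    void unwatch(IBinder binder) {
        // Unregistering goes through the same BpBinder path mentioned above.
        binder.unlinkToDeath(mRecipient, 0);
    }
}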

September 30, 2019 · 4 min · jiezi

Binder机制情景分析之深入驱动 (Binder Mechanism Scenario Analysis: Deep into the Driver)

一. 概述看过上篇C服务应用篇内容你肯定已经了解binder的一个使用过程,但是肯定还会有很多疑问:service注册服务是怎么和ServiceManager联系上的?client是怎么根据服务名找到的service进程?client获取的handle和service注册到ServiceManager的handle是否相同?client通过handle是怎么调用的服务?这篇开始结合binder驱动进行数据交互的分析;1.1 驱动中重要的数据结构数据结构说明binder_proc每个使用open打开binder设备文件的进程都会在驱动中创建一个binder_proc的结构, 用来记录该<bar>进程的各种信息和状态.例如:线程表,binder节点表,节点引用表binder_thread每个binder线程在binder驱动中都有一个对应的binder_thread结构.记录了线程相关的信息,例如需要完成的任务等.binder_nodebindder_proc 中有一张binder节点对象表,表项是binder_node结构.binder_refbinder_proc还有一张节点引用表,表象是binder_ref结构. 保存所引用对象的binder_node指针.binder_buffer驱动通过mmap的方式创建了一块大的缓存区,每次binder传输数据,会在缓存区分配一个binder_buffer的结构来保存数据.1.2 说明先讲解关于binder应用层调用binder_open()相关的公用流程的驱动代码;二. binder初始化-公共2.1 binder_open应用层open了binder驱动,对应驱动层代码如下:static int binder_open(struct inode *nodp, struct file *filp){ struct binder_proc *proc; binder_debug(BINDER_DEBUG_OPEN_CLOSE, “binder_open: %d:%d\n”, current->group_leader->pid, current->pid); proc = kzalloc(sizeof(*proc), GFP_KERNEL); ① if (proc == NULL) return -ENOMEM; get_task_struct(current); ② proc->tsk = current; INIT_LIST_HEAD(&proc->todo); ③ init_waitqueue_head(&proc->wait); proc->default_priority = task_nice(current); binder_lock(func); binder_stats_created(BINDER_STAT_PROC); hlist_add_head(&proc->proc_node, &binder_procs); ④ proc->pid = current->group_leader->pid; INIT_LIST_HEAD(&proc->delivered_death); filp->private_data = proc; ⑤ binder_unlock(func); if (binder_debugfs_dir_entry_proc) { char strbuf[11]; snprintf(strbuf, sizeof(strbuf), “%u”, proc->pid); proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO, binder_debugfs_dir_entry_proc, proc, &binder_proc_fops); } return 0;}①: 为当前进程分配一个struct binder_proc宽度空间给proc; ②: 获取当前进程的task结构; ③: 初始化binder_proc中的todo链表; ④: 将当前进程的binder_proc插入全局变量binder_procs中; ⑤: 将proc保存到文件结构中,供下次调用使用;binder_procs这是一个全局的红黑树变量,该全局变量在binder驱动的最前方使用 static HLIST_HEAD(binder_procs);进行的初始化; binder_open()函数的主要功能是打开binder驱动的设备文件,为当前进程创建和初始化binder_proc结构体proc.将proc插入到全局的红黑树binder_procs中,供将来查找用; 同时变量proc还放到file结构的private_data字段中,调用驱动的其他操作时可从file结构中取出代表当前进程的binder_proc结构体使用;2.2 binder_ioctl应用层调用ioctl时对应的驱动层代码:static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg){ int ret; struct binder_proc *proc = filp->private_data; struct binder_thread *thread; unsigned int size = _IOC_SIZE(cmd); void __user *ubuf = (void __user *)arg; /pr_info(“binder_ioctl: %d:%d %x %lx\n”, proc->pid, current->pid, cmd, arg);/ trace_binder_ioctl(cmd, arg); ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); if (ret) goto err_unlocked; binder_lock(func); thread = binder_get_thread(proc); ① if (thread == NULL) { ret = -ENOMEM; goto err; } switch (cmd) { case BINDER_WRITE_READ: ret = binder_ioctl_write_read(filp, cmd, arg, thread); if (ret) goto err; break; case BINDER_SET_MAX_THREADS: if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) { ret = -EINVAL; goto err; } break; case BINDER_SET_CONTEXT_MGR: ret = binder_ioctl_set_ctx_mgr(filp); if (ret) goto err; break; case BINDER_THREAD_EXIT: binder_debug(BINDER_DEBUG_THREADS, “%d:%d exit\n”, proc->pid, thread->pid); binder_free_thread(proc, thread); thread = NULL; break; case BINDER_VERSION: { struct binder_version __user *ver = ubuf; if (size != sizeof(struct binder_version)) { ret = -EINVAL; goto err; } if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &ver->protocol_version)) { ret = -EINVAL; goto err; } break; } default: ret = -EINVAL; goto err; } ret = 0;err: if (thread) thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN; binder_unlock(func); 
wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); if (ret && ret != -ERESTARTSYS) pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);err_unlocked: trace_binder_ioctl_done(ret); return ret;}获取binder版本号很简单,发BINDER_VERSION命令给驱动,驱动回复BINDER_CURRENT_PROTOCOL_VERSION给用户空间; 这里讲下各个命令的意义:命令含义数据格式BINDER_WRITE_READ向驱动读取和写入数据.可同时读和写struct binder_write_readBINDER_SET_MAX_THREADS设置线程池的最大的线程数,达到上限后驱动将不会在通知应用层启动新线程size_tBINDER_SET_CONTEXT_MGR将本进程设置为binder系统的管理进程,只有servicemanager进程才会使用这个命令且只能调用一次intBINDER_THREAD_EXIT通知驱动当前线程要退出了,以便驱动清理该线程相关的数据intBINDER_VERSION获取binder的版本号struct binder_version但是这有个注意点:①: 第一次调用ioctl时会为该进程创建一个线程;2.3 binder_get_threadstatic struct binder_thread *binder_get_thread(struct binder_proc *proc){ struct binder_thread *thread = NULL; struct rb_node *parent = NULL; struct rb_node **p = &proc->threads.rb_node; while (*p) { ① parent = *p; thread = rb_entry(parent, struct binder_thread, rb_node); if (current->pid < thread->pid) p = &(*p)->rb_left; else if (current->pid > thread->pid) p = &(*p)->rb_right; else break; } if (*p == NULL) { thread = kzalloc(sizeof(*thread), GFP_KERNEL); ② if (thread == NULL) return NULL; binder_stats_created(BINDER_STAT_THREAD); thread->proc = proc; thread->pid = current->pid; init_waitqueue_head(&thread->wait); ③ INIT_LIST_HEAD(&thread->todo); rb_link_node(&thread->rb_node, parent, p); rb_insert_color(&thread->rb_node, &proc->threads); ④ thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN; thread->return_error = BR_OK; thread->return_error2 = BR_OK; } return thread;}该函数的主要功能是从当前进程信息表中找到挂在下面的线程,struct binder_thread是用挂在进程信息表下threads节点的红黑树链表下;①: 先遍历threads节点的红黑树链表; ②: 如果没有查找到,则分配一个struct binder_thread长度的空间; ③: 初始化等待队列头节点和thread的todo链表; ④: 将该线程插入到进程的threads节点;首先先遍历该链表,如果为找到线程信息则创建一个binder_thread线程,接着初始化线程等待队列和线程的todo链表,再将该线程节点挂在进程的信息表中的threads节点的红黑树中;2.4 binder_mmapstatic int binder_mmap(struct file *filp, struct vm_area_struct *vma){ int ret; struct vm_struct *area; struct binder_proc *proc = filp->private_data; ① const char *failure_string; struct binder_buffer *buffer; if (proc->tsk != current) return -EINVAL; if ((vma->vm_end - vma->vm_start) > SZ_4M) vma->vm_end = vma->vm_start + SZ_4M; binder_debug(BINDER_DEBUG_OPEN_CLOSE, “binder_mmap: %d %lx-%lx (%ld K) vma %lx pagep %lx\n”, proc->pid, vma->vm_start, vma->vm_end, (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags, (unsigned long)pgprot_val(vma->vm_page_prot)); if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) { ret = -EPERM; failure_string = “bad vm_flags”; goto err_bad_arg; } vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE; mutex_lock(&binder_mmap_lock); if (proc->buffer) { ret = -EBUSY; failure_string = “already mapped”; goto err_already_mapped; } area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP); ② if (area == NULL) { ret = -ENOMEM; failure_string = “get_vm_area”; goto err_get_vm_area_failed; } proc->buffer = area->addr; ③ proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer; mutex_unlock(&binder_mmap_lock);#ifdef CONFIG_CPU_CACHE_VIPT if (cache_is_vipt_aliasing()) { while (CACHE_COLOUR((vma->vm_start ^ (uint32_t)proc->buffer))) { pr_info(“binder_mmap: %d %lx-%lx maps %p bad alignment\n”, proc->pid, vma->vm_start, vma->vm_end, proc->buffer); vma->vm_start += PAGE_SIZE; } }#endif proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL); ④ if (proc->pages == NULL) { ret = -ENOMEM; failure_string = “alloc page array”; goto err_alloc_pages_failed; } proc->buffer_size = 
vma->vm_end - vma->vm_start; vma->vm_ops = &binder_vm_ops; vma->vm_private_data = proc; if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) { ⑤ ret = -ENOMEM; failure_string = “alloc small buf”; goto err_alloc_small_buf_failed; } buffer = proc->buffer; INIT_LIST_HEAD(&proc->buffers); list_add(&buffer->entry, &proc->buffers); ⑥ buffer->free = 1; binder_insert_free_buffer(proc, buffer); proc->free_async_space = proc->buffer_size / 2; barrier(); proc->files = get_files_struct(current); proc->vma = vma; proc->vma_vm_mm = vma->vm_mm; /pr_info(“binder_mmap: %d %lx-%lx maps %p\n”, proc->pid, vma->vm_start, vma->vm_end, proc->buffer);/ return 0;err_alloc_small_buf_failed: kfree(proc->pages); proc->pages = NULL;err_alloc_pages_failed: mutex_lock(&binder_mmap_lock); vfree(proc->buffer); proc->buffer = NULL;err_get_vm_area_failed:err_already_mapped: mutex_unlock(&binder_mmap_lock);err_bad_arg: pr_err(“binder_mmap: %d %lx-%lx %s failed %d\n”, proc->pid, vma->vm_start, vma->vm_end, failure_string, ret); return ret;}①: filp->private_data保存了我们open设备时创建的binder_proc信息; ②: 为用户进程分配一块内核空间作为缓冲区; ③: 把分配的缓冲区指针存放到binder_proc的buffer字段; ④: 分配pages空间; ④: 在内核分配一块同样页数的内核空间,并把它的物理内存和前面为用户进程分配的内存地址关联; ⑤: 将刚才分配的内存块加入用户进程内存链表;binder_mmap函数首先调用get_vm_area()分配一块地址空间,这里创建的为虚拟内存,位于用户进程空间,接着调用binder_update_page_range()建立虚拟内存到物理内存的映射,这样用户空间和内核空间就能共享一块空间了; binder运用了mmap机制,在进程间的数据传输时就减小了拷贝次数; 如果不用mmap,从发送进程拷贝到内核空间调用一次copy_from_user,从内核空间到目标进程又需要调用copy_to_user,这样就发生两次数据拷贝.但运用了mmap后,只需要把发送的进程用户空间数据拷贝到发送进程的内核空间调用一次copy_from_user,因为目标进程内核空间缓存区和发送进程内核空间的缓冲区是共享;三. ServiceManager3.1 注册为Manager3.1.1 binder_ioctl_set_ctx_mgrstatic int binder_ioctl_set_ctx_mgr(struct file *filp){ int ret = 0; struct binder_proc *proc = filp->private_data; kuid_t curr_euid = current_euid(); if (binder_context_mgr_node != NULL) { pr_err(“BINDER_SET_CONTEXT_MGR already set\n”); ret = -EBUSY; goto out; } ret = security_binder_set_context_mgr(proc->tsk); if (ret < 0) goto out; if (uid_valid(binder_context_mgr_uid)) { if (!uid_eq(binder_context_mgr_uid, curr_euid)) { pr_err(“BINDER_SET_CONTEXT_MGR bad uid %d != %d\n”, from_kuid(&init_user_ns, curr_euid), from_kuid(&init_user_ns, binder_context_mgr_uid)); ret = -EPERM; goto out; } } else { binder_context_mgr_uid = curr_euid; ① } binder_context_mgr_node = binder_new_node(proc, 0, 0); ② if (binder_context_mgr_node == NULL) { ret = -ENOMEM; goto out; } binder_context_mgr_node->local_weak_refs++; binder_context_mgr_node->local_strong_refs++; binder_context_mgr_node->has_strong_ref = 1; binder_context_mgr_node->has_weak_ref = 1;out: return ret;}①: 保存当前进程的用户id到全局变量binder_context_mgr_uid; ②: 为当前进程创建一个binder_node节点,保存到全局变量binder_context_mgr_node;3.2 进入循环调用ioctl函数写入BC_ENTER_LOOPER命令给驱动,进入循环; 当调用ioctl函数时命令为BINDER_WRITE_READ则调用下面函数:3.2.1 binder_ioctl_write_readstatic int binder_ioctl_write_read(struct file *filp, unsigned int cmd, unsigned long arg, struct binder_thread *thread){ int ret = 0; struct binder_proc *proc = filp->private_data; unsigned int size = _IOC_SIZE(cmd); void __user *ubuf = (void __user )arg; struct binder_write_read bwr; if (size != sizeof(struct binder_write_read)) { ret = -EINVAL; goto out; } if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { ① ret = -EFAULT; goto out; } binder_debug(BINDER_DEBUG_READ_WRITE, “%d:%d write %lld at %016llx, read %lld at %016llx\n”, proc->pid, thread->pid, (u64)bwr.write_size, (u64)bwr.write_buffer, (u64)bwr.read_size, (u64)bwr.read_buffer); if (bwr.write_size > 0) { ② ret = binder_thread_write(proc, thread, bwr.write_buffer, bwr.write_size, 
&bwr.write_consumed); trace_binder_write_done(ret); if (ret < 0) { bwr.read_consumed = 0; if (copy_to_user(ubuf, &bwr, sizeof(bwr))) ret = -EFAULT; goto out; } } if (bwr.read_size > 0) { ret = binder_thread_read(proc, thread, bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK); trace_binder_read_done(ret); if (!list_empty(&proc->todo)) wake_up_interruptible(&proc->wait); if (ret < 0) { if (copy_to_user(ubuf, &bwr, sizeof(bwr))) ret = -EFAULT; goto out; } } binder_debug(BINDER_DEBUG_READ_WRITE, “%d:%d wrote %lld of %lld, read return %lld of %lld\n”, proc->pid, thread->pid, (u64)bwr.write_consumed, (u64)bwr.write_size, (u64)bwr.read_consumed, (u64)bwr.read_size); if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { ret = -EFAULT; goto out; }out: return ret;}①: 将用户空间的数据拷贝到内核空间; ②: 这里可以看到驱动判断读写是根据读写buf的size来分辨且读写操作互不干扰;struct binder_write_read { binder_size_t write_size; / bytes to write / binder_size_t write_consumed; / bytes consumed by driver / binder_uintptr_t write_buffer; binder_size_t read_size; / bytes to read / binder_size_t read_consumed; / bytes consumed by driver */ binder_uintptr_t read_buffer;};这个结构体数据很简单,一般只要填写size和buf指针就可以,buf的数据个数C服务应用篇有介绍;3.2.2 binder_thread_write因为binder_thread_write()太长了所以每次用到了哪个命令再细讲;3.2.2.1 BC_ENTER_LOOPER case BC_ENTER_LOOPER: binder_debug(BINDER_DEBUG_THREADS, “%d:%d BC_ENTER_LOOPER\n”, proc->pid, thread->pid); if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) { thread->looper |= BINDER_LOOPER_STATE_INVALID; binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n", proc->pid, thread->pid); } thread->looper |= BINDER_LOOPER_STATE_ENTERED; break;这个命令有解释过告诉该线程进入循环状态,这个线程就是前面第一次调用ioctl创建的struct binder_thread结构的线程;3.2.3 binder_thread_read接下来进入for循环后,又调用了一次ioctl进行读操作static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread, binder_uintptr_t binder_buffer, size_t size, binder_size_t *consumed, int non_block){ void __user *buffer = (void __user *)(uintptr_t)binder_buffer; void __user *ptr = buffer + *consumed; void __user *end = buffer + size; int ret = 0; int wait_for_proc_work; if (*consumed == 0) { ① if (put_user(BR_NOOP, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); }retry: wait_for_proc_work = thread->transaction_stack == NULL && ② list_empty(&thread->todo); if (thread->return_error != BR_OK && ptr < end) { if (thread->return_error2 != BR_OK) { if (put_user(thread->return_error2, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); binder_stat_br(proc, thread, thread->return_error2); if (ptr == end) goto done; thread->return_error2 = BR_OK; } if (put_user(thread->return_error, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); binder_stat_br(proc, thread, thread->return_error); thread->return_error = BR_OK; goto done; } thread->looper |= BINDER_LOOPER_STATE_WAITING; if (wait_for_proc_work) ③ proc->ready_threads++; binder_unlock(func); trace_binder_wait_for_work(wait_for_proc_work, !!thread->transaction_stack, !list_empty(&thread->todo)); if (wait_for_proc_work) { if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED | ④ BINDER_LOOPER_STATE_ENTERED))) { binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n", proc->pid, thread->pid, thread->looper); wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); } binder_set_nice(proc->default_priority); if (non_block) { if (!binder_has_proc_work(proc, thread)) ret = -EAGAIN; } else ret = 
wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread)); ⑤ } else { if (non_block) { if (!binder_has_thread_work(thread)) ret = -EAGAIN; } else ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread)); } binder_lock(func); if (wait_for_proc_work) ⑥ proc->ready_threads–; thread->looper &= ~BINDER_LOOPER_STATE_WAITING; if (ret) return ret; while (1) { uint32_t cmd; struct binder_transaction_data tr; struct binder_work *w; struct binder_transaction t = NULL; if (!list_empty(&thread->todo)) { ⑦ w = list_first_entry(&thread->todo, struct binder_work, entry); } else if (!list_empty(&proc->todo) && wait_for_proc_work) { ⑧ w = list_first_entry(&proc->todo, struct binder_work, entry); } else { / no data added */ if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) goto retry; break; } if (end - ptr < sizeof(tr) + 4) break; switch (w->type) { case BINDER_WORK_TRANSACTION: { t = container_of(w, struct binder_transaction, work); ⑨ } break; case BINDER_WORK_TRANSACTION_COMPLETE: { cmd = BR_TRANSACTION_COMPLETE; if (put_user(cmd, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); binder_stat_br(proc, thread, cmd); binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE, “%d:%d BR_TRANSACTION_COMPLETE\n”, proc->pid, thread->pid); list_del(&w->entry); ⑩ kfree(w); binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); } break; case BINDER_WORK_NODE: { struct binder_node *node = container_of(w, struct binder_node, work); uint32_t cmd = BR_NOOP; const char *cmd_name; int strong = node->internal_strong_refs || node->local_strong_refs; int weak = !hlist_empty(&node->refs) || node->local_weak_refs || strong; if (weak && !node->has_weak_ref) { cmd = BR_INCREFS; cmd_name = “BR_INCREFS”; node->has_weak_ref = 1; node->pending_weak_ref = 1; node->local_weak_refs++; } else if (strong && !node->has_strong_ref) { cmd = BR_ACQUIRE; cmd_name = “BR_ACQUIRE”; node->has_strong_ref = 1; node->pending_strong_ref = 1; node->local_strong_refs++; } else if (!strong && node->has_strong_ref) { cmd = BR_RELEASE; cmd_name = “BR_RELEASE”; node->has_strong_ref = 0; } else if (!weak && node->has_weak_ref) { cmd = BR_DECREFS; cmd_name = “BR_DECREFS”; node->has_weak_ref = 0; } if (cmd != BR_NOOP) { if (put_user(cmd, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); if (put_user(node->ptr, (binder_uintptr_t __user *)ptr)) return -EFAULT; ptr += sizeof(binder_uintptr_t); if (put_user(node->cookie, (binder_uintptr_t __user *)ptr)) return -EFAULT; ptr += sizeof(binder_uintptr_t); binder_stat_br(proc, thread, cmd); binder_debug(BINDER_DEBUG_USER_REFS, “%d:%d %s %d u%016llx c%016llx\n”, proc->pid, thread->pid, cmd_name, node->debug_id, (u64)node->ptr, (u64)node->cookie); } else { list_del_init(&w->entry); if (!weak && !strong) { binder_debug(BINDER_DEBUG_INTERNAL_REFS, “%d:%d node %d u%016llx c%016llx deleted\n”, proc->pid, thread->pid, node->debug_id, (u64)node->ptr, (u64)node->cookie); rb_erase(&node->rb_node, &proc->nodes); kfree(node); binder_stats_deleted(BINDER_STAT_NODE); } else { binder_debug(BINDER_DEBUG_INTERNAL_REFS, “%d:%d node %d u%016llx c%016llx state unchanged\n”, proc->pid, thread->pid, node->debug_id, (u64)node->ptr, (u64)node->cookie); } } } break; case BINDER_WORK_DEAD_BINDER: case BINDER_WORK_DEAD_BINDER_AND_CLEAR: case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: { struct binder_ref_death *death; uint32_t cmd; death = container_of(w, struct binder_ref_death, work); if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) cmd = 
BR_CLEAR_DEATH_NOTIFICATION_DONE; else cmd = BR_DEAD_BINDER; if (put_user(cmd, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); if (put_user(death->cookie, (binder_uintptr_t __user )ptr)) return -EFAULT; ptr += sizeof(binder_uintptr_t); binder_stat_br(proc, thread, cmd); binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION, “%d:%d %s %016llx\n”, proc->pid, thread->pid, cmd == BR_DEAD_BINDER ? “BR_DEAD_BINDER” : “BR_CLEAR_DEATH_NOTIFICATION_DONE”, (u64)death->cookie); if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) { list_del(&w->entry); kfree(death); binder_stats_deleted(BINDER_STAT_DEATH); } else list_move(&w->entry, &proc->delivered_death); if (cmd == BR_DEAD_BINDER) goto done; / DEAD_BINDER notifications can cause transactions */ } break; } if (!t) continue; BUG_ON(t->buffer == NULL); if (t->buffer->target_node) { struct binder_node *target_node = t->buffer->target_node; tr.target.ptr = target_node->ptr; tr.cookie = target_node->cookie; t->saved_priority = task_nice(current); if (t->priority < target_node->min_priority && !(t->flags & TF_ONE_WAY)) binder_set_nice(t->priority); else if (!(t->flags & TF_ONE_WAY) || t->saved_priority > target_node->min_priority) binder_set_nice(target_node->min_priority); cmd = BR_TRANSACTION; ①① } else { tr.target.ptr = 0; tr.cookie = 0; cmd = BR_REPLY; } tr.code = t->code; ①② tr.flags = t->flags; tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid); if (t->from) { struct task_struct *sender = t->from->proc->tsk; tr.sender_pid = task_tgid_nr_ns(sender, task_active_pid_ns(current)); } else { tr.sender_pid = 0; } tr.data_size = t->buffer->data_size; tr.offsets_size = t->buffer->offsets_size; tr.data.ptr.buffer = (binder_uintptr_t)( (uintptr_t)t->buffer->data + proc->user_buffer_offset); tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *)); if (put_user(cmd, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); if (copy_to_user(ptr, &tr, sizeof(tr))) ①③ return -EFAULT; ptr += sizeof(tr); trace_binder_transaction_received(t); binder_stat_br(proc, thread, cmd); binder_debug(BINDER_DEBUG_TRANSACTION, “%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n”, proc->pid, thread->pid, (cmd == BR_TRANSACTION) ? “BR_TRANSACTION” : “BR_REPLY”, t->debug_id, t->from ? t->from->proc->pid : 0, t->from ? 
t->from->pid : 0, cmd, t->buffer->data_size, t->buffer->offsets_size, (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets); list_del(&t->work.entry); t->buffer->allow_user_free = 1; if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) { t->to_parent = thread->transaction_stack; t->to_thread = thread; thread->transaction_stack = t; } else { t->buffer->transaction = NULL; kfree(t); binder_stats_deleted(BINDER_STAT_TRANSACTION); } break; }done: consumed = ptr - buffer; ①④ if (proc->requested_threads + proc->ready_threads == 0 && proc->requested_threads_started < proc->max_threads && (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) / the user-space code fails to */ /*spawn a new thread if we leave this out */) { proc->requested_threads++; binder_debug(BINDER_DEBUG_THREADS, “%d:%d BR_SPAWN_LOOPER\n”, proc->pid, thread->pid); if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer)) return -EFAULT; binder_stat_br(proc, thread, BR_SPAWN_LOOPER); } return 0;}①: 先判断readbuf结构传进来的值是否为0,为0则返回一个BR_NOOP到用户空间,但是此时用户空间线程是睡眠的;②: 如果当前线程的todo链表为空且传送数据栈无数据时,则表示当前进程空闲; ③: 如果当前进程在writeBC_REGISTER_LOOPER or BC_ENTER_LOOPER前就开始执行读操作,则进入休眠; ④: 进入休眠,唤醒后会检测binder_has_proc_work,当前进程是否工作(判断当前进程的todo链表是否为空和thread的looper状态),未工作则继续休眠; ⑤: 进入休眠,唤醒后会检测当前线程是否工作; ⑥: 能走到这说明已经唤醒了,进程的ready线程计数减一,且线程的looper等待状态清除; ⑦: 先查看线程的todo链表中是否有要执行的工作; ⑧: 再检查进程的todo链表中是否有需要执行的工作; ⑨: 通过binder_work节点找到struct binder_transaction结构体地址; ⑩:如果工作类型是TRANSACTION_COMPLETE则表示工作已经执行完了,可以将此工作从线程或进程的todo链表中删除; ①①: 回复命令BR_TRANSACTION或BR_REPLY; ①②: 填充回复的数据; ①③: 将tr的数据拷贝到用户空间,ptr指针指向的是用户空间的那个readbuf; ①④: 最后consumed中保存了此次回复数据的长度;注意看第十三点: tr.data.ptr.buffer = (binder_uintptr_t)( (uintptr_t)t->buffer->data + proc->user_buffer_offset);拷贝个体用户空间的仅仅是数据buf地址,因为使用了mmap,用户空间可以直接使用这块内存,这里也体现了拷贝一次的效率; 这里你可以发现read到的数据格式一般都是BR_NOOP+CMD+数据+CMD+数据….; ServiceManager在刚进入循环开始第一次读操作时,没有其他线程就绪,此时只是返回一个BR_NOOP就开始休眠了;3.3 总结流程四. led_control_service4.1 注册服务调用led_control_server.c中:svcmgr_publish(bs, svcmgr, LED_CONTROL_SERVER_NAME, led_control);注册为一个服务者,这里面主要调用了binder_call()写入了BC_TRANSACTION命令,详细的流程C服务应用篇已经写过了,现在就主要结合驱动分析; 这里需要注意的是binder_call()调用是同时读写binder驱动的,先看下写操作再看读操作;4.1.1 binder_thread_write4.1.1.1 BC_TRANSACTION case BC_TRANSACTION: case BC_REPLY: { struct binder_transaction_data tr; if (copy_from_user(&tr, ptr, sizeof(tr))) return -EFAULT; ptr += sizeof(tr); binder_transaction(proc, thread, &tr, cmd == BC_REPLY); break; }先将用户空间write_buffer数据拷贝到内核的struct binder_transaction_data tr中, 再调用binder_transaction()处理这些数据;4.1.1.2 binder_transaction这个函数巨长….static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread, struct binder_transaction_data *tr, int reply){ struct binder_transaction *t; struct binder_work *tcomplete; binder_size_t *offp, *off_end; struct binder_proc *target_proc; struct binder_thread *target_thread = NULL; struct binder_node *target_node = NULL; struct list_head *target_list; wait_queue_head_t *target_wait; struct binder_transaction *in_reply_to = NULL; struct binder_transaction_log_entry *e; uint32_t return_error; e = binder_transaction_log_add(&binder_transaction_log); e->call_type = reply ? 
2 : !!(tr->flags & TF_ONE_WAY); e->from_proc = proc->pid; e->from_thread = thread->pid; e->target_handle = tr->target.handle; e->data_size = tr->data_size; e->offsets_size = tr->offsets_size; if (reply) { …… } else { if (tr->target.handle) { struct binder_ref *ref; ref = binder_get_ref(proc, tr->target.handle); ① if (ref == NULL) { binder_user_error("%d:%d got transaction to invalid handle\n", proc->pid, thread->pid); return_error = BR_FAILED_REPLY; goto err_invalid_target_handle; } target_node = ref->node; } else { target_node = binder_context_mgr_node; ② if (target_node == NULL) { return_error = BR_DEAD_REPLY; goto err_no_context_mgr_node; } } e->to_node = target_node->debug_id; target_proc = target_node->proc; ③ if (target_proc == NULL) { return_error = BR_DEAD_REPLY; goto err_dead_binder; } if (security_binder_transaction(proc->tsk, target_proc->tsk) < 0) { return_error = BR_FAILED_REPLY; goto err_invalid_target_handle; } if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) { ④ struct binder_transaction tmp; tmp = thread->transaction_stack; if (tmp->to_thread != thread) { binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n", proc->pid, thread->pid, tmp->debug_id, tmp->to_proc ? tmp->to_proc->pid : 0, tmp->to_thread ? tmp->to_thread->pid : 0); return_error = BR_FAILED_REPLY; goto err_bad_call_stack; } while (tmp) { if (tmp->from && tmp->from->proc == target_proc) ⑤ target_thread = tmp->from; tmp = tmp->from_parent; } } } if (target_thread) { e->to_thread = target_thread->pid; target_list = &target_thread->todo; target_wait = &target_thread->wait; } else { target_list = &target_proc->todo; target_wait = &target_proc->wait; } e->to_proc = target_proc->pid; / TODO: reuse incoming transaction for reply */ t = kzalloc(sizeof(*t), GFP_KERNEL); if (t == NULL) { return_error = BR_FAILED_REPLY; goto err_alloc_t_failed; } binder_stats_created(BINDER_STAT_TRANSACTION); tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL); if (tcomplete == NULL) { return_error = BR_FAILED_REPLY; goto err_alloc_tcomplete_failed; } binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE); t->debug_id = ++binder_last_id; e->debug_id = t->debug_id; if (reply) binder_debug(BINDER_DEBUG_TRANSACTION, “%d:%d BC_REPLY %d -> %d:%d, data %016llx-%016llx size %lld-%lld\n”, proc->pid, thread->pid, t->debug_id, target_proc->pid, target_thread->pid, (u64)tr->data.ptr.buffer, (u64)tr->data.ptr.offsets, (u64)tr->data_size, (u64)tr->offsets_size); else binder_debug(BINDER_DEBUG_TRANSACTION, “%d:%d BC_TRANSACTION %d -> %d - node %d, data %016llx-%016llx size %lld-%lld\n”, proc->pid, thread->pid, t->debug_id, target_proc->pid, target_node->debug_id, (u64)tr->data.ptr.buffer, (u64)tr->data.ptr.offsets, (u64)tr->data_size, (u64)tr->offsets_size); if (!reply && !(tr->flags & TF_ONE_WAY)) t->from = thread; else t->from = NULL; t->sender_euid = task_euid(proc->tsk); t->to_proc = target_proc; t->to_thread = target_thread; t->code = tr->code; t->flags = tr->flags; t->priority = task_nice(current); trace_binder_transaction(reply, t, target_node); t->buffer = binder_alloc_buf(target_proc, tr->data_size, ⑥ tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); if (t->buffer == NULL) { return_error = BR_FAILED_REPLY; goto err_binder_alloc_buf_failed; } t->buffer->allow_user_free = 0; t->buffer->debug_id = t->debug_id; t->buffer->transaction = t; t->buffer->target_node = target_node; trace_binder_transaction_alloc_buf(t->buffer); if (target_node) binder_inc_node(target_node, 1, 0, NULL); 
offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *))); if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t) tr->data.ptr.buffer, tr->data_size)) { binder_user_error("%d:%d got transaction with invalid data ptr\n", proc->pid, thread->pid); return_error = BR_FAILED_REPLY; goto err_copy_data_failed; } if (copy_from_user(offp, (const void __user *)(uintptr_t) tr->data.ptr.offsets, tr->offsets_size)) { binder_user_error("%d:%d got transaction with invalid offsets ptr\n", proc->pid, thread->pid); return_error = BR_FAILED_REPLY; goto err_copy_data_failed; } if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) { binder_user_error("%d:%d got transaction with invalid offsets size, %lld\n", proc->pid, thread->pid, (u64)tr->offsets_size); return_error = BR_FAILED_REPLY; goto err_bad_offset; } off_end = (void *)offp + tr->offsets_size; for (; offp < off_end; offp++) { struct flat_binder_object *fp; if (*offp > t->buffer->data_size - sizeof(*fp) || t->buffer->data_size < sizeof(*fp) || !IS_ALIGNED(*offp, sizeof(u32))) { binder_user_error("%d:%d got transaction with invalid offset, %lld\n", proc->pid, thread->pid, (u64)*offp); return_error = BR_FAILED_REPLY; goto err_bad_offset; } fp = (struct flat_binder_object *)(t->buffer->data + *offp); ⑦ switch (fp->type) { case BINDER_TYPE_BINDER: case BINDER_TYPE_WEAK_BINDER: { struct binder_ref *ref; struct binder_node *node = binder_get_node(proc, fp->binder); ⑧ if (node == NULL) { node = binder_new_node(proc, fp->binder, fp->cookie); if (node == NULL) { return_error = BR_FAILED_REPLY; goto err_binder_new_node_failed; } node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK; node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS); } if (fp->cookie != node->cookie) { binder_user_error("%d:%d sending u%016llx node %d, cookie mismatch %016llx != %016llx\n", proc->pid, thread->pid, (u64)fp->binder, node->debug_id, (u64)fp->cookie, (u64)node->cookie); return_error = BR_FAILED_REPLY; goto err_binder_get_ref_for_node_failed; } if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) { return_error = BR_FAILED_REPLY; goto err_binder_get_ref_for_node_failed; } ref = binder_get_ref_for_node(target_proc, node); ⑨ if (ref == NULL) { return_error = BR_FAILED_REPLY; goto err_binder_get_ref_for_node_failed; } if (fp->type == BINDER_TYPE_BINDER) ⑩ fp->type = BINDER_TYPE_HANDLE; else fp->type = BINDER_TYPE_WEAK_HANDLE; fp->handle = ref->desc; ①① binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo); trace_binder_transaction_node_to_ref(t, node, ref); binder_debug(BINDER_DEBUG_TRANSACTION, " node %d u%016llx -> ref %d desc %d\n", node->debug_id, (u64)node->ptr, ref->debug_id, ref->desc); } break; ………if (reply) { BUG_ON(t->buffer->async_transaction != 0); binder_pop_transaction(target_thread, in_reply_to); } else if (!(t->flags & TF_ONE_WAY)) { BUG_ON(t->buffer->async_transaction != 0); t->need_reply = 1; t->from_parent = thread->transaction_stack; thread->transaction_stack = t; } else { BUG_ON(target_node == NULL); BUG_ON(t->buffer->async_transaction != 1); if (target_node->has_async_transaction) { target_list = &target_node->async_todo; target_wait = NULL; } else target_node->has_async_transaction = 1; } t->work.type = BINDER_WORK_TRANSACTION; list_add_tail(&t->work.entry, target_list); ①② tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; list_add_tail(&tcomplete->entry, &thread->todo); if (target_wait) wake_up_interruptible(target_wait); return;…..}①: 目标handle不为0时说明是客户端调用服务端的情况; ②: 
目标handle为0,说明是请求ServiceManager服务,保存目标节点struct binder_node; ③: 根据binder_node获取到binder_proc; ④: 判断此次调用是否需要reply; ⑤: 根据transaction_stack找到目标线程(第一次传输不会进来); ⑥: 目标进程的在mmap空间分配一块buf,接着调用copy_from_use将用户空间数据拷贝进刚分配的buf中,这样目标进程可以直接读取数据; ⑦: 获取struct flat_binder_object的首地址, offp保存的是object距数据头的偏移值,详细可以看下C服务应用的3.2节; ⑧: 为新传进来的binder实体构造一个binder_node; ⑨: 查看目标进程的refs_by_node红黑树上是否有ref指向该节点,如果没有则为该目标进程创建一个ref指向该node; ⑩: 原来的类型是binder实体,现在要传给ServiceManager就需要改变为handle引用类型; ①①: 把刚才创建的ref中的des赋值给等下要串给应用层的数据, 接着给该节点增加一次引用标记且会将一些事务加入自身线程的todo链表; ①②: 将需要处理的事务加入目标进程或目标线程的todo链表,并唤醒它;现在ServiceManager进程的binder_proc中的refs_by_node红黑树上挂有一个新的binder_ref指向了传进来的binder实体;且这个新挂上去的binder_ref中desc成员为1(即传给应用层的handle),因为这是第一个指向该节点的引用,以后会递增;4.1.2 binder_thread_read第一次read:在写操作完后就开始读操作了,因为刚开始进程和线程的todo链表中没有需要处理的事务,再回复了BR_NOOP后就开始睡眠了; 写操作的时候有为创建的binder实体的node增加引用并加入了todo链表,这时led_control_service进程被唤醒; 开始处理BINDER_WORK_NODE事务,命令为BR_INCREFS, BR_ACQUIRE等;第二次read:这次read是在处理完BR_INCREFS, BR_ACQUIRE等命令以后,又一次读数据,并进入睡眠; PS: 这可以先不看,继续往下看ServiceManager被唤醒的流程; 好了,回到led_control_service进程了,被ServiceManager唤醒了; 这也没做啥事,就是构造数据后将其传回了用户空间; 接着发送释放buf的命令给内核空间,让它释放了内核mmap分配的数据;4.1.3 ServiceManager被唤醒4.1.3.1 binder_thread_read该函数解析详细看3.2.3,这里说下ServiceManager进程被唤醒后做了哪些事情,①: 先取出进程todo链表中需要处理的事务; ②: 再找到处理事务的struct binder_transaction结构体地址,取出刚才从发送进程拷贝进mmap分配的空间中数据进行处理; ③: 发送给ServiceManager的用户空间; 接着用户空间就会开始处理数据,数据格式如下: 用户空间在在收到数据后解析到BR_TRANSACTION命令后做了的流程分析见C服务应用篇2.4节;4.1.3.2 binder_thread_write应用层在调用do_add_serivice时最后还向驱动写入了两个命令(BC_ACQUIRE和BC_REQUEST_DEATH_NOTIFICATION);执行完fun后写入reply,又写入两个命令(BC_FREE_BUFFER和BC_REPLY);4.1.3.2.1 BC_ACQUIRE switch (cmd) { case BC_INCREFS: case BC_ACQUIRE: case BC_RELEASE: case BC_DECREFS: { uint32_t target; struct binder_ref *ref; const char *debug_string; if (get_user(target, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); if (target == 0 && binder_context_mgr_node && ① (cmd == BC_INCREFS || cmd == BC_ACQUIRE)) { ref = binder_get_ref_for_node(proc, binder_context_mgr_node); if (ref->desc != target) { binder_user_error("%d:%d tried to acquire reference to desc 0, got %d instead\n", proc->pid, thread->pid, ref->desc); } } else ref = binder_get_ref(proc, target); if (ref == NULL) { binder_user_error("%d:%d refcount change on invalid ref %d\n", proc->pid, thread->pid, target); break; } switch (cmd) { case BC_INCREFS: debug_string = “IncRefs”; binder_inc_ref(ref, 0, NULL); break; case BC_ACQUIRE: ② debug_string = “Acquire”; binder_inc_ref(ref, 1, NULL); break; case BC_RELEASE: debug_string = “Release”; binder_dec_ref(ref, 1); break; case BC_DECREFS: default: debug_string = “DecRefs”; binder_dec_ref(ref, 0); break; } binder_debug(BINDER_DEBUG_USER_REFS, “%d:%d %s ref %d desc %d s %d w %d for node %d\n”, proc->pid, thread->pid, debug_string, ref->debug_id, ref->desc, ref->strong, ref->weak, ref->node->debug_id); break; }①: 这个是根据传进来的handle获取binder_ref; ②: 对刚才获取到ref增加强引用;4.1.3.2.2 BC_REQUEST_DEATH_NOTIFICATIONcase BC_REQUEST_DEATH_NOTIFICATION: case BC_CLEAR_DEATH_NOTIFICATION: { uint32_t target; binder_uintptr_t cookie; struct binder_ref *ref; struct binder_ref_death *death; if (get_user(target, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); if (get_user(cookie, (binder_uintptr_t __user *)ptr)) return -EFAULT; ptr += sizeof(binder_uintptr_t); ref = binder_get_ref(proc, target); ① if (ref == NULL) { break; } if (cmd == BC_REQUEST_DEATH_NOTIFICATION) { if (ref->death) { break; } death = kzalloc(sizeof(*death), GFP_KERNEL); if (death == NULL) { thread->return_error = BR_ERROR; break; } 
binder_stats_created(BINDER_STAT_DEATH); INIT_LIST_HEAD(&death->work.entry); death->cookie = cookie; ref->death = death; ② if (ref->node->proc == NULL) { ref->death->work.type = BINDER_WORK_DEAD_BINDER; if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) { list_add_tail(&ref->death->work.entry, &thread->todo); } else { list_add_tail(&ref->death->work.entry, &proc->todo); wake_up_interruptible(&proc->wait); } } } else { if (ref->death == NULL) { break; } death = ref->death; if (death->cookie != cookie) { break; } ref->death = NULL; if (list_empty(&death->work.entry)) { death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION; if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) { list_add_tail(&death->work.entry, &thread->todo); } else { list_add_tail(&death->work.entry, &proc->todo); wake_up_interruptible(&proc->wait); } } else { BUG_ON(death->work.type != BINDER_WORK_DEAD_BINDER); death->work.type = BINDER_WORK_DEAD_BINDER_AND_CLEAR; } } } break;①: 根据handle获取ref; ②: 讲死亡通知挂到ref的death节点上;这样操作后在这个ref指向的node节点的进程(这个场景为led_control_service),在死亡时会反馈给ServiceManager;4.1.3.2.3 BC_FREE_BUFFER case BC_FREE_BUFFER: { binder_uintptr_t data_ptr; struct binder_buffer *buffer; if (get_user(data_ptr, (binder_uintptr_t __user *)ptr)) return -EFAULT; ptr += sizeof(binder_uintptr_t); buffer = binder_buffer_lookup(proc, data_ptr); ① if (buffer == NULL) { break; } if (!buffer->allow_user_free) { break; } if (buffer->transaction) { buffer->transaction->buffer = NULL; buffer->transaction = NULL; } if (buffer->async_transaction && buffer->target_node) { BUG_ON(!buffer->target_node->has_async_transaction); if (list_empty(&buffer->target_node->async_todo)) buffer->target_node->has_async_transaction = 0; else list_move_tail(buffer->target_node->async_todo.next, &thread->todo); } trace_binder_transaction_buffer_release(buffer); binder_transaction_buffer_release(proc, buffer, NULL); ② binder_free_buf(proc, buffer); break; }①: 根据data.ptr.buffer的地址找到前面为拷贝led_control_service写入内核的数据而分配的mmap缓存区地址(详见4.1.1.2);②: 释放那块buf;这里需要注意data_ptr虽然是用户空间传来的,但是这也是由内核空间拷贝给用户空间的且该值在用户空间未改变;4.1.3.3.4 BC_REPLY case BC_TRANSACTION: case BC_REPLY: { struct binder_transaction_data tr; if (copy_from_user(&tr, ptr, sizeof(tr))) return -EFAULT; ptr += sizeof(tr); binder_transaction(proc, thread, &tr, cmd == BC_REPLY); break; }这个流程在前面将BC_TRANSACTION为讲解,这里我们单独讲解下;static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread, struct binder_transaction_data *tr, int reply){ ……. if (reply) { in_reply_to = thread->transaction_stack; if (in_reply_to == NULL) { binder_user_error("%d:%d got reply transaction with no transaction stack\n", proc->pid, thread->pid); return_error = BR_FAILED_REPLY; goto err_empty_call_stack; } binder_set_nice(in_reply_to->saved_priority); if (in_reply_to->to_thread != thread) { return_error = BR_FAILED_REPLY; in_reply_to = NULL; goto err_bad_call_stack; } thread->transaction_stack = in_reply_to->to_parent; ① target_thread = in_reply_to->from; if (target_thread == NULL) { return_error = BR_DEAD_REPLY; goto err_dead_binder; } if (target_thread->transaction_stack != in_reply_to) { return_error = BR_FAILED_REPLY; in_reply_to = NULL; target_thread = NULL; goto err_dead_binder; } target_proc = target_thread->proc; ② } else { …… } ….. 
if (target_thread) { ③ e->to_thread = target_thread->pid; target_list = &target_thread->todo; target_wait = &target_thread->wait; } else { target_list = &target_proc->todo; target_wait = &target_proc->wait; } ……①: 从线程的传输栈上找到目标线程(当前进程为ServiceManager进程); ②: 通过目标线程查找到目标线程; ③: 这里可以看出reply是用线程的来完成的,因为是将要处理事情的是线程的todo链表;接下来从用户空间拷贝数据,然后在将事务挂到目标线程的todo链表,再唤醒目标线程; 这样又回到了led_control_service进程了,请看4.1.2;4.2 设置线程上限 case BINDER_SET_MAX_THREADS: if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) { ret = -EINVAL; goto err; } break;将上限值拷贝到proc的max_threads成员中保存;4.3 总结流程从注册服务开始说起:五. mmap用点以下进程都是在内核态的描述;5.1 内核态在查看驱动源码时,发现注册服务时led_control_service进程将用户空间数据拷贝到内核后,再唤醒ServiceManager进程后,ServiceManager进程内核空间可以直接使用;5.2 用户态还有一点,在ServiceManager将内核空间数据拷贝到用户空间时,仅仅只是把刚才在led_control_service进程分配的mmap空间的地址传给了 ServiceManager的用户空间,而用户空间可以通过该地址直接访问数据了; 以上为mmap在binder的两点用法,跨进程,内核; PS: 篇幅太长了,服务的获取流程下篇再讲,binder的知识点很多,还有好多需要更新; 看我写的东西一点要从上往下看, C语言写的多了养成的习惯; ...

November 25, 2018 · 16 min · jiezi

Binder Mechanism Scenario Analysis: Linux Environment Adaptation

Installing binder

1. Environment
- Runtime environment: Linux 4.1.15
- Development board: 天嵌 (Embedsky) i.MX6UL

2. Kernel changes
2.1 Open the kernel configuration menu
make menuconfig
2.2 Change the configuration
Enable the driver: go to Device Drivers -> Android and select Android Drivers and Android Binder IPC Driver.
Enable the interfaces the binder driver depends on: go to Device Drivers -> Staging drivers -> Android and select Enable the Anonymous Shared Memory Subsystem, Synchronization framework, Software synchronization objects, and Userspace API for SW_SYNC.
2.3 Rebuild
make zImage -j4

3. Verify
Flash the rebuilt kernel onto the board, then use ls to check whether a device named binder shows up under /dev.
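For reference (an assumption based on typical 4.x kernels, not stated in the post): after saving, these menuconfig selections usually correspond to CONFIG_ANDROID=y, CONFIG_ANDROID_BINDER_IPC=y, CONFIG_ASHMEM=y, CONFIG_SYNC=y, CONFIG_SW_SYNC=y and CONFIG_SW_SYNC_USER=y in the generated .config, so the result can be checked by grepping .config instead of re-entering menuconfig.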

November 20, 2018 · 1 min · jiezi

Binder机制情景分析之C服务应用 (Binder Mechanism Scenario Analysis: A C Service Application)

一. 概述这里只讲下binder的实现原理,不牵扯到android的java层是如何调用; 涉及到的会有ServiceManager,led_control_server和test_client的代码,这些都是用c写的.其中led_control_server和test_client是 仿照bctest.c写的; 在linux平台下运行binder更容易分析binder机制实现的原理(可以增加大量的log,进行分析);在Linux运行时.先运行ServiceManager,再运行led_control_server最后运行test_client;1.1 Binder通信模型Binder通信采用C/S架构,从组件视角来说,包含Client、Server、ServiceManager以及binder驱动,其中ServiceManager用于管理系统中的各种服务。1.2 运行环境本文中的代码运行环境是在imx6ul上跑的,运行的是linux系统,内核版本4.10(非android环境分析);1.3 文章代码文章所有代码已上传https://github.com/SourceLink…二. ServiceManager涉及到的源码地址:frameworks/native/cmds/servicemanager/sevice_manager.c frameworks/native/cmds/servicemanager/binder.c frameworks/native/cmds/servicemanager/bctest.cServiceManager相当于binder通信过程中的守护进程,本身也是个binder服务、好比一个root管理员一样; 主要功能是查询和注册服务;接下来结合代码从main开始分析下serviceManager的服务过程;2.1 main源码中的sevice_manager.c中主函数中使用了selinux,为了在我板子的linux环境中运行,把这些代码屏蔽,删减后如下:int main(int argc, char **argv){ struct binder_state bs; bs = binder_open(1281024); ① if (!bs) { ALOGE(“failed to open binder driver\n”); return -1; } if (binder_become_context_manager(bs)) { ② ALOGE(“cannot become context manager (%s)\n”, strerror(errno)); return -1; } svcmgr_handle = BINDER_SERVICE_MANAGER; binder_loop(bs, svcmgr_handler); ③ return 0;}①: 打开binder驱动(详见2.2.1) ②: 注册为管理员(详见2.2.2) ③: 进入循环,处理消息(详见2.2.3)从主函数的启动流程就能看出sevice_manager的工作流程并不是特别复杂; 其实client和server的启动流程和manager的启动类似,后面再详细分析;2.2 binder_openstruct binder_state *binder_open(size_t mapsize){ struct binder_state *bs; struct binder_version vers; bs = malloc(sizeof(*bs)); if (!bs) { errno = ENOMEM; return NULL; } bs->fd = open("/dev/binder", O_RDWR); ① if (bs->fd < 0) { fprintf(stderr,“binder: cannot open device (%s)\n”, strerror(errno)); goto fail_open; } if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) || ② (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) { fprintf(stderr, “binder: driver version differs from user space\n”); goto fail_open; } bs->mapsize = mapsize; bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0); ③ if (bs->mapped == MAP_FAILED) { fprintf(stderr,“binder: cannot map device (%s)\n”, strerror(errno)); goto fail_map; } return bs;fail_map: close(bs->fd);fail_open: free(bs); return NULL;}①: 打开binder设备 ②: 通过ioctl获取binder版本号 ③: mmp内存映射这里说明下为什么binder驱动是用ioctl来操作,是因为ioctl可以同时进行读和写操作;2.2 binder_become_context_managerint binder_become_context_manager(struct binder_state *bs){ return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);}还是通过ioctl请求类型BINDER_SET_CONTEXT_MGR注册成manager;2.3 binder_loopvoid binder_loop(struct binder_state *bs, binder_handler func){ int res; struct binder_write_read bwr; uint32_t readbuf[32]; bwr.write_size = 0; bwr.write_consumed = 0; bwr.write_buffer = 0; readbuf[0] = BC_ENTER_LOOPER; binder_write(bs, readbuf, sizeof(uint32_t)); ① for (;;) { bwr.read_size = sizeof(readbuf); bwr.read_consumed = 0; bwr.read_buffer = (uintptr_t) readbuf; res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); ② if (res < 0) { ALOGE(“binder_loop: ioctl failed (%s)\n”, strerror(errno)); break; } res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func); ③ if (res == 0) { ALOGE(“binder_loop: unexpected reply?!\n”); break; } if (res < 0) { ALOGE(“binder_loop: io error %d %s\n”, res, strerror(errno)); break; } }}①: 写入命令BC_ENTER_LOOPER通知驱动该线程已经进入主循环,可以接收数据; ②: 先读一次数据,因为刚才写过一次; ③: 然后解析读出来的数据(详见2.2.4);binder_loop函数的主要流程如下: 2.4 binder_parseint binder_parse(struct binder_state *bs, struct binder_io *bio, uintptr_t ptr, size_t size, binder_handler func){ int r = 1; uintptr_t end = ptr + (uintptr_t) size; while (ptr < end) { uint32_t cmd = *(uint32_t ) ptr; 
ptr += sizeof(uint32_t);#if TRACE fprintf(stderr,"%s:\n", cmd_name(cmd));#endif switch(cmd) { case BR_NOOP: break; case BR_TRANSACTION_COMPLETE: / check服务 */ break; case BR_INCREFS: case BR_ACQUIRE: case BR_RELEASE: case BR_DECREFS:#if TRACE fprintf(stderr," %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void )));#endif ptr += sizeof(struct binder_ptr_cookie); break; case BR_SPAWN_LOOPER: { / create new thread / //if (fork() == 0) { //} pthread_t thread; struct binder_thread_desc btd; btd.bs = bs; btd.func = func; pthread_create(&thread, NULL, binder_thread_routine, &btd); / in new thread: ioctl(BC_ENTER_LOOPER), enter binder_looper */ break; } case BR_TRANSACTION: { struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr; if ((end - ptr) < sizeof(*txn)) { ALOGE(“parse: txn too small!\n”); return -1; } if (func) { unsigned rdata[256/4]; struct binder_io msg; struct binder_io reply; int res; bio_init(&reply, rdata, sizeof(rdata), 4); ① bio_init_from_txn(&msg, txn); res = func(bs, txn, &msg, &reply); ② binder_send_reply(bs, &reply, txn->data.ptr.buffer, res); ③ } ptr += sizeof(*txn); break; } case BR_REPLY: { struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr; if ((end - ptr) < sizeof(txn)) { ALOGE(“parse: reply too small!\n”); return -1; } binder_dump_txn(txn); if (bio) { bio_init_from_txn(bio, txn); bio = 0; } else { / todo FREE BUFFER */ } ptr += sizeof(*txn); r = 0; break; } case BR_DEAD_BINDER: { struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr; ptr += sizeof(binder_uintptr_t); death->func(bs, death->ptr); break; } case BR_FAILED_REPLY: r = -1; break; case BR_DEAD_REPLY: r = -1; break; default: ALOGE(“parse: OOPS %d\n”, cmd); return -1; } } return r;}①: 按照一定的格式初始化rdata数据,请注意这里rdata是在用户空间创建的buf; ②: 调用设置进来的处理函数svcmgr_handler(详见2.2.5); ③: 发送回复信息;这个函数我们只重点关注下BR_TRANSACTION其他的命令含义可以参考表格A;2.5 svcmgr_handlerint svcmgr_handler(struct binder_state *bs, struct binder_transaction_data *txn, struct binder_io *msg, struct binder_io *reply){ struct svcinfo *si; uint16_t *s; size_t len; uint32_t handle; uint32_t strict_policy; int allow_isolated; //ALOGI(“target=%x code=%d pid=%d uid=%d\n”, // txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid); if (txn->target.handle != svcmgr_handle) return -1; if (txn->code == PING_TRANSACTION) return 0; // Equivalent to Parcel::enforceInterface(), reading the RPC // header with the strict mode policy mask and the interface name. // Note that we ignore the strict_policy and don’t propagate it // further (since we do no outbound RPCs anyway). strict_policy = bio_get_uint32(msg); ① s = bio_get_string16(msg, &len); if (s == NULL) { return -1; } if ((len != (sizeof(svcmgr_id) / 2)) || ② memcmp(svcmgr_id, s, sizeof(svcmgr_id))) { fprintf(stderr,“invalid id %s\n”, str8(s, len)); return -1; } switch(txn->code) { ③ case SVC_MGR_GET_SERVICE: case SVC_MGR_CHECK_SERVICE: s = bio_get_string16(msg, &len); if (s == NULL) { return -1; } handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid); ④ if (!handle) break; bio_put_ref(reply, handle); return 0; case SVC_MGR_ADD_SERVICE: s = bio_get_string16(msg, &len); if (s == NULL) { return -1; } handle = bio_get_ref(msg); allow_isolated = bio_get_uint32(msg) ? 
1 : 0; if (do_add_service(bs, s, len, handle, txn->sender_euid, ⑤ allow_isolated, txn->sender_pid)) return -1; break; case SVC_MGR_LIST_SERVICES: { uint32_t n = bio_get_uint32(msg); if (!svc_can_list(txn->sender_pid)) { ALOGE(“list_service() uid=%d - PERMISSION DENIED\n”, txn->sender_euid); return -1; } si = svclist; while ((n– > 0) && si) ⑥ si = si->next; if (si) { bio_put_string16(reply, si->name); return 0; } return -1; } default: ALOGE(“unknown code %d\n”, txn->code); return -1; } bio_put_uint32(reply, 0); return 0;}①: 获取帧头数据,一般为0,因为发送方发送数据时都会在数据最前方填充4个字节0数据(分配数据空间的最小单位4字节); ②: 对比svcmgr_id是否和我们原来定义相同#define SVC_MGR_NAME “linux.os.ServiceManager”(我改写了); ③: 根据code 做对应的事情,就想到与根据编码去执行对应的fun(client请求服务后去执行服务,service也是根据不同的code来执行。接下来会举例说明);、④: 从服务名在server链表中查找对应的服务,并返回handle(详见2.2.6); ⑤: 添加服务,一般都是service发起的请求。将handle和服务名添加到服务链表中(这里的handle是由binder驱动分配); ⑥: 查找server_manager中链表中第n个服务的名字(该数值由查询端决定);2.6 do_find_serviceuint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid){ struct svcinfo *si; if (!svc_can_find(s, len, spid)) { ① ALOGE(“find_service(’%s’) uid=%d - PERMISSION DENIED\n”, str8(s, len), uid); return 0; } si = find_svc(s, len); ② //ALOGI(“check_service(’%s’) handle = %x\n”, str8(s, len), si ? si->handle : 0); if (si && si->handle) { if (!si->allow_isolated) { ③ // If this service doesn’t allow access from isolated processes, // then check the uid to see if it is isolated. uid_t appid = uid % AID_USER; if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) { return 0; } } return si->handle; ④ } else { return 0; }}①: 检测调用进程是否有权限请求服务(这里用selinux管理权限,为了让代码可以方便允许,这里面的代码有做删减); ②: 遍历server_manager服务链表; ③: 如果binder服务不允许服务从沙箱中访问,则执行下面检查; ④: 返回查询到handle;do_find_service函数主要工作是搜索服务链表,返回查找到的服务2.7 do_add_serviceint do_add_service(struct binder_state *bs, const uint16_t *s, size_t len, uint32_t handle, uid_t uid, int allow_isolated, pid_t spid){ struct svcinfo *si; //ALOGI(“add_service(’%s’,%x,%s) uid=%d\n”, str8(s, len), handle, // allow_isolated ? “allow_isolated” : “!allow_isolated”, uid); if (!handle || (len == 0) || (len > 127)) return -1; if (!svc_can_register(s, len, spid)) { ① ALOGE(“add_service(’%s’,%x) uid=%d - PERMISSION DENIED\n”, str8(s, len), handle, uid); return -1; } si = find_svc(s, len); ② if (si) { if (si->handle) { ALOGE(“add_service(’%s’,%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n”, str8(s, len), handle, uid); svcinfo_death(bs, si); } si->handle = handle; } else { ③ si = malloc(sizeof(si) + (len + 1) * sizeof(uint16_t)); if (!si) { ALOGE(“add_service(’%s’,%x) uid=%d - OUT OF MEMORY\n”, str8(s, len), handle, uid); return -1; } si->handle = handle; si->len = len; memcpy(si->name, s, (len + 1) * sizeof(uint16_t)); si->name[len] = ‘\0’; si->death.func = (void) svcinfo_death; si->death.ptr = si; si->allow_isolated = allow_isolated; si->next = svclist; svclist = si; } ALOGI(“add_service(’%s’), handle = %d\n”, str8(s, len), handle); binder_acquire(bs, handle); ④ binder_link_to_death(bs, handle, &si->death); ⑤ return 0;}①: 判断请求进程是否有权限注册服务; ②: 查找ServiceManager的服务链表中是否已经注册了该服务,如果有则通知驱动杀死原先的binder服务,然后更新最新的binder服务; ③: 如果原来没有创建该binder服务,则进行一系列的赋值,再插入到服务链表的表头; ④: 增加binder服务的引用计数; ⑤: 告诉驱动接收服务的死亡通知;2.8 调用时序图从上面分析,可以知道ServiceManager的主要工作流程如下: 三. 
led_control_server3.1 mainint main(int argc, char **argv) { int fd; struct binder_state bs; uint32_t svcmgr = BINDER_SERVICE_MANAGER; uint32_t handle; int ret; struct register_server led_control[3] = { ① [0] = { .code = 1, .fun = led_on } , [1] = { .code = 2, .fun = led_off } }; bs = binder_open(1281024); ② if (!bs) { ALOGE(“failed to open binder driver\n”); return -1; } ret = svcmgr_publish(bs, svcmgr, LED_CONTROL_SERVER_NAME, led_control); ③ if (ret) { ALOGE(“failed to publish %s service\n”, LED_CONTROL_SERVER_NAME); return -1; } binder_set_maxthreads(bs, 10); ④ binder_loop(bs, led_control_server_handler); ⑤ return 0;}①: led_control_server提供的服务函数; ②: 初始化binder组件( 详见2.2); ③: 注册服务,svcmgr是发送的目标, LED_CONTROL_SERVER_NAME注册的服务名, led_control注册的binder实体; ④: 设置创建线程最大数(详见3.5); ⑤: 进入线程循环(详见2.3);3.2 svcmgr_publishint svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr){ int status; unsigned iodata[512/4]; struct binder_io msg, reply; bio_init(&msg, iodata, sizeof(iodata), 4); ① bio_put_uint32(&msg, 0); // strict mode header bio_put_string16_x(&msg, SVC_MGR_NAME); bio_put_string16_x(&msg, name); bio_put_obj(&msg, ptr); if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE)) ② return -1; status = bio_get_uint32(&reply); ③ binder_done(bs, &msg, &reply); ④ return status;}①: 初始化用户空间的数据iodata,设置了四个字节的offs,接着按一定格式往buf里面填充数据; ②: 调用ServiceManager服务的SVC_MGR_ADD_SERVICE功能; ③: 获取ServiceManager回复数据,成功返回0; ④: 结束注册过程,释放内核中刚才交互分配的buf;3.2.1 bio_initvoid bio_init(struct binder_io *bio, void *data, size_t maxdata, size_t maxoffs){ size_t n = maxoffs * sizeof(size_t); if (n > maxdata) { bio->flags = BIO_F_OVERFLOW; bio->data_avail = 0; bio->offs_avail = 0; return; } bio->data = bio->data0 = (char *) data + n; ① bio->offs = bio->offs0 = data; ② bio->data_avail = maxdata - n; ③ bio->offs_avail = maxoffs; ④ bio->flags = 0; ⑤}①: 根据传进来的参数,留下一定长度的offs数据空间, data指针则从 data + n开始; ②: offs指针则从 data开始,则offs可使用的数据空间只有n个字节; ③: 可使用的data空间计数; ④: 可使用的offs空间计数; ⑤: 清除buf的flag;init后此时buf空间的分配情况如下图:3.2.2 bio_put_uint32void bio_put_uint32(struct binder_io *bio, uint32_t n){ uint32_t *ptr = bio_alloc(bio, sizeof(n)); if (ptr) *ptr = n;}这个函数往buf里面填充一个uint32的数据,这个数据的最小单位为4个字节; 前面svcmgr_publish调用bio_put_uint32(&msg, 0);,实质buf中的数据是00 00 00 00 ;3.2.3 bio_allocstatic void *bio_alloc(struct binder_io *bio, size_t size){ size = (size + 3) & (~3); if (size > bio->data_avail) { bio->flags |= BIO_F_OVERFLOW; return NULL; } else { void *ptr = bio->data; bio->data += size; bio->data_avail -= size; return ptr; }}这个函数分配的数据宽度为4的倍数,先判断当前可使用的数据宽度是否小于待分配的宽度; 如果小于则置标志BIO_F_OVERFLOW否则分配数据,并对data往后偏移size个字节,可使用数据宽度data_avail减去size个字节;3.2.4 bio_put_string16_xvoid bio_put_string16_x(struct binder_io *bio, const char *_str){ unsigned char str = (unsigned char) _str; size_t len; uint16_t ptr; if (!str) { ① bio_put_uint32(bio, 0xffffffff); return; } len = strlen(_str); if (len >= (MAX_BIO_SIZE / sizeof(uint16_t))) { bio_put_uint32(bio, 0xffffffff); return; } / Note: The payload will carry 32bit size instead of size_t */ bio_put_uint32(bio, len); ptr = bio_alloc(bio, (len + 1) * sizeof(uint16_t)); if (!ptr) return; while (*str) ② *ptr++ = *str++; *ptr++ = 0;}①: 这里到bio_alloc前都是为了计算和判断自己串的长度再填充到buf中; ②: 填充字符串到buf中,一个字符占两个字节,注意 uint16_t *ptr;;3.2.5 bio_put_objvoid bio_put_obj(struct binder_io *bio, void *ptr){ struct flat_binder_object obj; obj = bio_alloc_obj(bio); ① if (!obj) return; obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS; obj->type = BINDER_TYPE_BINDER; ② obj->binder = (uintptr_t)ptr; ③ obj->cookie = 0;}struct 
flat_binder_object {/ WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS / __u32 type; __u32 flags; union { binder_uintptr_t binder;/ WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ __u32 handle; }; binder_uintptr_t cookie;};①: 分配一个flat_binder_object大小的空间(详见3.2.6); ②: type的类型为BINDER_TYPE_BINDER时则type传入的是binder实体,一般是服务端注册服务时传入; type的类型为BINDER_TYPE_HANDLE时则type传入的为handle,一般由客户端请求服务时; ③: obj->binder值,跟随type改变;3.2.6 bio_alloc_objstatic struct flat_binder_object *bio_alloc_obj(struct binder_io *bio){ struct flat_binder_object *obj; obj = bio_alloc(bio, sizeof(obj)); ① if (obj && bio->offs_avail) { bio->offs_avail–; bio->offs++ = ((char) obj) - ((char) bio->data0); ② return obj; } bio->flags |= BIO_F_OVERFLOW; return NULL;}①: 在data后分配struct flat_binder_object长度的空间; ②: bio->offs空间记下此时插入obj,相对于data0的偏移值;看到这终于知道offs是干嘛的了,原来是用来记录数据中是否有obj类型的数据;3.2.7 完整数据格式图综上分析,传输一次完整的数据的格式如下:3.3 binder_callint binder_call(struct binder_state *bs, struct binder_io msg, struct binder_io reply, uint32_t target, uint32_t code){ int res; struct binder_write_read bwr; struct { uint32_t cmd; struct binder_transaction_data txn; } attribute((packed)) writebuf; unsigned readbuf[32]; if (msg->flags & BIO_F_OVERFLOW) { fprintf(stderr,“binder: txn buffer overflow\n”); goto fail; } writebuf.cmd = BC_TRANSACTION; // binder call transaction writebuf.txn.target.handle = target; ① writebuf.txn.code = code; ② writebuf.txn.flags = 0; writebuf.txn.data_size = msg->data - msg->data0; ③ writebuf.txn.offsets_size = ((char) msg->offs) - ((char) msg->offs0); writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0; writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0; bwr.write_size = sizeof(writebuf); ④ bwr.write_consumed = 0; bwr.write_buffer = (uintptr_t) &writebuf; for (;;) { bwr.read_size = sizeof(readbuf); bwr.read_consumed = 0; bwr.read_buffer = (uintptr_t) readbuf; res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); ⑤ if (res < 0) { fprintf(stderr,“binder: ioctl failed (%s)\n”, strerror(errno)); goto fail; } res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0); ⑥ if (res == 0) return 0; if (res < 0) goto fail; }fail: memset(reply, 0, sizeof(*reply)); reply->flags |= BIO_F_IOERROR; return -1;}①: 这个target就是我们这次请求服务的目标,即ServiceManager; ②: code是我们请求服务的功能码,由服务端提供; ③: 把binder_io数据转化成binder_transaction_data数据; ④: 驱动进行读写是根据这个size来的,分析驱动的时候再详细分析; ⑤: 进行一次读写; ⑥: 解析发送的后返回的数据,判断是否注册成功;3.4 binder_donevoid binder_done(struct binder_state *bs, struct binder_io *msg, struct binder_io *reply){ struct { uint32_t cmd; uintptr_t buffer; } attribute((packed)) data; if (reply->flags & BIO_F_SHARED) { data.cmd = BC_FREE_BUFFER; data.buffer = (uintptr_t) reply->data0; binder_write(bs, &data, sizeof(data)); reply->flags = 0; }}这个函数比较简单发送BC_FREE_BUFFER命令给驱动,让驱动释放内核态由刚才交互分配的buf;3.5 binder_set_maxthreadsvoid binder_set_maxthreads(struct binder_state *bs, int threads){ ioctl(bs->fd, BINDER_SET_MAX_THREADS, &threads);}这里主要调用ioctl函数写入命令BINDER_SET_MAX_THREADS进行设置最大线程数;3.6 调用时序图led_control_server主要提供led的控制服务,具体的流程如下: 四. 
test_client4.1 mainint main(int argc, char **argv){ struct binder_state bs; uint32_t svcmgr = BINDER_SERVICE_MANAGER; unsigned int g_led_control_handle; if (argc < 3) { ALOGE(“Usage:\n”); ALOGE("%s led <on|off>\n", argv[0]); return -1; } bs = binder_open(1281024); ① if (!bs) { ALOGE(“failed to open binder driver\n”); return -1; } g_led_control_handle = svcmgr_lookup(bs, svcmgr, LED_CONTROL_SERVER_NAME); ② if (!g_led_control_handle) { ALOGE( “failed to get led control service\n”); return -1; } ALOGI(“Handle for led control service = %d\n”, g_led_control_handle); if (!strcmp(argv[1], “led”)) { if (!strcmp(argv[2], “on”)) { if (interface_led_on(bs, g_led_control_handle, 2) == 0) { ③ ALOGI(“led was on\n”); } } else if (!strcmp(argv[2], “off”)) { if (interface_led_off(bs, g_led_control_handle, 2) == 0) { ALOGI(“led was off\n”); } } } binder_release(bs, g_led_control_handle); ④ return 0;}①: 打开binder设备(详见2.2); ②: 根据名字获取led控制服务; ③: 根据获取到的handle,调用led控制服务(详见4.3); ④: 释放服务;client的流程也很简单,按步骤1.2.3.4读下来就是了;4.2 svcmgr_lookupuint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name){ uint32_t handle; unsigned iodata[512/4]; struct binder_io msg, reply; bio_init(&msg, iodata, sizeof(iodata), 4); ① bio_put_uint32(&msg, 0); // strict mode header bio_put_string16_x(&msg, SVC_MGR_NAME); bio_put_string16_x(&msg, name); if (binder_call(bs, &msg, &reply, target, SVC_MGR_GET_SERVICE)) ② return 0; handle = bio_get_ref(&reply); ③ if (handle) binder_acquire(bs, handle); ④ binder_done(bs, &msg, &reply); ⑤ return handle;}①: 因为是请求服务,所以这里不用添加binder实体数据,具体的参考3.2,这里就不重复解释了; ②: 向target进程(ServiceManager)请求获取led_control服务(详细参考3.3); ③: 从ServiceManager返回的数据buf中获取led_control服务的handle; ④: 增加该handle的引用计数; ⑤: 释放内核空间buf(详3.4);4.2.1 bio_get_refuint32_t bio_get_ref(struct binder_io *bio){ struct flat_binder_object *obj; obj = _bio_get_obj(bio); ① if (!obj) return 0; if (obj->type == BINDER_TYPE_HANDLE) ② return obj->handle; return 0;}①: 把bio的数据转化成flat_binder_object格式; ②: 判断binder数据类型是否为引用,是则返回获取到的handle;4.2.2 _bio_get_objstatic struct flat_binder_object *_bio_get_obj(struct binder_io bio){ size_t n; size_t off = bio->data - bio->data0; ① / TODO: be smarter about this? */ for (n = 0; n < bio->offs_avail; n++) { if (bio->offs[n] == off) return bio_get(bio, sizeof(struct flat_binder_object)); ② } bio->data_avail = 0; bio->flags |= BIO_F_OVERFLOW; return NULL;}①: 一般情况下该值都为0,因为在reply时获取ServiceManager传来的数据,bio->data和bio->data都指向同一个地址; ②: 获取到struct flat_binder_object数据的头指针;从ServiceManager传来的数据是struct flat_binder_object的数据,格式如下: 4.3 interface_led_onint interface_led_on(struct binder_state *bs, unsigned int handle, unsigned char led_enum){ unsigned iodata[512/4]; struct binder_io msg, reply; int ret = -1; int exception; bio_init(&msg, iodata, sizeof(iodata), 4); bio_put_uint32(&msg, 0); // strict mode header bio_put_uint32(&msg, led_enum); if (binder_call(bs, &msg, &reply, handle, LED_CONTROL_ON)) return ret; exception = bio_get_uint32(&reply); if (exception == 0) ret = bio_get_uint32(&reply); binder_done(bs, &msg, &reply); return ret;}这个流程和前面svcmgr_lookup的请求服务差不多,只是最后是获取led_control_server的返回值. 
注意这里为什么获取了两次uint32类型的数据,这是因为服务方在回复数据的时候添加了头帧,这个是可以调节的,非规则;4.4 binder_releasevoid binder_release(struct binder_state *bs, uint32_t target){ uint32_t cmd[2]; cmd[0] = BC_RELEASE; cmd[1] = target; binder_write(bs, cmd, sizeof(cmd));}通知驱动层减小对target进程的引用,结合驱动讲解就更能明白了;4.5 调用时序图test_client的调用时序如下,过程和led_control_server的调用过程相识: A: 表BR_含义BR个人理解是缩写为binder reply消息含义参数BR_ERROR发生内部错误(如内存分配失败)—BR_OK BR_NOOP操作完成—BR_SPAWN_LOOPER该消息用于接收方线程池管理。当驱动发现接收方所有线程都处于忙碌状态且线程池里的线程总数没有超过BINDER_SET_MAX_THREADS设置的最大线程数时,向接收方发送该命令要求创建更多线程以备接收数据。—BR_TRANSACTION对应发送方的BC_TRANSACTIONbinder_transaction_dataBR_REPLY对应发送方BC_REPLY的回复binder_transaction_dataBR_ACQUIRE_RESULT BR_FINISHED未使用—BR_DEAD_REPLY交互时向驱动发送binder调用,如果对方已经死亡,则驱动回应此命令—BR_TRANSACTION_COMPLETE发送方通过BC_TRANSACTION或BC_REPLY发送完一个数据包后,都能收到该消息做为成功发送的反馈。这和BR_REPLY不一样,是驱动告知发送方已经发送成功,而不是Server端返回请求数据。所以不管同步还是异步交互接收方都能获得本消息。—BR_INCREFS BR_ACQUIRE BR_RELEASE BR_DECREFS这一组消息用于管理强/弱指针的引用计数。只有提供Binder实体的进程才能收到这组消息。binder_uintptr_t binder:Binder实体在用户空间中的指针 binder_uintptr_t cookie:与该实体相关的附加数据BR_DEAD_BINDER 向获得Binder引用的进程发送Binder实体死亡通知书;收到死亡通知书的进程接下来会返回BC_DEAD_BINDER_DONE做确认。—BR_CLEAR_DEATH_NOTIFICATION_DONE回应命令BC_REQUEST_DEATH_NOTIFICATION—BR_FAILED_REPLY如果发送非法引用号则返回该消息—B: 表BC_含义BC个人理解是缩写为binder call or cmd消息含义参数BC_TRANSACTION BC_REPLYBC_TRANSACTION用于Client向Server发送请求数据;BC_REPLY用于Server向Client发送回复(应答)数据。其后面紧接着一个binder_transaction_data结构体表明要写入的数据。struct binder_transaction_dataBC_ACQUIRE_RESULT BC_ATTEMPT_ACQUIRE未使用—BC_FREE_BUFFER请求驱动释放调刚在内核空间创建用来保存用户空间数据的内存块—BC_INCREFS BC_ACQUIRE BC_RELEASE BC_DECREFS这组命令增加或减少Binder的引用计数,用以实现强指针或弱指针的功能。—BC_INCREFS_DONE BC_ACQUIRE_DONE第一次增加Binder实体引用计数时,驱动向Binder实体所在的进程发送BR_INCREFS, BR_ACQUIRE消息;Binder实体所在的进程处理完毕回馈BC_INCREFS_DONE, BC_ACQUIRE_DONE—BC_REGISTER_LOOPER BC_ENTER_LOOPER BC_EXIT_LOOPER这组命令同BINDER_SET_MAX_THREADS一道实现Binder驱 动对接收方线程池管理。BC_REGISTER_LOOPER通知驱动线程池中一个线程已经创建了;BC_ENTER_LOOPER通知驱动该线程已经进入主循环,可以接收数据;BC_EXIT_LOOPER通知驱动该线程退出主循环,不再接收数据。—BC_REQUEST_DEATH_NOTIFICATION获得Binder引用的进程通过该命令要求驱动在Binder实体销毁得到通知。虽说强指针可以确保只要有引用就不会销毁实体,但这毕竟是个跨进程的引用,谁也无法保证实体由于所在的Server关闭Binder驱动或异常退出而消失,引用者能做的是要求Server在此刻给出通知。—BC_DEAD_BINDER_DONE收到实体死亡通知书的进程在删除引用后用本命令告知驱动。—参考表格参考博客:https://blog.csdn.net/univers… ...

November 20, 2018 · 11 min · jiezi