Reposted from: http://blog.csdn.net/ear5cm/article/details/45093807
A previous article on the fence mechanism in Android HardwareComposer discussed the fences in hwc: hwc ultimately sends each layer's acquireFenceFd into the fb driver, and the fb driver generates a new retireFenceFd and returns it to user space. In this article we look at the fences inside the fb driver and see what the S3CFB_WIN_CONFIG ioctl actually does.
Kernel source: https://android.googlesource.com/kernel/exynos.git
Files used in this article:
exynos/include/linux/sync.h
exynos/drivers/base/sync.c
exynos/include/linux/sw_sync.h
exynos/drivers/base/sw_sync.c
exynos/drivers/video/s3c-fb.c
Before discussing the fences in the fb driver, let's briefly introduce a few basic fence-related data structures:
```c
struct sync_timeline {
    struct kref kref;
    const struct sync_timeline_ops *ops;
    char name[32];
    bool destroyed;

    struct list_head child_list_head;
    spinlock_t child_list_lock;

    struct list_head active_list_head;
    spinlock_t active_list_lock;

    struct list_head sync_timeline_list;
};
```
sync_timeline contains a doubly linked list of sync_pts, threaded through child_list_head.
```c
struct sync_pt {
    struct sync_timeline *parent;
    struct list_head child_list;

    struct list_head active_list;
    struct list_head signaled_list;

    struct sync_fence *fence;
    struct list_head pt_list;

    /* 1: signaled, 0: active, < 0: error */
    int status;

    ktime_t timestamp;
};
```
In sync_pt, the parent pointer points to the sync_timeline this sync_pt belongs to, and child_list records its position in sync_timeline.child_list_head. The fence pointer points to the fence this sync_pt belongs to, and pt_list records its position in fence.pt_list_head.
```c
struct sync_fence {
    struct file *file;
    struct kref kref;
    char name[32];

    struct list_head pt_list_head;
    struct list_head waiter_list_head;
    spinlock_t waiter_list_lock;

    int status;

    wait_queue_head_t wq;

    struct list_head sync_fence_list;
};
```
The file pointer is the file backing the fence; in Linux everything is a file. pt_list_head is a doubly linked list of sync_pts.
The relationship between sync_timeline, sync_pt and sync_fence can be pictured as follows:
![](https://img.laitimes.com/img/__Qf2AjLwojIjJCLyojI0JCLiIXZ05WZD9CX5RXa2Fmcn9CXwczLcVmds92czlGZvwVP9EUTDZ0aRJkSwk0LcxGbpZ2LcBDM08CXlpXazRnbvZ2LcRlMMVDT2EWNvwFdu9mZvwFdOdlT5Z0VaZXUYpVd1kmYr50MZV3YyI2cKJDT29GRjBjUIF2LcRHelR3LcJzLctmch1mclRXY39TNykzNwkTMzEzNxQDM1EDMy8CX0Vmbu4GZzNmLn9Gbi1yZtl2Lc9CX6MHc0RHaiojIsJye.jpg)
The sync_timeline manages all the sync_pts on that timeline and decides when each sync_pt gets signaled. A sync_fence can contain one or more sync_pts; once every sync_pt in a sync_fence has been signaled, the sync_fence itself is signaled.
sync_timeline and sync_pt are a bit like virtual classes: to actually use them you have to "inherit" from them and implement the sync_timeline_ops *ops interface they define. s3c-fb.c uses sw_sync_timeline and sw_sync_pt, defined as follows:
```c
struct sw_sync_timeline {
    struct sync_timeline obj;
    u32 value;
};

struct sw_sync_pt {
    struct sync_pt pt;
    u32 value;
};
```
sw_sync_timeline and sw_sync_pt are very simple; they merely add a u32 value on top of the original sync_timeline and sync_pt. In addition, sw_sync exposes a few new APIs on top of them:
```c
struct sw_sync_timeline *sw_sync_timeline_create(const char *name);
void sw_sync_timeline_inc(struct sw_sync_timeline *obj, u32 inc);
struct sync_pt *sw_sync_pt_create(struct sw_sync_timeline *obj, u32 value);
```
All three APIs above are used in s3c-fb.c, and we will dig into each of them when we reach the corresponding code; a minimal sketch of how they fit together is shown below.
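Before entering the driver, here is a minimal, hypothetical kernel-side sketch (not code from s3c-fb.c; error handling omitted) of how the three sw_sync APIs and sync_fence_create()/sync_fence_put() cooperate:

```c
#include <linux/sync.h>
#include <linux/sw_sync.h>

static void sw_sync_demo(void)
{
    struct sw_sync_timeline *tl;
    struct sync_pt *pt;
    struct sync_fence *fence;

    /* a fresh timeline starts with value == 0 */
    tl = sw_sync_timeline_create("demo");

    /* a point that becomes signaled once the timeline reaches 1 */
    pt = sw_sync_pt_create(tl, 1);

    /* wrap the point in a fence that consumers can wait on */
    fence = sync_fence_create("demo-fence", pt);

    /* ... hand the fence to a consumer, do the producing work ... */

    /* advance the timeline 0 -> 1: the pt, and therefore the fence, signals */
    sw_sync_timeline_inc(tl, 1);

    sync_fence_put(fence);
}
```

s3c-fb.c follows the same pattern, except that the fence is handed to user space as an fd via sync_fence_install().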
Next, let's move into s3c-fb.c and see how sw_sync_timeline, sw_sync_pt and sync_fence are actually used. First, s3c_fb defines a number of members related to fence handling:
```c
struct s3c_fb {
    ...
    struct fb_info *fbinfo;
    struct list_head update_regs_list;
    struct mutex update_regs_list_lock;
    struct kthread_worker update_regs_worker;
    struct task_struct *update_regs_thread;
    struct kthread_work update_regs_work;

    struct sw_sync_timeline *timeline;
    int timeline_max;
    ...
};
```
s3c-fb performs the actual display of a buffer in a separate kthread. The members
struct list_head update_regs_list;
struct mutex update_regs_list_lock;
struct kthread_worker update_regs_worker;
struct task_struct *update_regs_thread;
struct kthread_work update_regs_work;
are all kthread-related structs and are initialized at probe time.
```c
static struct platform_driver s3c_fb_driver = {
    .probe      = s3c_fb_probe,
    .remove     = __devexit_p(s3c_fb_remove),
    .id_table   = s3c_fb_driver_ids,
    .driver     = {
        .name   = "s3c-fb",
        .owner  = THIS_MODULE,
        .pm     = &s3cfb_pm_ops,
    },
};
```
```c
static int __devinit s3c_fb_probe(struct platform_device *pdev)
{
    ...
    INIT_LIST_HEAD(&sfb->update_regs_list);
    mutex_init(&sfb->update_regs_list_lock);
    init_kthread_worker(&sfb->update_regs_worker);

    sfb->update_regs_thread = kthread_run(kthread_worker_fn,
            &sfb->update_regs_worker, "s3c-fb");
    if (IS_ERR(sfb->update_regs_thread)) {
        int err = PTR_ERR(sfb->update_regs_thread);
        sfb->update_regs_thread = NULL;
        dev_err(dev, "failed to run update_regs thread\n");
        return err;
    }
    init_kthread_work(&sfb->update_regs_work, s3c_fb_update_regs_handler);

    sfb->timeline = sw_sync_timeline_create("s3c-fb");
    sfb->timeline_max = 1;
    ...
}
```
As the code above shows, the kthread's work is ultimately done by s3c_fb_update_regs_handler; we won't go into the details of kthreads here. What matters for the fence discussion is that the timeline is created with sw_sync_timeline_create() and that timeline_max is initialized to 1.
```c
struct sw_sync_timeline *sw_sync_timeline_create(const char *name)
{
    struct sw_sync_timeline *obj = (struct sw_sync_timeline *)
        sync_timeline_create(&sw_sync_timeline_ops,
                             sizeof(struct sw_sync_timeline),
                             name);

    return obj;
}
```
```c
struct sync_timeline_ops sw_sync_timeline_ops = {
    .driver_name        = "sw_sync",
    .dup                = sw_sync_pt_dup,
    .has_signaled       = sw_sync_pt_has_signaled,
    .compare            = sw_sync_pt_compare,
    .fill_driver_data   = sw_sync_fill_driver_data,
    .timeline_value_str = sw_sync_timeline_value_str,
    .pt_value_str       = sw_sync_pt_value_str,
};
```
sw_sync_timeline_create passes sw_sync_timeline_ops to sync_timeline_create, the constructor of the "base class" sync_timeline. We'll discuss what each function in sw_sync_timeline_ops does when we run into it.
When hwc pushes all the layers' information into the fb driver through the S3CFB_WIN_CONFIG ioctl (see the earlier article on the fence mechanism in Android HardwareComposer for the details of that path):
```c
static int s3c_fb_ioctl(struct fb_info *info, unsigned int cmd,
            unsigned long arg)
{
    ...
    case S3CFB_WIN_CONFIG:
        if (copy_from_user(&p.win_data,
                   (struct s3c_fb_win_config_data __user *)arg,
                   sizeof(p.win_data))) {
            ret = -EFAULT;
            break;
        }

        ret = s3c_fb_set_win_config(sfb, &p.win_data);
        if (ret)
            break;

        if (copy_to_user((struct s3c_fb_win_config_data __user *)arg,
                 &p.win_data,
                 sizeof(p.user_ion_client))) {
            ret = -EFAULT;
            break;
        }
        break;
    ...
}
```
copy_from_user and copy_to_user both operate on data of type s3c_fb_win_config_data; only the size written back to user space by copy_to_user differs, because the only thing user space is interested in on the way back is the fence fd. From the hwc side, the exchange looks roughly like the sketch below.
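This is a hypothetical user-space sketch; the header name and the exact layout of struct s3c_fb_win_config_data beyond the fence-related fields are assumptions, and only the fence handling mirrors the driver code discussed here:

```c
#include <sys/ioctl.h>
#include "s3c-fb.h"   /* assumed to export S3CFB_WIN_CONFIG and the config structs */

/* Post one frame and return the retire fence fd produced by the driver,
 * or -1 on failure. */
int post_frame(int fb_fd, struct s3c_fb_win_config_data *win_data)
{
    /* each layer's acquire fence goes down in win_data->config[i].fence_fd */
    if (ioctl(fb_fd, S3CFB_WIN_CONFIG, win_data) < 0)
        return -1;

    /* on return, win_data->fence holds the fd that s3c_fb_set_win_config()
     * installed for the retire fence */
    return win_data->fence;
}
```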
```c
static int s3c_fb_set_win_config(struct s3c_fb *sfb,
        struct s3c_fb_win_config_data *win_data)
{
    struct s3c_fb_win_config *win_config = win_data->config;
    int ret = 0;
    unsigned short i;
    struct s3c_reg_data *regs;
    /* this fence is the retireFence that will be returned to user space */
    struct sync_fence *fence;
    struct sync_pt *pt;
    /* this fd will be bound to the fence above */
    int fd;
    unsigned int bw = 0;

    fd = get_unused_fd();
    if (fd < 0)
        return fd;

    mutex_lock(&sfb->output_lock);

    regs = kzalloc(sizeof(struct s3c_reg_data), GFP_KERNEL);
    ...
    ret = s3c_fb_set_win_buffer(sfb, win, config, regs);
    ...
    mutex_lock(&sfb->update_regs_list_lock);

    /* timeline_max was initialized to 1 */
    sfb->timeline_max++;
    /* create a new sw_sync_pt on the sw_sync_timeline, at value timeline_max */
    pt = sw_sync_pt_create(sfb->timeline, sfb->timeline_max);
    /* build a fence from that pt */
    fence = sync_fence_create("display", pt);
    /* install the fence into a file represented by fd; from now on,
     * operations on fd are really operations on the fence */
    sync_fence_install(fence, fd);
    /* assign fd to win_data->fence, which will be written back to user space */
    win_data->fence = fd;

    /* the actual display of the buffer data is handed over to the kthread */
    list_add_tail(&regs->list, &sfb->update_regs_list);
    mutex_unlock(&sfb->update_regs_list_lock);
    queue_kthread_work(&sfb->update_regs_worker,
                       &sfb->update_regs_work);

    mutex_unlock(&sfb->output_lock);
    return ret;
}
```
The key places in the code are annotated. We'll come back to s3c_fb_set_win_buffer and the kthread's work a bit later; first, let's see how the fence that will be returned to user space is created.
```c
struct sync_pt *sw_sync_pt_create(struct sw_sync_timeline *obj, u32 value)
{
    struct sw_sync_pt *pt;

    pt = (struct sw_sync_pt *)
        sync_pt_create(&obj->obj, sizeof(struct sw_sync_pt));

    pt->value = value;

    return (struct sync_pt *)pt;
}
```
```c
struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size)
{
    struct sync_pt *pt;

    if (size < sizeof(struct sync_pt))
        return NULL;

    pt = kzalloc(size, GFP_KERNEL);
    if (pt == NULL)
        return NULL;

    INIT_LIST_HEAD(&pt->active_list);
    kref_get(&parent->kref);
    sync_timeline_add_pt(parent, pt);

    return pt;
}
```
sw_sync_pt_create saves value, i.e. timeline_max, into pt->value and then calls the "base class constructor" sync_pt_create. sync_pt_create allocates the pt and adds it to the timeline's child_list_head. At this point the newly created sw_sync_pt->value == timeline_max. Note that since the pt was allocated with kzalloc, pt->status is 0; as the struct definition earlier shows (1: signaled, 0: active, < 0: error), the pt is therefore in the active state.
```c
struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
{
    struct sync_fence *fence;

    if (pt->fence)
        return NULL;

    /* sync_fence_alloc() allocates a file for the fence via anon_inode_getfile();
     * this file is used later by sync_fence_install() */
    fence = sync_fence_alloc(name);
    if (fence == NULL)
        return NULL;

    /* remember which fence the pt belongs to */
    pt->fence = fence;
    /* add the pt to the fence's pt_list_head list */
    list_add(&pt->pt_list, &fence->pt_list_head);
    /* add the pt to the timeline's active_list_head list */
    sync_pt_activate(pt);

    /* immediately check whether the pt is already signaled; if all pts in
     * the fence are signaled, the fence itself must be signaled */
    sync_fence_signal_pt(pt);

    return fence;
}
```
There are two key points here, sync_pt_activate and sync_fence_signal_pt. Let's take them one at a time, starting with sync_pt_activate.
```c
static void sync_pt_activate(struct sync_pt *pt)
{
    struct sync_timeline *obj = pt->parent;
    unsigned long flags;
    int err;

    spin_lock_irqsave(&obj->active_list_lock, flags);

    err = _sync_pt_has_signaled(pt);
    if (err != 0)
        goto out;

    list_add_tail(&pt->active_list, &obj->active_list_head);

out:
    spin_unlock_irqrestore(&obj->active_list_lock, flags);
}
```
Under the spinlock it calls _sync_pt_has_signaled; if the result is 0, i.e. the pt's status is still active, the pt is added to the timeline's active_list_head. Let's see how _sync_pt_has_signaled determines the pt's status.
```c
static int _sync_pt_has_signaled(struct sync_pt *pt)
{
    int old_status = pt->status;

    /* this calls parent->ops->has_signaled, i.e. sw_sync_timeline->ops->has_signaled.
     * Recall that sw_sync_timeline_create() passed in sw_sync_timeline_ops,
     * where .has_signaled = sw_sync_pt_has_signaled */
    if (!pt->status)
        pt->status = pt->parent->ops->has_signaled(pt);

    if (!pt->status && pt->parent->destroyed)
        pt->status = -ENOENT;

    if (pt->status != old_status)
        pt->timestamp = ktime_get();

    return pt->status;
}
```
```c
static int sw_sync_pt_has_signaled(struct sync_pt *sync_pt)
{
    struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
    struct sw_sync_timeline *obj =
        (struct sw_sync_timeline *)sync_pt->parent;

    return sw_sync_cmp(obj->value, pt->value) >= 0;
}
```
```c
static int sw_sync_cmp(u32 a, u32 b)
{
    if (a == b)
        return 0;

    return ((s32)a - (s32)b) < 0 ? -1 : 1;
}
```
sw_sync_cmp compares the sw_sync_timeline's value with the sync_pt's value. As analyzed earlier, the sw_sync_timeline is created with value 0 and timeline_max is initialized to 1; timeline_max++ runs before sw_sync_pt_create, so at this point timeline_max is 2, i.e. the sw_sync_pt's value is 2. Note that sw_sync_pt_has_signaled does not return sw_sync_cmp's result directly, but the result of comparing it with 0: return sw_sync_cmp(obj->value, pt->value) >= 0;
If the two values are equal, sw_sync_cmp returns 0 and sw_sync_pt_has_signaled returns 1.
If the timeline's value is greater than the pt's value, sw_sync_cmp returns 1 and sw_sync_pt_has_signaled returns 1.
If the timeline's value is less than the pt's value, sw_sync_cmp returns -1 and sw_sync_pt_has_signaled returns 0.
Since our timeline->value == 0 and pt->value == 2, sw_sync_pt_has_signaled returns 0 here: the pt is active, not signaled, so sync_pt_activate does add it to the timeline's active_list_head.
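As an aside, the signed (s32) subtraction is what keeps this comparison meaningful once the u32 counter wraps around. Here is a small user-space re-statement of the same comparison (hand-worked examples, not taken from the source):

```c
#include <stdio.h>
#include <stdint.h>

/* same comparison as sw_sync_cmp() above, restated with user-space types */
static int cmp(uint32_t a, uint32_t b)
{
    if (a == b)
        return 0;
    return ((int32_t)a - (int32_t)b) < 0 ? -1 : 1;
}

int main(void)
{
    printf("%d\n", cmp(2, 2));            /*  0: equal                    */
    printf("%d\n", cmp(3, 2));            /*  1: timeline ahead of the pt */
    printf("%d\n", cmp(0, 2));            /* -1: timeline behind the pt   */
    /* thanks to the signed subtraction, 5 is still "ahead" of 0xFFFFFFFB
     * even though the u32 counter has wrapped around in between */
    printf("%d\n", cmp(5, 0xFFFFFFFBu));  /*  1 */
    return 0;
}
```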
Having dealt with the first key point, sync_pt_activate, we now move on to the second one, sync_fence_signal_pt.
```c
static void sync_fence_signal_pt(struct sync_pt *pt)
{
    LIST_HEAD(signaled_waiters);
    struct sync_fence *fence = pt->fence;
    struct list_head *pos;
    struct list_head *n;
    unsigned long flags;
    int status;

    /* this function deserves a closer look, see below */
    status = sync_fence_get_status(fence);

    spin_lock_irqsave(&fence->waiter_list_lock, flags);

    /* if status is non-zero, i.e. the fence is signaled or in error, then
     * under the spinlock move the sync_fence_waiters from the fence's
     * waiter_list_head onto the local signaled_waiters list */
    if (status && !fence->status) {
        list_for_each_safe(pos, n, &fence->waiter_list_head)
            list_move(pos, &signaled_waiters);

        /* update the fence's status */
        fence->status = status;
    } else {
        status = 0;
    }

    spin_unlock_irqrestore(&fence->waiter_list_lock, flags);

    if (status) {
        /* walk signaled_waiters, remove each waiter from the list and
         * invoke its callback */
        list_for_each_safe(pos, n, &signaled_waiters) {
            struct sync_fence_waiter *waiter =
                container_of(pos, struct sync_fence_waiter,
                             waiter_list);

            list_del(pos);
            waiter->callback(fence, waiter);
        }
        /* wake up anyone sleeping on the fence's wait_queue_head_t */
        wake_up(&fence->wq);
    }
}
```
The rest is explained by the comments in the code; let's look at sync_fence_get_status.
```c
static int sync_fence_get_status(struct sync_fence *fence)
{
    struct list_head *pos;
    /* default to signaled */
    int status = 1;

    list_for_each(pos, &fence->pt_list_head) {
        struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
        int pt_status = pt->status;

        if (pt_status < 0) {
            /* if any pt is in error, the whole fence is in error */
            status = pt_status;
            break;
        } else if (status == 1) {
            /* if a pt is still active, it overrides the default; keep
             * walking until we hit an error or the end of the list */
            status = pt_status;
        }
    }

    return status;
}
```
Since our pt is currently active and it is the only pt in the fence, the fence is active as well. That finishes sync_fence_create; next, let's look at sync_fence_install.
```c
void sync_fence_install(struct sync_fence *fence, int fd)
{
    fd_install(fd, fence->file);
}
```
It simply associates the fence's file with the fd via fd_install.
At this point we have created a sync_pt, added it to the timeline, and built a sync_fence from that pt; the newest pt on the timeline has pt->value = timeline->value + 2. The fb driver then writes the fence's fd back to user space, and the retireFence story pauses here; when the retireFence actually gets signaled will become clear once we analyze how the kthread processes the buffer data. A sketch of how user space can wait on the returned fd follows.
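From user space, the returned fd behaves like any other sync fence fd: it becomes readable once the fence signals, so the waiter can simply poll() it (libsync's sync_wait() is built around the same idea). A minimal, hypothetical sketch:

```c
#include <poll.h>
#include <unistd.h>

/* Block until the retire fence fd returned by S3CFB_WIN_CONFIG signals.
 * Returns 0 on signal, -1 on error or timeout. */
int wait_retire_fence(int fence_fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };
    int ret = poll(&pfd, 1, timeout_ms);

    if (ret > 0 && (pfd.revents & POLLIN)) {
        close(fence_fd);   /* done with the fence; drop our reference */
        return 0;
    }
    return -1;
}
```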
Earlier, while analyzing s3c_fb_set_win_config, we noted that "s3c_fb_set_win_buffer and the kthread's work will be expanded later"; let's now look at s3c_fb_set_win_buffer.
```c
static int s3c_fb_set_win_config(struct s3c_fb *sfb,
        struct s3c_fb_win_config_data *win_data)
{
    ...
    ret = s3c_fb_set_win_buffer(sfb, win, config, regs);
    ...
}
```
```c
static int s3c_fb_set_win_buffer(struct s3c_fb *sfb, struct s3c_fb_win *win,
        struct s3c_fb_win_config *win_config, struct s3c_reg_data *regs)
{
    struct ion_handle *handle;
    struct fb_var_screeninfo prev_var = win->fbinfo->var;
    struct s3c_dma_buf_data dma_buf_data;

    if (win_config->fence_fd >= 0) {
        /* if the fence_fd passed down from hwc is >= 0, look up the
         * sync_fence it refers to via sync_fence_fdget() */
        dma_buf_data.fence = sync_fence_fdget(win_config->fence_fd);
        if (!dma_buf_data.fence) {
            dev_err(sfb->dev, "failed to import fence fd\n");
            ret = -EINVAL;
            goto err_offset;
        }
    }
    /* dma_buf_data, together with the fence we just looked up, is saved in regs */
    regs->dma_buf_data[win_no] = dma_buf_data;

    return 0;
}
```
```c
struct sync_fence *sync_fence_fdget(int fd)
{
    struct file *file = fget(fd);

    if (file == NULL)
        return NULL;

    if (file->f_op != &sync_fence_fops)
        goto err;

    return file->private_data;

err:
    fput(file);
    return NULL;
}
```
regs is saved on update_regs_list and eventually processed by the kthread in s3c_fb_update_regs_handler; regs->dma_buf_data[i].fence is exactly the acquireFence from hwc.
```c
static void s3c_fb_update_regs_handler(struct kthread_work *work)
{
    struct s3c_fb *sfb =
        container_of(work, struct s3c_fb, update_regs_work);
    struct s3c_reg_data *data, *next;
    struct list_head saved_list;

    mutex_lock(&sfb->update_regs_list_lock);
    saved_list = sfb->update_regs_list;
    list_replace_init(&sfb->update_regs_list, &saved_list);
    mutex_unlock(&sfb->update_regs_list_lock);

    list_for_each_entry_safe(data, next, &saved_list, list) {
        /* each regs entry is removed from the list once it has been processed */
        s3c_fb_update_regs(sfb, data);
        list_del(&data->list);
        kfree(data);
    }
}
```
```c
static void s3c_fb_update_regs(struct s3c_fb *sfb, struct s3c_reg_data *regs)
{
    for (i = 0; i < sfb->variant.nr_windows; i++) {
        old_dma_bufs[i] = sfb->windows[i]->dma_buf_data;

        /* wait here for the acquireFence to be signaled, see below */
        if (regs->dma_buf_data[i].fence)
            s3c_fd_fence_wait(sfb, regs->dma_buf_data[i].fence);
    }

    /* the actual display programming, not expanded here */
    __s3c_fb_update_regs(sfb, regs);

    /* this is the important part, also covered below */
    sw_sync_timeline_inc(sfb->timeline, 1);

    /* free the buffers allocated in the previous cycle; this ends up calling
     * sync_fence_put(dma->fence), which is also worth a look */
    for (i = 0; i < sfb->variant.nr_windows; i++)
        s3c_fb_free_dma_buf(sfb, &old_dma_bufs[i]);
}
```
The comments mark three places worth looking into:
1. s3c_fd_fence_wait
2. sw_sync_timeline_inc
3. sync_fence_put
Let's start with 1, s3c_fd_fence_wait.
```c
static void s3c_fd_fence_wait(struct s3c_fb *sfb, struct sync_fence *fence)
{
    int err = sync_fence_wait(fence, 1000);

    if (err >= 0)
        return;

    if (err == -ETIME)
        err = sync_fence_wait(fence, 10 * MSEC_PER_SEC);

    if (err < 0)
        dev_warn(sfb->dev, "error waiting on fence: %d\n", err);
}
```
sync_fence_wait is called twice, only with different timeouts. Note that even if the wait ultimately fails, processing of the buffer data still continues; the driver merely prints a warning.
```c
int sync_fence_wait(struct sync_fence *fence, long timeout)
{
    int err = 0;

    if (timeout > 0) {
        timeout = msecs_to_jiffies(timeout);
        /* sleep until sync_fence_check() becomes true or the timeout expires */
        err = wait_event_interruptible_timeout(fence->wq,
                                               sync_fence_check(fence),
                                               timeout);
    } else if (timeout < 0) {
        err = wait_event_interruptible(fence->wq,
                                       sync_fence_check(fence));
    }
    ...
    return 0;
}
```
```c
static bool sync_fence_check(struct sync_fence *fence)
{
    /* the read of fence->status is ordered after the read memory barrier */
    smp_rmb();
    return fence->status != 0;
}
```
So s3c_fd_fence_wait is simply waiting for fence->status to become non-zero: 1 means signaled, -1 means error.
Next we look at 2, sw_sync_timeline_inc.
```c
void sw_sync_timeline_inc(struct sw_sync_timeline *obj, u32 inc)
{
    obj->value += inc;

    sync_timeline_signal(&obj->obj);
}
```
The sw_sync_timeline's value grows by inc, which in our case is 1. On the first pass the timeline contains a single pt whose value is 2, and the timeline's value becomes 1; sync_timeline_signal is then called. After n cycles, the pt with the largest value has value n+1 and the timeline's value is n.
```c
void sync_timeline_signal(struct sync_timeline *obj)
{
    unsigned long flags;
    LIST_HEAD(signaled_pts);
    struct list_head *pos, *n;

    spin_lock_irqsave(&obj->active_list_lock, flags);

    /* under the spinlock, move every pt that is now signaled off
     * active_list_head and onto the local signaled_pts list */
    list_for_each_safe(pos, n, &obj->active_list_head) {
        struct sync_pt *pt =
            container_of(pos, struct sync_pt, active_list);

        if (_sync_pt_has_signaled(pt)) {
            list_del_init(pos);
            list_add(&pt->signaled_list, &signaled_pts);
            kref_get(&pt->fence->kref);
        }
    }

    spin_unlock_irqrestore(&obj->active_list_lock, flags);

    list_for_each_safe(pos, n, &signaled_pts) {
        struct sync_pt *pt =
            container_of(pos, struct sync_pt, signaled_list);

        list_del_init(pos);
        /* for every signaled pt, call sync_fence_signal_pt() once to decide
         * whether the fence it belongs to should be signaled as well */
        sync_fence_signal_pt(pt);
        kref_put(&pt->fence->kref, sync_fence_free);
    }
}
```
_sync_pt_has_signaled was already analyzed: in the n-th cycle the timeline's value reaches n, so the pt whose value is n, i.e. the one created in the previous cycle, is signaled. Concretely: cycle 1 creates a pt with value 2 and bumps the timeline to 1 (nothing signals yet); cycle 2 creates a pt with value 3 and bumps the timeline to 2, which signals the value-2 pt and hence the retireFence returned for the first frame. In other words, a frame's retireFence signals once the next frame has been displayed.
With sw_sync_timeline_inc done, we look at 3, sync_fence_put.
```c
void sync_fence_put(struct sync_fence *fence)
{
    fput(fence->file);
}
```
Just one line, as simple as it gets!
That completes the analysis of the fence mechanism in the fb driver. To summarize:
1. The fb driver creates a sw_sync_timeline. Every pt on the timeline carries a value, and comparing timeline->value with pt->value decides which pts need to be signaled.
2. Every time a fence is needed, a sw_sync_pt is first created on the timeline; its value increases monotonically. A sync_fence is then built from that sw_sync_pt.
3. When sw_sync_timeline_inc(struct sw_sync_timeline *obj, u32 inc) is called, timeline->value grows by inc; the code immediately checks which pts on the timeline are now signaled, and then whether the fences those pts belong to should be signaled as well.
4. When a fence is no longer needed, it is released with sync_fence_put.
In addition, sw_sync can also be driven from user space by opening a device node and issuing ioctls.
If our hwc has no ready-made ioctl to use and the driver code cannot be changed, hwc can open the /dev/sw_sync device and monitor and control fences through a series of ioctls, as sketched below.
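Here is a hedged sketch of that path, assuming the kernel was built with the sw_sync debug device enabled and that <linux/sw_sync.h> exports SW_SYNC_IOC_CREATE_FENCE and SW_SYNC_IOC_INC (as the sw_sync.h in this tree does); each open of /dev/sw_sync gives the caller its own private sw_sync_timeline:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/sw_sync.h>   /* SW_SYNC_IOC_CREATE_FENCE, SW_SYNC_IOC_INC */

int main(void)
{
    /* the open fd acts as the timeline */
    int timeline_fd = open("/dev/sw_sync", O_RDWR);

    /* create a fence that signals once the timeline reaches 1 */
    struct sw_sync_create_fence_data data;
    memset(&data, 0, sizeof(data));
    data.value = 1;
    strncpy(data.name, "hwc-fence", sizeof(data.name) - 1);
    ioctl(timeline_fd, SW_SYNC_IOC_CREATE_FENCE, &data);

    /* ... hand data.fence to whoever needs to wait on it ... */

    /* advance the timeline by 1: the fence created above signals */
    __u32 inc = 1;
    ioctl(timeline_fd, SW_SYNC_IOC_INC, &inc);

    close(data.fence);
    close(timeline_fd);
    return 0;
}
```

This is a debugging facility, so it may well be disabled on production kernels; treat it as a fallback for experimentation rather than a guaranteed interface.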