

part-aligned series papers: 1711.AlignedReID: Surpassing Human-Level Performance in Person Re-Id (reading notes, source code available)

Original post: https://blog.csdn.net/xuluohongshang/article/details/79036440 (by xuluohongshang)

<span class="tags-box artic-tag-box">
							<span class="label">标签:</span>
															<a data-track-click="{&quot;mod&quot;:&quot;popu_626&quot;,&quot;con&quot;:&quot;partAlign&quot;}" class="tag-link" href="http://so.csdn.net/so/search/s.do?q=partAlign&amp;t=blog" target="_blank" rel="external nofollow"  target="_blank">partAlign																</a><a data-track-click="{&quot;mod&quot;:&quot;popu_626&quot;,&quot;con&quot;:&quot;AlignReID&quot;}" class="tag-link" href="http://so.csdn.net/so/search/s.do?q=AlignReID&amp;t=blog" target="_blank" rel="external nofollow"  target="_blank">AlignReID																</a><a data-track-click="{&quot;mod&quot;:&quot;popu_626&quot;,&quot;con&quot;:&quot;行人重识别&quot;}" class="tag-link" href="http://so.csdn.net/so/search/s.do?q=行人重识别&amp;t=blog" target="_blank" rel="external nofollow"  target="_blank">行人重识别																</a><a data-track-click="{&quot;mod&quot;:&quot;popu_626&quot;,&quot;con&quot;:&quot;论文笔记&quot;}" class="tag-link" href="http://so.csdn.net/so/search/s.do?q=论文笔记&amp;t=blog" target="_blank" rel="external nofollow"  target="_blank">论文笔记																</a>
						<span class="article_info_click">更多</span></span>
																				<div class="tags-box space">
							<span class="label">个人分类:</span>
															<a class="tag-link" href="https://blog.csdn.net/xuluohongshang/article/category/7001410" target="_blank" rel="external nofollow"  target="_blank">行人重识别																</a>
						</div>
																							</div>
			<div class="operating">
													</div>
		</div>
	</div>
</div>
<article class="baidu_pl">
	<div id="article_content" class="article_content clearfix csdn-tracking-statistics" data-pid="blog" data-mod="popu_307" data-dsm="post">
							<div class="article-copyright">
				版权声明:本文为博主原创文章,未经博主允许不得转载。转载请保留出处					https://blog.csdn.net/xuluohongshang/article/details/79036440				</div>
							            <div id="content_views" class="markdown_views prism-atom-one-dark">
						<!-- flowchart 箭头图标 勿删 -->
						<svg xmlns="http://www.w3.org/2000/svg" style="display: none;"><path stroke-linecap="round" d="M5,0 0,2.5 5,5z" id="raphael-marker-block" style="-webkit-tap-highlight-color: rgba(0, 0, 0, 0);"></path></svg>
**AlignedReID: Surpassing Human-Level Performance in Person Re-ID**

Paper information:

[Figure: paper information]

In this paper, we propose a new method called AlignedReID, which extracts a global feature that is jointly learned with local features. Global feature learning benefits greatly from local feature learning, which performs alignment/matching by computing the shortest path between two sets of local features, without requiring extra supervision. After the joint learning, we keep only the global feature to compute the similarity between images. Our method achieves rank-1 accuracy of 94.0% on Market1501 and 96.1% on CUHK03, outperforming state-of-the-art methods by a large margin. We also evaluate human-level performance and demonstrate that our method is the first to surpass it on Market1501 and CUHK03, two widely used Person ReID datasets.

The method achieves striking performance on four datasets: Market1501, CUHK03, MARS, and CUHK-SYSU!

Resnet50 and Resnet50-Xception serve as the base models.

A community reproduction of the code:

https://github.com/huanghoujing/AlignedReID-Re-Production-Pytorch

Paper analysis:

The authors propose the AlignedReID network, shown below:

[Figure: the AlignedReID network architecture]

Training input: N images (one batch; the authors use N = 128, with four images per ID), from which two N×N ID distance matrices are formed.
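To make the batch structure concrete, here is a minimal sketch of the implied P×K sampling (P = 32 identities × K = 4 images gives N = 128); the function name and the `index_by_id` helper are illustrative, not taken from the paper's code:

```python
import random

def pk_batch(index_by_id, P=32, K=4):
    """Sample P identities with K images each, so the batch size N = P * K.

    index_by_id: dict mapping a person ID to the list of its image indices
    (a hypothetical helper; the reproduction repo organizes this differently).
    """
    ids = random.sample(list(index_by_id), P)
    batch = []
    for pid in ids:
        imgs = index_by_id[pid]
        # Fall back to sampling with replacement if an ID has fewer than K images.
        batch += random.sample(imgs, K) if len(imgs) >= K else random.choices(imgs, k=K)
    return batch
```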

The figure shows the feature-extraction process for the N input images. After global feature extraction, pairwise L2 distances yield an N×N global distance matrix. The upper branch produces, for each of the N images, 7 horizontal stripes, each represented by a 128-d feature vector. Dynamic programming then matches stripes from top to bottom to find the minimum total distance under part alignment; computing this for every pair of samples yields another N×N matrix. This minimum total distance is defined as the shortest path between the two images, i.e. the accumulated distance from (1,1) to (7,7) as in the figure below. Intuitively, a pair of similar images has a shorter minimum total distance.

[Figure: the shortest path from (1,1) to (7,7) in the stripe-distance matrix]
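A minimal sketch of this dynamic program, assuming 7 stripe features per image and the exponentially normalized L2 element distance discussed just below (function and variable names are mine, not the authors'):

```python
import torch

def local_distance(x, y):
    """Shortest-path local distance between two images' stripe features.

    x: (m, d) and y: (n, d) per-stripe local features (m = n = 7 here).
    The path moves only right/down from stripe pair (1,1) to (m,n),
    accumulating the per-pair distance along the way.
    """
    l2 = torch.cdist(x, y)                               # (m, n) pairwise L2 distances
    d = (torch.exp(l2) - 1.0) / (torch.exp(l2) + 1.0)    # normalize to [0, 1)

    m, n = d.shape
    dp = [[None] * n for _ in range(m)]                  # dp[i][j]: shortest path to (i, j)
    for i in range(m):
        for j in range(n):
            best = None
            if i > 0 and j > 0:
                best = torch.min(dp[i - 1][j], dp[i][j - 1])
            elif i > 0:
                best = dp[i - 1][j]
            elif j > 0:
                best = dp[i][j - 1]
            dp[i][j] = d[i, j] if best is None else best + d[i, j]
    return dp[-1][-1]                                    # total distance of the shortest path
```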

Selecting suitable samples for the training model through hard mining has been shown to be effective.

Both matrices are then used in the triplet hard loss (the TriHard loss from "In Defense of the Triplet Loss for Person Re-Identification"), so that the global and local features are trained jointly. The global distance matrix alone is also used to select hard triplets, i.e. hard sample mining according to global distances (using only the global distance rather than both is mainly for efficiency, and the authors found no significant difference when using both).
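A sketch of batch-hard mining on a precomputed N×N distance matrix, following the batch-hard formulation of that paper; the margin value here is an assumption, not the paper's reported setting:

```python
import torch

def trihard_loss(dist, labels, margin=0.3):
    """Batch-hard triplet loss over an N x N distance matrix.

    For each anchor, take the farthest positive (same ID) and the closest
    negative (different ID); dist can be the global or the local matrix.
    """
    n = dist.size(0)
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)            # (N, N) bool
    pos_mask = same_id & ~torch.eye(n, dtype=torch.bool, device=dist.device)

    d_ap = dist.masked_fill(~pos_mask, float('-inf')).max(dim=1).values  # hardest positive
    d_an = dist.masked_fill(same_id, float('inf')).min(dim=1).values     # hardest negative
    return torch.relu(d_ap - d_an + margin).mean()
```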

Note that when computing the shortest total distance between two pedestrians of the same ID, the authors compute the part-to-part distance using an exponential form of the L2 distance, so that a non-corresponding alignment has a large L2 distance and its gradient is close to zero (a nice design choice!).

That is, the local distance between two images is mostly determined by the corresponding alignments.
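The vanishing-gradient claim is easy to check numerically. Below, the element-wise distance is assumed to take the form d = (e^x - 1)/(e^x + 1), my reading of the paper's exponential normalization of the stripe-pair L2 distance x:

```python
import torch

# Exponentially normalized L2 distance: bounded in [0, 1) and saturating,
# so far-apart (non-corresponding) stripe pairs contribute almost no gradient.
x = torch.tensor([0.5, 2.0, 8.0], requires_grad=True)  # L2 distances of stripe pairs
d = (torch.exp(x) - 1.0) / (torch.exp(x) + 1.0)
d.sum().backward()

print(d)       # tensor([0.2449, 0.7616, 0.9993], grad_fn=...)
print(x.grad)  # approximately [0.4700, 0.2100, 0.0007]: vanishing for large x
```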

Metric learning: besides the triplet hard loss, the authors adopt a mutual-learning loss to further boost performance (see the paper "Deep Mutual Learning"). Since combining a softmax loss with a metric-learning loss is a popular way to speed up convergence, the authors also integrate a classification loss with the metric loss. The overall metric-learning framework is shown below:

[Figure: the overall framework combining metric and classification losses with mutual learning]

The overall loss consists of the metric loss, the metric mutual loss, the classification loss, and the classification mutual loss. As in the sample-mining framework above, the metric loss is computed from both the global and local distances, while the metric mutual loss depends only on the global distance. Note that the classification mutual loss, both in the referenced mutual-learning paper and here, is a KL-divergence loss.
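A sketch of the two mutual losses under these definitions; the KL form of the classification mutual loss follows Deep Mutual Learning, while the squared-difference form of the metric mutual loss is my assumption:

```python
import torch
import torch.nn.functional as F

def classification_mutual_loss(logits_a, logits_b):
    """Symmetric KL mutual loss between two peer models' class posteriors."""
    log_p_a = F.log_softmax(logits_a, dim=1)
    log_p_b = F.log_softmax(logits_b, dim=1)
    # Each model is pulled toward the other's (detached) predictions.
    kl_a = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction='batchmean')
    kl_b = F.kl_div(log_p_b, log_p_a.exp().detach(), reduction='batchmean')
    return kl_a + kl_b

def metric_mutual_loss(global_dist_a, global_dist_b):
    """Pull the two models' global N x N distance matrices together
    (this loss depends only on global distances, as noted above)."""
    return ((global_dist_a - global_dist_b) ** 2).mean()
```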

Mutual learning: the referenced work presents a deep mutual learning strategy where an ensemble of students learns collaboratively and teaches each other throughout the training process.

A good model is usually obtained via transfer learning: pre-train a model, then fine-tune it into your own. This paper instead trains several models at the same time and lets them learn from each other.

Alignment learning: at ResNet pool5 (the 7×7 feature map), the network splits into two paths. The upper alignment branch partitions the feature map into uniform horizontal parts and applies dynamic programming to form the N×N local distance matrix (one distance per pair of images over their horizontal stripes). The local feature distance is the shortest-path distance computed by dynamic programming, and the aligned local features are recovered from that shortest path.

During training, feature learning proceeds in two branches: the lower branch learns the global feature, while the upper branch extracts a feature from each part after uniform horizontal partition of the feature map and learns the alignment. The upper branch requires no extra annotation.
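A minimal sketch of the two branches on top of the ResNet50 pool5 feature map (7×7×2048); horizontal mean pooling and the 1×1 conv reducing each stripe to 128-d are assumptions consistent with the text above, not a copy of the authors' code:

```python
import torch
import torch.nn as nn

class AlignedHead(nn.Module):
    """Global branch (lower) + part-aligned local branch (upper)."""
    def __init__(self, in_channels=2048, local_dim=128):
        super().__init__()
        self.local_conv = nn.Conv2d(in_channels, local_dim, kernel_size=1)

    def forward(self, fmap):                        # fmap: (B, 2048, 7, 7)
        global_feat = fmap.mean(dim=(2, 3))         # global average pool -> (B, 2048)
        stripes = fmap.mean(dim=3, keepdim=True)    # pool each horizontal stripe -> (B, 2048, 7, 1)
        local_feat = self.local_conv(stripes).squeeze(3)  # 7 stripes of 128-d -> (B, 128, 7)
        return global_feat, local_feat
```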

At test time, the upper branch is removed and only the global feature is extracted. Having been learned jointly with the local features, the global feature performs an implicit part alignment and offers better ID discrimination and matching. It is more robust to part misalignment at the same image position caused by occlusion, under-detection (e.g. a box missing the lower body), and over-detection (e.g. one probe where a large-scale pedestrian fills almost the entire crop while the other detection is small-scale and contains a lot of background), and it also provides a boost in distinguishing different IDs with similar appearance!

So research on part alignment is well worth pursuing!!

Experiments: the authors also apply re-ranking to further boost performance.

The experiments use Resnet50 and Resnet50-Xception (Resnet-X), pre-trained on ImageNet [28], as the base models.

With dynamic alignment and co-training, followed by re-ranking, the rank-1 accuracy reaches 94.0% on Market1501 and 96.1% on CUHK03, the two most widely used ReID test sets. As far as is known, this is the first time a machine has surpassed human expert performance on person re-identification, setting an industry record.

Key points (approximate rank-1 contributions):

1) alignment (8%)

2) mutual learning (3%)

3) classification loss and hard triplet loss used together

4) re-ranking (5~6%)

(1) Some typical results of the alignment:

[Figure: typical alignment results]

(2) Comparing AlignedReID with a baseline without the local feature branch:

The local feature branch helps the network focus on useful image regions and discriminate between similar person images with subtle differences.

[Figure: AlignedReID vs. the baseline without the local feature branch]

However, the authors observe a phenomenon that is not easy to explain:

if we apply the local distance together with the global distance in the inference stage, rank-1 accuracy further improves approximately 0.3% ∼ 0.5%. However, it is time consuming and not practical when searching in a large gallery. Hence, we recommend using the global feature only.
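For completeness, the combined inference distance amounts to something like the following (the weighting is an assumption); the extra shortest-path computation per query-gallery pair is what makes it impractical at scale:

```python
def combined_distance(global_dist, local_dist, weight=1.0):
    """Global + shortest-path local distance for ranking a gallery.

    local_dist requires running the dynamic program for every pair, which
    is why the authors recommend the global feature alone in practice.
    """
    return global_dist + weight * local_dist
```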

(3) Analysis of mutual learning:

[Table: analysis of mutual learning]

(4) Comparison with other methods on each dataset:

Here, RK indicates that re-ranking was applied.

[Tables: comparison with state-of-the-art methods on the benchmark datasets]

(5) Comparison with human performance:

[Table: human performance vs. AlignedReID]

(6) The authors' conjectures on why human accuracy falls below AlignedReID:

First, the annotator usually summarizes some attributes, such as gender and age, to decide whether the images contain the same person. However, the summarized attributes might be incorrect.

Second, color bias exists between cameras, and it can make the same person look different in the query and ground-truth images, as in (c).

Last, different camera angles and human poses might mislead the judgement of body shapes.

As the figure shows, these are mistakes that humans tend to make more readily than the model:

[Figure: typical cases where humans err more often than AlignedReID]

Summary:

1. An implicit alignment of local features can substantially improve global feature learning.

2. End-to-end learning with a structure prior is more powerful than "blind" end-to-end learning.
