Separable Convolutions

First, a quick look at nn.Conv2d():

Input: (N, C_in, H_in, W_in)

Output: (N, C_out, H_out, W_out)

dilation: the dilation rate of an atrous (dilated) convolution, controlling the spacing between kernel elements

groups: controls the connections between input and output channels (grouped convolution); both in_channels and out_channels must be divisible by groups.

Different values of groups distinguish ordinary, grouped, and depthwise convolution:

  • groups=1: ordinary convolution
  • 1 < groups < in_channels: grouped convolution
  • groups=in_channels: depthwise convolution, where each channel has its own set of filters; each set contains out_channels/in_channels filters
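The effect of groups can be checked directly: nn.Conv2d stores its weight with shape (out_channels, in_channels // groups, kH, kW), so raising groups divides the parameter count accordingly. A minimal sketch (the 6→12 channel sizes are arbitrary, chosen only so all three cases divide evenly):

```python
import torch.nn as nn

# Weight shape is (out_channels, in_channels // groups, kH, kW),
# so increasing `groups` divides the parameter count accordingly.
for groups in (1, 2, 6):  # groups=6 equals in_channels -> depthwise
    conv = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=3,
                     groups=groups, bias=False)
    print(groups, tuple(conv.weight.shape), conv.weight.numel())
# 1 (12, 6, 3, 3) 648
# 2 (12, 3, 3, 3) 324
# 6 (12, 1, 3, 3) 108
```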

Separable Convolutions

Separable convolutions come in two flavors: spatially separable and depthwise separable.

Suppose a feature map has size [channel, height, width]:

  • "spatial" refers to the [height, width] dimensions
  • "depth" refers to the channel dimension

1. Spatially Separable Convolution

A spatially separable convolution splits an n×n convolution into an n×1 step followed by a 1×n step.

  • An ordinary 3×3 convolution over a 5×5 feature map (no padding) slides over 9 output positions, each costing 9 multiplications, for 9×9 = 81 multiplications in total.
  • The spatially separable version: (1) first apply a 3×1 filter, which covers 15 output positions at 3 multiplications each (15×3 = 45); (2) then apply a 1×3 filter to the resulting 3×5 map, covering 9 positions at 3 multiplications each (9×3 = 27). That is 45 + 27 = 72 multiplications in total, fewer than the 81 of the ordinary convolution.
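The two-step decomposition is exact whenever the n×n kernel is the outer product of a column and a row vector. A small sketch with F.conv2d and a Sobel-like kernel (my own example, not from the original text), confirming that the 3×1-then-1×3 pipeline reproduces the full 3×3 convolution:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 5, 5)                    # a 5x5 single-channel feature map

col = torch.tensor([1., 2., 1.]).reshape(1, 1, 3, 1)   # 3x1 filter
row = torch.tensor([1., 0., -1.]).reshape(1, 1, 1, 3)  # 1x3 filter
k_full = (col.reshape(3, 1) @ row.reshape(1, 3)).reshape(1, 1, 3, 3)

y_full = F.conv2d(x, k_full)                   # one 3x3 pass: 9 positions x 9 mults
y_sep = F.conv2d(F.conv2d(x, col), row)        # 15x3 + 9x3 = 72 mults in total

print(torch.allclose(y_full, y_sep, atol=1e-6))  # True
```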

2. Depthwise Separable Convolution

  • Conventional Convolution

    Suppose the input is a 64×64-pixel RGB image.

    Conventional convolution: each kernel operates on all input channels simultaneously.

    To produce 4 output channels, the convolution layer has 4×3×3×3 = 108 weight parameters.
  • Depthwise Convolution

    Depthwise convolution: each kernel is responsible for exactly one channel, so the number of kernels equals the number of input channels (channels and kernels correspond one to one).

    The layer has 3×3×3 = 27 weight parameters.

    Drawbacks:

    (1) It cannot expand the number of channels of the feature map.

    (2) It does not combine feature information across different channels at the same spatial position.

  • Pointwise Convolution

    Pointwise convolution works like a conventional convolution, except that the kernel size is 1×1×M, where M is the number of channels of the previous layer.

    It combines the depthwise output maps with learned weights along the depth dimension to generate new feature maps; there are as many output feature maps as there are kernels.

    To produce 4 output channels, the layer has 1×1×3×4 = 12 weight parameters.

In summary: the conventional convolution has 108 parameters, while the depthwise separable convolution has 27 + 12 = 39, roughly 1/3 as many (bias terms ignored). Let's verify this with code!
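The saving generalizes: with C_in input channels, C_out output channels, and a k×k kernel, the depthwise separable weight count C_in·k² + C_in·C_out divided by the conventional C_in·C_out·k² equals 1/C_out + 1/k². A quick arithmetic sketch (the helper names are mine):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a conventional convolution (no bias)."""
    return c_out * c_in * k * k

def sep_conv_params(c_in, c_out, k):
    """Depthwise (c_in * k * k) plus pointwise (c_in * c_out) weights."""
    return c_in * k * k + c_in * c_out

# The article's example: 3 -> 4 channels, 3x3 kernel.
print(conv_params(3, 4, 3))       # 108
print(sep_conv_params(3, 4, 3))   # 39
print(sep_conv_params(3, 4, 3) / conv_params(3, 4, 3))  # about 0.361
print(1 / 4 + 1 / 3 ** 2)                               # same ratio
```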

Code Tests

1. Ordinary convolution vs. depthwise separable convolution:

  • Overall comparison:

    Scenario: the previous layer produces a 64×64, 3-channel feature map, and we want a convolution that outputs 4 channels without changing the spatial size. Compare the parameter counts of a conventional convolution and a depthwise separable convolution.

import torch.nn as nn
from torchsummary import summary

class normal_conv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(normal_conv, self).__init__()
        self.conv = nn.Conv2d(in_channels,
                              out_channels,
                              kernel_size=3,
                              stride=1,
                              padding=1,
                              bias=True)

    def forward(self, x):
        return self.conv(x)


class sep_conv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(sep_conv, self).__init__()
        self.depthwise_conv = nn.Conv2d(in_channels,
                                        in_channels,
                                        kernel_size=3,
                                        stride=1,
                                        padding=1,
                                        bias=True,
                                        groups=in_channels)
        self.pointwise_conv = nn.Conv2d(in_channels,
                                        out_channels,
                                        kernel_size=1,
                                        stride=1,
                                        padding=0,
                                        bias=True,
                                        groups=1)

    def forward(self, x):
        d = self.depthwise_conv(x)
        p = self.pointwise_conv(d)
        return p

input_size = (3,64,64)

conv1 = normal_conv(3,4)
conv2 = sep_conv(3,4)

print("Parameter count with a conventional convolution:")
print(summary(conv1, input_size, batch_size=1))
print("Parameter count with a depthwise separable convolution:")
print(summary(conv2, input_size, batch_size=1))

Output (note that bias=True here, so each layer adds out_channels bias parameters on top of the weight counts derived above: 108 + 4 = 112 for the conventional convolution, and 27 + 3 + 12 + 4 = 46 for the depthwise + pointwise pair):

Parameter count with a conventional convolution:
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1             [1, 4, 64, 64]             112
================================================================
Total params: 112
Trainable params: 112
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 0.12
Params size (MB): 0.00
Estimated Total Size (MB): 0.17
----------------------------------------------------------------
None
Parameter count with a depthwise separable convolution:
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1             [1, 3, 64, 64]              30
            Conv2d-2             [1, 4, 64, 64]              16
================================================================
Total params: 46
Trainable params: 46
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 0.22
Params size (MB): 0.00
Estimated Total Size (MB): 0.27
----------------------------------------------------------------
None
           
  • Per-stage comparison:
import torch.nn as nn
import torch
from torchsummary import summary 
class Conv_test(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, padding, groups):
        super(Conv_test, self).__init__()
        self.conv = nn.Conv2d(
            in_channels=in_ch,
            out_channels=out_ch,
            kernel_size=kernel_size,
            stride=1,
            padding=padding,
            groups=groups,
            bias=False
        )

    def forward(self, input):
        out = self.conv(input)
        return out
           
# Conventional convolution layer: input is 3x64x64, target output is 4 feature maps
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(3, 4, 3, 1, 1).to(device)
print(summary(conv,  input_size=(3, 64, 64)))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 4, 64, 64]             108
================================================================
Total params: 108
Trainable params: 108
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 0.12
Params size (MB): 0.00
Estimated Total Size (MB): 0.17
----------------------------------------------------------------
None
           
# Depthwise convolution layer, same input as above
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(3, 3, 3, padding=1, groups=3).to(device)
print(summary(conv,  input_size=(3, 64, 64)))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 3, 64, 64]              27
================================================================
Total params: 27
Trainable params: 27
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 0.09
Params size (MB): 0.00
Estimated Total Size (MB): 0.14
----------------------------------------------------------------
None
           
# Pointwise convolution layer; its input is the depthwise output size, and the target output is again 4 feature maps
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(3, 4, kernel_size=1, padding=0, groups=1).to(device)
print(summary(conv,  input_size=(3, 64, 64)))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 4, 64, 64]              12
================================================================
Total params: 12
Trainable params: 12
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 0.12
Params size (MB): 0.00
Estimated Total Size (MB): 0.17
----------------------------------------------------------------
None
           

2. Grouped convolution vs. depthwise separable convolution:

  • Ordinary convolution: total parameter count 4×8×3×3 = 288.
  • Grouped convolution: suppose the input is 64×64 with in_channels=4 and out_channels=8, convolved in 2 groups to produce 8 output feature maps. The parameter count is (4/2)×8×3×3 = 144.
  • Depthwise separable convolution: the depthwise stage has 4×3×3 = 36 parameters and the pointwise stage 1×1×4×8 = 32, for a total of 68.
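All three counts follow from the same rule: an nn.Conv2d weight tensor has shape (out_channels, in_channels // groups, kH, kW). A quick check of the cases above (a sketch; weight_count is my helper name):

```python
import torch.nn as nn

def weight_count(c_in, c_out, k, groups):
    """Number of weights, i.e. (c_in // groups) * c_out * k * k (no bias)."""
    return nn.Conv2d(c_in, c_out, k, groups=groups, bias=False).weight.numel()

print(weight_count(4, 8, 3, groups=1))   # ordinary: 288
print(weight_count(4, 8, 3, groups=2))   # grouped (2 groups): 144
dw = weight_count(4, 4, 3, groups=4)     # depthwise: 36
pw = weight_count(4, 8, 1, groups=1)     # pointwise: 32
print(dw + pw)                           # depthwise separable total: 68
```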
# Ordinary convolution layer
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 8, 3, padding=1, groups=1).to(device)
print(summary(conv,  input_size=(4, 64, 64)))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 8, 64, 64]             288
================================================================
Total params: 288
Trainable params: 288
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.06
Forward/backward pass size (MB): 0.25
Params size (MB): 0.00
Estimated Total Size (MB): 0.31
----------------------------------------------------------------
None
           
# Grouped convolution layer: input is 4x64x64, target output is 8 feature maps
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 8, 3, padding=1, groups=2).to(device)
print(summary(conv,  input_size=(4, 64, 64)))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 8, 64, 64]             144
================================================================
Total params: 144
Trainable params: 144
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.06
Forward/backward pass size (MB): 0.25
Params size (MB): 0.00
Estimated Total Size (MB): 0.31
----------------------------------------------------------------
None
           
# Depthwise convolution layer, same input as above
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 4, 3, padding=1, groups=4).to(device)
print(summary(conv,  input_size=(4, 64, 64)))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 4, 64, 64]              36
================================================================
Total params: 36
Trainable params: 36
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.06
Forward/backward pass size (MB): 0.12
Params size (MB): 0.00
Estimated Total Size (MB): 0.19
----------------------------------------------------------------
None
           
# Pointwise convolution layer; its input is the depthwise output size, and the target output is again 8 feature maps
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 8, kernel_size=1, padding=0, groups=1).to(device)
print(summary(conv,  input_size=(4, 64, 64)))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 8, 64, 64]              32
================================================================
Total params: 32
Trainable params: 32
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.06
Forward/backward pass size (MB): 0.25
Params size (MB): 0.00
Estimated Total Size (MB): 0.31
----------------------------------------------------------------
None
           
