
Fusing Convolution and BatchNorm in PyTorch

Update 2020-05-27:

1. PyTorch now has official support for this fusion; see Captain Jack's post "MergeBN && Quantization PyTorch 官方解決方案" (official solution) on zhuanlan.zhihu.com.


2. Some users ran into various errors when running this code. If you still intend to use it, I have synced the latest version to GitHub and will keep updating it there from time to time: https://github.com/qinjian623/pytorch_toys/blob/master/post_quant/fusion.py

Original post: 2018-11-11 (the date this article was last updated; of all days...)

Fusing Conv and BatchNorm is a very basic speed optimization, and most frameworks should already provide it. For a weekend project of mine, I needed to do it directly in Python on top of PyTorch.

This fusion is essentially a free win: accuracy is theoretically unchanged (in practice there is a loss, but it is tiny, and if you care about speed you will probably quantize afterwards anyway, which costs far more accuracy), while speed improves substantially, especially for networks with many BN layers.

How the fusion works

What the convolution computes:

$y_{conv} = W \ast x + b$

What BN computes at inference time, with running mean $\mu$, running variance $\sigma^2$, scale $\gamma$, and shift $\beta$:

$y_{bn} = \gamma \cdot \dfrac{y_{conv} - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta$

Substituting the convolution output into the BN expression gives the fused convolution:

$W_{fused} = \dfrac{\gamma}{\sqrt{\sigma^2 + \epsilon}} \cdot W$
$b_{fused} = \dfrac{\gamma}{\sqrt{\sigma^2 + \epsilon}} \cdot (b - \mu) + \beta$

The new convolution then performs BN's work as part of its own computation.
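
To make the algebra concrete, here is a minimal standalone sketch (not part of the original script; layer sizes and names are arbitrary) that fuses one Conv2d/BatchNorm2d pair by hand using the formulas above and checks that the fused layer matches the original pair in eval mode:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy Conv + BN pair; sizes are arbitrary and only for illustration.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
bn = nn.BatchNorm2d(8)

# A few forward passes in training mode so the running statistics become non-trivial.
for _ in range(10):
    bn(conv(torch.randn(4, 3, 16, 16)))
conv.eval()
bn.eval()

# W_fused = gamma / sqrt(var + eps) * W
# b_fused = gamma / sqrt(var + eps) * (b - mean) + beta   (here b = 0, since bias=False)
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
fused = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=True)
fused.weight = nn.Parameter(conv.weight * scale.reshape(-1, 1, 1, 1))
fused.bias = nn.Parameter(bn.bias - bn.running_mean * scale)

x = torch.randn(1, 3, 16, 16)
print((bn(conv(x)) - fused(x)).abs().max())  # should be on the order of 1e-6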

Test results:

Timings from my laptop. The CPU build should be synchronous (otherwise these numbers would not be reliable). They are not rigorous measurements either: no averaging, no warm-up. But they are enough to show the trend qualitatively. Units are seconds.

[Timing table: original vs. fused forward time, in seconds]
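
As noted, the numbers above were taken without averaging or warm-up. For anyone who wants slightly more trustworthy timings, here is a minimal sketch of a fairer measurement (warm-up plus averaging under torch.no_grad(); the run counts are arbitrary), independent of the fusion script below:

import time
import torch

def benchmark(model, inp, warmup=5, runs=20):
    # Average forward time in seconds, after a few warm-up iterations.
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(inp)
        start = time.time()
        for _ in range(runs):
            model(inp)
    return (time.time() - start) / runs

Calling it on the same model before and after fuse_module(m) gives directly comparable numbers.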

import time

import torch
import torch.nn as nn
import torchvision as tv


class DummyModule(nn.Module):
    """Identity module that takes the place of a fused-away BatchNorm layer."""

    def __init__(self):
        super(DummyModule, self).__init__()

    def forward(self, x):
        return x


def fuse(conv, bn):
    """Fold the BatchNorm parameters into the convolution and return the fused Conv2d."""
    w = conv.weight
    mean = bn.running_mean
    var_sqrt = torch.sqrt(bn.running_var + bn.eps)
    gamma = bn.weight  # BN scale
    beta = bn.bias     # BN shift
    if conv.bias is not None:
        b = conv.bias
    else:
        b = mean.new_zeros(mean.shape)
    # W_fused = gamma / sqrt(var + eps) * W
    w = w * (gamma / var_sqrt).reshape([conv.out_channels, 1, 1, 1])
    # b_fused = gamma / sqrt(var + eps) * (b - mean) + beta
    b = (b - mean) / var_sqrt * gamma + beta
    fused_conv = nn.Conv2d(conv.in_channels,
                           conv.out_channels,
                           conv.kernel_size,
                           conv.stride,
                           conv.padding,
                           bias=True)
    fused_conv.weight = nn.Parameter(w)
    fused_conv.bias = nn.Parameter(b)
    return fused_conv


def fuse_module(m):
    """Recursively replace each Conv2d followed by a BatchNorm2d with a single fused Conv2d."""
    children = list(m.named_children())
    c = None   # last Conv2d seen
    cn = None  # its name
    for name, child in children:
        # Only fuse a BN that directly follows a Conv2d at the same level.
        if isinstance(child, nn.BatchNorm2d) and c is not None:
            bc = fuse(c, child)
            m._modules[cn] = bc
            m._modules[name] = DummyModule()
            c = None
        elif isinstance(child, nn.Conv2d):
            c = child
            cn = name
        else:
            fuse_module(child)


def test_net(m):
    p = torch.randn([1, 3, 224, 224])

    s = time.time()
    o_output = m(p)
    print("Original time: ", time.time() - s)

    fuse_module(m)

    s = time.time()
    f_output = m(p)
    print("Fused time: ", time.time() - s)

    print("Max abs diff: ", (o_output - f_output).abs().max().item())
    assert o_output.argmax() == f_output.argmax()
    print("MSE diff: ", nn.MSELoss()(o_output, f_output).item())


def test_layer():
    p = torch.randn([1, 3, 112, 112])
    conv1 = m.conv1
    bn1 = m.bn1
    o_output = bn1(conv1(p))
    fusion = fuse(conv1, bn1)
    f_output = fusion(p)
    print(o_output[0][0][0][0].item())
    print(f_output[0][0][0][0].item())
    print("Max abs diff: ", (o_output - f_output).abs().max().item())
    print("MSE diff: ", nn.MSELoss()(o_output, f_output).item())


m = tv.models.resnet152(True)
m.eval()
print("Layer level test: ")
test_layer()

print("============================")

print("Module level test: ")
m = tv.models.resnet18(True)
m.eval()
test_net(m)
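
As mentioned in the 2020-05-27 update at the top, recent PyTorch releases ship this fusion as part of the eager-mode quantization utilities, so the hand-rolled version above is now mostly of historical interest. A minimal sketch using torch.quantization.fuse_modules (the module names follow torchvision's ResNet; newer releases also expose the same function under torch.ao.quantization):

import torch
import torchvision as tv

m = tv.models.resnet18(pretrained=True).eval()

# Fuse the stem conv/bn pair; pairs inside the residual blocks can be listed
# the same way, e.g. ['layer1.0.conv1', 'layer1.0.bn1'].
fused = torch.quantization.fuse_modules(m, [['conv1', 'bn1']])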