Related articles:
[Deep Learning Project 1] MNIST digit recognition with a fully connected neural network
[Deep Learning Project 2] MNIST digit recognition with the LeNet convolutional network
[Deep Learning Project 3] Multi-class classification with ResNet50 (the 12 Chinese zodiac animals)
[Deep Learning Project 4] Facial keypoint detection based on ResNet101
Project link: https://aistudio.baidu.com/aistudio/projectdetail/1930877
1. A brief introduction to convolutional neural networks
1.1 AlexNet
Key contributions:
- Introduced ReLU as the activation function
- Dropout layers
- Max pooling
- GPU acceleration
- Data augmentation (random crops, horizontal flips)
1.2 VGG
1.3 GoogLeNet
A fully connected layer fixes the input and output sizes; replacing it with a (global) pooling layer removes that restriction.
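The point above is easy to see in a quick sketch: global average pooling collapses any H×W feature map to one value per channel, so the classification head no longer depends on the input resolution (a minimal NumPy illustration, not tied to any particular framework):

```python
import numpy as np

def global_avg_pool(feature_map):
    """Average over the spatial dimensions of a (C, H, W) feature map -> shape (C,)."""
    return feature_map.mean(axis=(1, 2))

# The same 64-channel head works for two different input resolutions:
print(global_avg_pool(np.zeros((64, 56, 56))).shape)  # (64,)
print(global_avg_pool(np.zeros((64, 28, 28))).shape)  # (64,)
```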
1.4 ResNet
- The residual structure mitigates the vanishing-gradient problem: the signal propagates forward along multiple paths.
- The change in layer widths (bottom-left of the figure) is mainly meant to cut computational cost, i.e. to reduce the number of parameters.
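The residual idea itself is one line; a minimal NumPy sketch (the real ResNet bottleneck block implements f with 1×1/3×3/1×1 convolutions and batch norm, omitted here):

```python
import numpy as np

def residual_block(x, f):
    """Output = f(x) + x: the identity shortcut lets the signal (and gradients)
    pass through even when the residual branch f contributes almost nothing."""
    return f(x) + x

x = np.ones(4)
# Even with a "do-nothing" residual branch, the identity path preserves the input:
print(residual_block(x, lambda v: np.zeros_like(v)))  # [1. 1. 1. 1.]
```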
2. The dataset
The images are 12 kinds of animal photos downloaded from the web, one kind for each of the 12 Chinese zodiac signs.

Training samples | 7,096
Validation samples | 639
Test samples | 656
Loading method | custom dataset class
2.1 Data annotation
The dataset is split into train, valid and test folders; each contains 12 class folders, and each class folder holds the sample images.
.
├── test|train|valid
│ ├── dog
│ ├── dragon
│ ├── goat
│ ├── horse
│ ├── monkey
│ ├── ox
│ ├── pig
│ ├── rabbit
│ ├── ratt
│ ├── rooster
│ ├── snake
│ └── tiger
We annotate these samples and generate three annotation files: train.txt, valid.txt and test.txt.
```python
# config.py
__all__ = ['CONFIG', 'get']

CONFIG = {
    'model_save_dir': "./output/zodiac",
    'num_classes': 12,
    'total_images': 7096,
    'epochs': 20,
    'batch_size': 32,
    'image_shape': [3, 224, 224],
    'LEARNING_RATE': {
        'params': {
            'lr': 0.00375
        }
    },
    'OPTIMIZER': {
        'params': {
            'momentum': 0.9
        },
        'regularizer': {
            'function': 'L2',
            'factor': 0.000001
        }
    },
    'LABEL_MAP': [
        "ratt", "ox", "tiger", "rabbit", "dragon", "snake",
        "horse", "goat", "monkey", "rooster", "dog", "pig",
    ]
}


def get(full_path):
    """Look up a config entry by a dot-separated path, e.g. get('LEARNING_RATE.params.lr')."""
    for id, name in enumerate(full_path.split('.')):
        if id == 0:
            config = CONFIG
        config = config[name]
    return config
```

```python
import io
import os

from PIL import Image

from config import get

# Dataset root directory
DATA_ROOT = 'signs'

# Label list
LABEL_MAP = get('LABEL_MAP')


# Annotation-generating function
def generate_annotation(mode):
    # Create the annotation file
    with open('{}/{}.txt'.format(DATA_ROOT, mode), 'w') as f:
        # Data folder for this split: train/valid/test
        train_dir = '{}/{}'.format(DATA_ROOT, mode)

        # Walk the split folder to find the class folders inside it
        for path in os.listdir(train_dir):
            # Numeric index of the label; the annotation stores the index directly
            label_index = LABEL_MAP.index(path)

            # Folder holding this class's images
            image_path = '{}/{}'.format(train_dir, path)

            # Walk all images
            for image in os.listdir(image_path):
                # Full path of the image file
                image_file = '{}/{}'.format(image_path, image)

                try:
                    # Verify the image can actually be decoded
                    with open(image_file, 'rb') as f_img:
                        img = Image.open(io.BytesIO(f_img.read()))
                        img.load()

                        if img.mode == 'RGB':
                            f.write('{}\t{}\n'.format(image_file, label_index))
                except Exception:
                    continue


generate_annotation('train')  # annotation file for the training set
generate_annotation('valid')  # annotation file for the validation set
generate_annotation('test')   # annotation file for the test set
```
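The dot-separated lookup in config.py can be exercised on its own; a minimal self-contained version with a trimmed-down CONFIG behaves like this:

```python
# A small subset of the project's CONFIG, just to demonstrate the lookup
CONFIG = {
    'LEARNING_RATE': {'params': {'lr': 0.00375}},
    'num_classes': 12,
}

def get(full_path):
    # Walk the dict one key at a time along the dot-separated path.
    for i, name in enumerate(full_path.split('.')):
        if i == 0:
            config = CONFIG
        config = config[name]
    return config

print(get('LEARNING_RATE.params.lr'))  # 0.00375
print(get('num_classes'))              # 12
```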
2.2 Dataset definition
Next we use the annotation files to define a dataset class for use during model training.
2.2.1 Import the required libraries
```python
import paddle
import numpy as np
from config import get
```
The difference between HWC and CHW:
- C is the number of input channels
- H/W are the image height and width
NCHW:
- N is the number of samples

to_tensor
paddle.vision.transforms.to_tensor(pic, data_format='CHW') [source]
Converts a PIL.Image or numpy.ndarray to a paddle.Tensor.
- Input data of shape (H x W x C), whether PIL.Image or numpy.ndarray, is converted to (C x H x W). To keep the shape unchanged, set the data_format parameter to 'HWC'.
- In addition, if the input PIL.Image has one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1), or the input numpy.ndarray has dtype 'uint8', the values are rescaled from the range (0-255) to (0-1). In all other cases the input is left unchanged.
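What to_tensor does to the layout and value range can be mimicked in plain NumPy; this is a sketch of the behaviour just described, not the Paddle implementation itself:

```python
import numpy as np

def hwc_uint8_to_chw_float(img):
    """(H, W, C) uint8 in [0, 255] -> (C, H, W) float32 in [0, 1]."""
    chw = np.transpose(img, (2, 0, 1))    # HWC -> CHW
    return chw.astype('float32') / 255.0  # rescale 0-255 -> 0-1

img = np.full((224, 224, 3), 255, dtype='uint8')
out = hwc_uint8_to_chw_float(img)
print(out.shape)  # (3, 224, 224)
print(out.max())  # 1.0
```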
2.2.2 Import the dataset definition
The dataset class is implemented in dataset.py.
```python
import paddle
import paddle.vision.transforms as T
import numpy as np
from config import get
from PIL import Image

__all__ = ['ZodiacDataset']

# Target image size: 'image_shape' is [3, 224, 224]
image_shape = get('image_shape')
IMAGE_SIZE = (image_shape[1], image_shape[2])


class ZodiacDataset(paddle.io.Dataset):
    """
    Dataset class for the 12 zodiac animals.
    """

    def __init__(self, mode='train'):
        """
        Initialisation: pick the annotation file that matches the mode.
        """
        assert mode in ['train', 'test', 'valid'], 'mode is one of train, test, valid.'  # validate the argument

        self.data = []
        with open('signs/{}.txt'.format(mode)) as f:
            for line in f.readlines():
                info = line.strip().split('\t')
                if len(info) > 0:
                    # Each entry holds the image path and its label
                    self.data.append([info[0].strip(), info[1].strip()])

        if mode == 'train':
            self.transforms = T.Compose([
                T.RandomResizedCrop(IMAGE_SIZE),  # random crop: cropping different regions indirectly enlarges the sample set (e.g. 300x300 -> 224x224)
                T.RandomHorizontalFlip(0.5),      # horizontal flip with probability 0.5, again effectively a "new" image
                T.ToTensor(),                     # format conversion HWC => CHW and scaling to [0, 1]
                T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # normalisation
            ])
        else:
            # Evaluation mode: no need to augment the sample set; we only want to measure performance
            self.transforms = T.Compose([
                T.Resize(256),             # resize
                T.RandomCrop(IMAGE_SIZE),  # crop to the network input size
                T.ToTensor(),              # format conversion HWC => CHW and scaling to [0, 1]
                T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # normalisation
            ])

    def __getitem__(self, index):
        """
        Fetch a single sample by index.
        """
        image_file, label = self.data[index]
        image = Image.open(image_file)

        # Convert to 3-channel RGB if necessary
        if image.mode != 'RGB':
            image = image.convert('RGB')

        image = self.transforms(image)  # apply the preprocessing pipeline

        return image, np.array(label, dtype='int64')  # convert the label to an int64 numpy array

    def __len__(self):
        """
        Total number of samples.
        """
        return len(self.data)
```
2.2.3 Instantiate the dataset classes
Instantiate the dataset classes we need and check the total sample counts.

```python
from dataset import ZodiacDataset

train_dataset = ZodiacDataset(mode='train')
valid_dataset = ZodiacDataset(mode='valid')

print('Training set: {} images; validation set: {} images'.format(len(train_dataset), len(valid_dataset)))
```
3. Model selection and development
3.1 Building the network
We use the ResNet50 network for this case study.
1) The ResNet family
2) ResNet50 structure
3) Residual blocks
4) Other ResNet variants

```python
# pretrained=True loads publicly released pretrained weights as the starting point
network = paddle.vision.models.resnet50(num_classes=get('num_classes'), pretrained=True)

model = paddle.Model(network)
model.summary((-1, ) + tuple(get('image_shape')))
```
-------------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===============================================================================
Conv2D-1 [[1, 3, 224, 224]] [1, 64, 112, 112] 9,408
BatchNorm2D-1 [[1, 64, 112, 112]] [1, 64, 112, 112] 256
ReLU-1 [[1, 64, 112, 112]] [1, 64, 112, 112] 0
MaxPool2D-1 [[1, 64, 112, 112]] [1, 64, 56, 56] 0
Conv2D-3 [[1, 64, 56, 56]] [1, 64, 56, 56] 4,096
BatchNorm2D-3 [[1, 64, 56, 56]] [1, 64, 56, 56] 256
ReLU-2 [[1, 256, 56, 56]] [1, 256, 56, 56] 0
Conv2D-4 [[1, 64, 56, 56]] [1, 64, 56, 56] 36,864
BatchNorm2D-4 [[1, 64, 56, 56]] [1, 64, 56, 56] 256
Conv2D-5 [[1, 64, 56, 56]] [1, 256, 56, 56] 16,384
BatchNorm2D-5 [[1, 256, 56, 56]] [1, 256, 56, 56] 1,024
Conv2D-2 [[1, 64, 56, 56]] [1, 256, 56, 56] 16,384
BatchNorm2D-2 [[1, 256, 56, 56]] [1, 256, 56, 56] 1,024
BottleneckBlock-1 [[1, 64, 56, 56]] [1, 256, 56, 56] 0
Conv2D-6 [[1, 256, 56, 56]] [1, 64, 56, 56] 16,384
BatchNorm2D-6 [[1, 64, 56, 56]] [1, 64, 56, 56] 256
ReLU-3 [[1, 256, 56, 56]] [1, 256, 56, 56] 0
Conv2D-7 [[1, 64, 56, 56]] [1, 64, 56, 56] 36,864
BatchNorm2D-7 [[1, 64, 56, 56]] [1, 64, 56, 56] 256
Conv2D-8 [[1, 64, 56, 56]] [1, 256, 56, 56] 16,384
BatchNorm2D-8 [[1, 256, 56, 56]] [1, 256, 56, 56] 1,024
BottleneckBlock-2 [[1, 256, 56, 56]] [1, 256, 56, 56] 0
Conv2D-9 [[1, 256, 56, 56]] [1, 64, 56, 56] 16,384
BatchNorm2D-9 [[1, 64, 56, 56]] [1, 64, 56, 56] 256
ReLU-4 [[1, 256, 56, 56]] [1, 256, 56, 56] 0
Conv2D-10 [[1, 64, 56, 56]] [1, 64, 56, 56] 36,864
BatchNorm2D-10 [[1, 64, 56, 56]] [1, 64, 56, 56] 256
Conv2D-11 [[1, 64, 56, 56]] [1, 256, 56, 56] 16,384
BatchNorm2D-11 [[1, 256, 56, 56]] [1, 256, 56, 56] 1,024
BottleneckBlock-3 [[1, 256, 56, 56]] [1, 256, 56, 56] 0
Conv2D-13 [[1, 256, 56, 56]] [1, 128, 56, 56] 32,768
BatchNorm2D-13 [[1, 128, 56, 56]] [1, 128, 56, 56] 512
ReLU-5 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-14 [[1, 128, 56, 56]] [1, 128, 28, 28] 147,456
BatchNorm2D-14 [[1, 128, 28, 28]] [1, 128, 28, 28] 512
Conv2D-15 [[1, 128, 28, 28]] [1, 512, 28, 28] 65,536
BatchNorm2D-15 [[1, 512, 28, 28]] [1, 512, 28, 28] 2,048
Conv2D-12 [[1, 256, 56, 56]] [1, 512, 28, 28] 131,072
BatchNorm2D-12 [[1, 512, 28, 28]] [1, 512, 28, 28] 2,048
BottleneckBlock-4 [[1, 256, 56, 56]] [1, 512, 28, 28] 0
Conv2D-16 [[1, 512, 28, 28]] [1, 128, 28, 28] 65,536
BatchNorm2D-16 [[1, 128, 28, 28]] [1, 128, 28, 28] 512
ReLU-6 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-17 [[1, 128, 28, 28]] [1, 128, 28, 28] 147,456
BatchNorm2D-17 [[1, 128, 28, 28]] [1, 128, 28, 28] 512
Conv2D-18 [[1, 128, 28, 28]] [1, 512, 28, 28] 65,536
BatchNorm2D-18 [[1, 512, 28, 28]] [1, 512, 28, 28] 2,048
BottleneckBlock-5 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-19 [[1, 512, 28, 28]] [1, 128, 28, 28] 65,536
BatchNorm2D-19 [[1, 128, 28, 28]] [1, 128, 28, 28] 512
ReLU-7 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-20 [[1, 128, 28, 28]] [1, 128, 28, 28] 147,456
BatchNorm2D-20 [[1, 128, 28, 28]] [1, 128, 28, 28] 512
Conv2D-21 [[1, 128, 28, 28]] [1, 512, 28, 28] 65,536
BatchNorm2D-21 [[1, 512, 28, 28]] [1, 512, 28, 28] 2,048
BottleneckBlock-6 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-22 [[1, 512, 28, 28]] [1, 128, 28, 28] 65,536
BatchNorm2D-22 [[1, 128, 28, 28]] [1, 128, 28, 28] 512
ReLU-8 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-23 [[1, 128, 28, 28]] [1, 128, 28, 28] 147,456
BatchNorm2D-23 [[1, 128, 28, 28]] [1, 128, 28, 28] 512
Conv2D-24 [[1, 128, 28, 28]] [1, 512, 28, 28] 65,536
BatchNorm2D-24 [[1, 512, 28, 28]] [1, 512, 28, 28] 2,048
BottleneckBlock-7 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-26 [[1, 512, 28, 28]] [1, 256, 28, 28] 131,072
BatchNorm2D-26 [[1, 256, 28, 28]] [1, 256, 28, 28] 1,024
ReLU-9 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-27 [[1, 256, 28, 28]] [1, 256, 14, 14] 589,824
BatchNorm2D-27 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
Conv2D-28 [[1, 256, 14, 14]] [1, 1024, 14, 14] 262,144
BatchNorm2D-28 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 4,096
Conv2D-25 [[1, 512, 28, 28]] [1, 1024, 14, 14] 524,288
BatchNorm2D-25 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 4,096
BottleneckBlock-8 [[1, 512, 28, 28]] [1, 1024, 14, 14] 0
Conv2D-29 [[1, 1024, 14, 14]] [1, 256, 14, 14] 262,144
BatchNorm2D-29 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
ReLU-10 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-30 [[1, 256, 14, 14]] [1, 256, 14, 14] 589,824
BatchNorm2D-30 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
Conv2D-31 [[1, 256, 14, 14]] [1, 1024, 14, 14] 262,144
BatchNorm2D-31 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 4,096
BottleneckBlock-9 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-32 [[1, 1024, 14, 14]] [1, 256, 14, 14] 262,144
BatchNorm2D-32 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
ReLU-11 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-33 [[1, 256, 14, 14]] [1, 256, 14, 14] 589,824
BatchNorm2D-33 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
Conv2D-34 [[1, 256, 14, 14]] [1, 1024, 14, 14] 262,144
BatchNorm2D-34 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 4,096
BottleneckBlock-10 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-35 [[1, 1024, 14, 14]] [1, 256, 14, 14] 262,144
BatchNorm2D-35 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
ReLU-12 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-36 [[1, 256, 14, 14]] [1, 256, 14, 14] 589,824
BatchNorm2D-36 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
Conv2D-37 [[1, 256, 14, 14]] [1, 1024, 14, 14] 262,144
BatchNorm2D-37 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 4,096
BottleneckBlock-11 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-38 [[1, 1024, 14, 14]] [1, 256, 14, 14] 262,144
BatchNorm2D-38 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
ReLU-13 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-39 [[1, 256, 14, 14]] [1, 256, 14, 14] 589,824
BatchNorm2D-39 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
Conv2D-40 [[1, 256, 14, 14]] [1, 1024, 14, 14] 262,144
BatchNorm2D-40 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 4,096
BottleneckBlock-12 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-41 [[1, 1024, 14, 14]] [1, 256, 14, 14] 262,144
BatchNorm2D-41 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
ReLU-14 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-42 [[1, 256, 14, 14]] [1, 256, 14, 14] 589,824
BatchNorm2D-42 [[1, 256, 14, 14]] [1, 256, 14, 14] 1,024
Conv2D-43 [[1, 256, 14, 14]] [1, 1024, 14, 14] 262,144
BatchNorm2D-43 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 4,096
BottleneckBlock-13 [[1, 1024, 14, 14]] [1, 1024, 14, 14] 0
Conv2D-45 [[1, 1024, 14, 14]] [1, 512, 14, 14] 524,288
BatchNorm2D-45 [[1, 512, 14, 14]] [1, 512, 14, 14] 2,048
ReLU-15 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 0
Conv2D-46 [[1, 512, 14, 14]] [1, 512, 7, 7] 2,359,296
BatchNorm2D-46 [[1, 512, 7, 7]] [1, 512, 7, 7] 2,048
Conv2D-47 [[1, 512, 7, 7]] [1, 2048, 7, 7] 1,048,576
BatchNorm2D-47 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 8,192
Conv2D-44 [[1, 1024, 14, 14]] [1, 2048, 7, 7] 2,097,152
BatchNorm2D-44 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 8,192
BottleneckBlock-14 [[1, 1024, 14, 14]] [1, 2048, 7, 7] 0
Conv2D-48 [[1, 2048, 7, 7]] [1, 512, 7, 7] 1,048,576
BatchNorm2D-48 [[1, 512, 7, 7]] [1, 512, 7, 7] 2,048
ReLU-16 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 0
Conv2D-49 [[1, 512, 7, 7]] [1, 512, 7, 7] 2,359,296
BatchNorm2D-49 [[1, 512, 7, 7]] [1, 512, 7, 7] 2,048
Conv2D-50 [[1, 512, 7, 7]] [1, 2048, 7, 7] 1,048,576
BatchNorm2D-50 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 8,192
BottleneckBlock-15 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 0
Conv2D-51 [[1, 2048, 7, 7]] [1, 512, 7, 7] 1,048,576
BatchNorm2D-51 [[1, 512, 7, 7]] [1, 512, 7, 7] 2,048
ReLU-17 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 0
Conv2D-52 [[1, 512, 7, 7]] [1, 512, 7, 7] 2,359,296
BatchNorm2D-52 [[1, 512, 7, 7]] [1, 512, 7, 7] 2,048
Conv2D-53 [[1, 512, 7, 7]] [1, 2048, 7, 7] 1,048,576
BatchNorm2D-53 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 8,192
BottleneckBlock-16 [[1, 2048, 7, 7]] [1, 2048, 7, 7] 0
AdaptiveAvgPool2D-1 [[1, 2048, 7, 7]] [1, 2048, 1, 1] 0
Linear-1 [[1, 2048]] [1, 12] 24,588
===============================================================================
Total params: 23,585,740
Trainable params: 23,479,500
Non-trainable params: 106,240
-------------------------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 261.48
Params size (MB): 89.97
Estimated Total Size (MB): 352.02
-------------------------------------------------------------------------------
{'total_params': 23585740, 'trainable_params': 23479500}
4. Model training and optimization
CosineAnnealingDecay
class paddle.optimizer.lr.CosineAnnealingDecay(learning_rate, T_max, eta_min=0, last_epoch=-1, verbose=False) [source]
This API adjusts the learning rate dynamically using a cosine annealing schedule:

$$
\begin{aligned}
\eta_{t} &= \eta_{\min} + \frac{1}{2}\left(\eta_{\max}-\eta_{\min}\right)\left(1+\cos\left(\frac{T_{cur}}{T_{\max}}\pi\right)\right), & T_{cur} \neq (2k+1) T_{\max} \\
\eta_{t+1} &= \eta_{t} + \frac{1}{2}\left(\eta_{\max}-\eta_{\min}\right)\left(1-\cos\left(\frac{1}{T_{\max}}\pi\right)\right), & T_{cur} = (2k+1) T_{\max}
\end{aligned}
$$

The initial value of η_max is learning_rate, and T_cur is the current epoch count in SGDR (SGD with warm restarts); see the paper "SGDR: Stochastic Gradient Descent with Warm Restarts" for the training scheme. Only the cosine annealing schedule is implemented here; the warm-restart part is not.
Parameters:
- learning_rate (float) - the initial learning rate, i.e. η_max in the formula; a Python float.
- T_max (float|int) - the upper bound on training epochs; half the cosine decay period.
- eta_min (float|int, optional) - the minimum learning rate, i.e. η_min in the formula. Defaults to 0.
- last_epoch (int, optional) - the epoch index of the previous round; set it when resuming training. Defaults to -1, meaning training starts from the initial learning rate.
- verbose (bool, optional) - if True, prints a message to stdout on every update. Defaults to False.
Returns: a CosineAnnealingDecay instance used to adjust the learning rate.
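The schedule's closed form (the first equation above) is easy to check in plain Python. A sketch using this project's learning rate of 0.00375 and an assumed T_max of 10 steps:

```python
import math

def cosine_annealing(lr_max, T_max, t, eta_min=0.0):
    """Closed-form cosine annealing: learning rate at step t, for 0 <= t <= T_max."""
    return eta_min + 0.5 * (lr_max - eta_min) * (1 + math.cos(t / T_max * math.pi))

lr0 = 0.00375
print(cosine_annealing(lr0, 10, 0))   # starts at lr_max: 0.00375
print(cosine_annealing(lr0, 10, 10))  # decays to eta_min: ~0.0
```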
Momentum
class paddle.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameters=None, use_nesterov=False, weight_decay=None, grad_clip=None, name=None) [source]
This API implements the simple Momentum optimizer with velocity state. It carries a Nesterov momentum flag; the exact update formulas are given in the API reference linked below.
Parameters:
- learning_rate (float|_LRScheduler, optional) - the learning rate used in parameter updates; a float or an _LRScheduler instance. Defaults to 0.001.
- momentum (float, optional) - the momentum factor.
- parameters (list, optional) - the parameters the optimizer should update. Required in dynamic-graph mode; in static-graph mode it defaults to None, in which case all parameters are optimized.
- use_nesterov (bool, optional) - enables Nesterov momentum. Defaults to False.
- weight_decay (float|Tensor, optional) - the weight decay coefficient; a float, or a float32 Tensor of shape [1]. Defaults to 0.01.
- grad_clip (GradientClipBase, optional) - the gradient clipping strategy; three strategies are supported: cn_api_fluid_clip_GradientClipByGlobalNorm, cn_api_fluid_clip_GradientClipByNorm, cn_api_fluid_clip_GradientClipByValue. Defaults to None, meaning no clipping is performed.
- name (str, optional) - used by developers when printing debug information; see Name for usage. Defaults to None.
API reference:
https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/momentum/Momentum_cn.html
```python
EPOCHS = get('epochs')
BATCH_SIZE = get('batch_size')


def create_optim(parameters):
    step_each_epoch = get('total_images') // get('batch_size')
    lr = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=get('LEARNING_RATE.params.lr'),
                                                  T_max=step_each_epoch * EPOCHS)

    # L2 regularisation helps accuracy
    return paddle.optimizer.Momentum(learning_rate=lr,
                                     parameters=parameters,
                                     weight_decay=paddle.regularizer.L2Decay(get('OPTIMIZER.regularizer.factor')))


# Training configuration
model.prepare(create_optim(network.parameters()),   # optimizer
              paddle.nn.CrossEntropyLoss(),         # loss function
              paddle.metric.Accuracy(topk=(1, 5)))  # evaluation metric

# Callback for the VisualDL training-visualisation tool
visualdl = paddle.callbacks.VisualDL(log_dir='visualdl_log')

# Launch the full training loop
model.fit(train_dataset,             # training dataset
          valid_dataset,             # evaluation dataset
          epochs=EPOCHS,             # total number of epochs
          batch_size=BATCH_SIZE,     # batch size
          shuffle=True,              # shuffle the samples
          verbose=1,                 # logging verbosity
          save_dir='./chk_points/',  # checkpoint directory
          callbacks=[visualdl])      # callbacks
```
top1 is the accuracy where the first prediction is the correct answer;
top5 is the accuracy where the correct answer appears among the first five predictions.
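These two metrics can be computed by hand; a small NumPy sketch with made-up scores for 3 classes shows the idea:

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k largest scores per row
    hits = [label in row for row, label in zip(topk, labels)]
    return np.mean(hits)

scores = np.array([[0.1, 0.7, 0.2],   # predicted class: 1
                   [0.5, 0.3, 0.2],   # predicted class: 0
                   [0.2, 0.3, 0.5]])  # predicted class: 2
labels = [1, 1, 2]  # the second sample's true class is 1, but class 0 scores highest

print(topk_accuracy(scores, labels, 1))  # 2 of 3 correct at top-1 (~0.667)
print(topk_accuracy(scores, labels, 2))  # all 3 correct at top-2 (1.0)
```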
Prediction visualisation:
4.1 Saving the model
Save the trained model for later evaluation and testing.
5. Model evaluation and testing
5.1 Batch prediction
5.1.1 The test dataset

```python
predict_dataset = ZodiacDataset(mode='test')
print('Test set size: {}'.format(len(predict_dataset)))
```
```python
from paddle.static import InputSpec

# Instantiate the network structure
network = paddle.vision.models.resnet50(num_classes=get('num_classes'))

# Wrap the model
model_2 = paddle.Model(network, inputs=[InputSpec(shape=[-1] + get('image_shape'), dtype='float32', name='image')])

# Load the trained weights
model_2.load(get('model_save_dir'))

# Configure the model
model_2.prepare()

# Run prediction
result = model_2.predict(predict_dataset)
```
```python
import matplotlib.pyplot as plt

# Label map
LABEL_MAP = get('LABEL_MAP')


def show_img(idx, predict_label):
    plt.figure()
    plt.title('predict: {}'.format(LABEL_MAP[predict_label]))

    image_file, label = predict_dataset.data[idx]
    image = Image.open(image_file)
    plt.imshow(image)
    plt.show()


# Show a handful of samples
indexs = [50, 150, 250, 350, 450, 00]

for idx in indexs:
    predict_label = np.argmax(result[0][idx])
    real_label = predict_dataset[idx][1]

    show_img(idx, predict_label)
    print('Sample ID: {}, true label: {}, prediction: {}'.format(idx, LABEL_MAP[real_label], LABEL_MAP[predict_label]))
```
```python
# Alternatively, without defining a function:
"""
import matplotlib.pyplot as plt

# Label map
LABEL_MAP = get('LABEL_MAP')

# Show a handful of samples
indexs = [50, 150, 250, 350, 450, 00]

for idx in indexs:
    predict_label = np.argmax(result[0][idx])
    real_label = predict_dataset[idx][1]

    print('Sample ID: {}, true label: {}, prediction: {}'.format(idx, LABEL_MAP[real_label], LABEL_MAP[predict_label]))

    image_file, label = predict_dataset.data[idx]
    image = Image.open(image_file)

    plt.figure()
    plt.title('predict: {}'.format(LABEL_MAP[predict_label]))
    plt.imshow(image)
    plt.show()
"""
```
Sample ID: 50, true label: monkey, prediction: monkey
Sample ID: 150, true label: ratt, prediction: ratt
Sample ID: 450, true label: tiger, prediction: tiger
6. Model deployment
Summary
- We covered the origins of four convolutional network families and implemented the 12-zodiac classification project with ResNet50.
- The key points of this project are the custom dataset definition and the construction of the optimizer, which make the model more flexible and adaptable, improve accuracy, and help it converge faster.
- It is still worth building the model without the high-level API, using the subclass (Sub Class) style or the Sequential style yourself. Try writing it out, even though the network has quite a few layers!