PyTorch Deep Learning Practice Part 10: Convolutional Neural Networks (Basics)

Basic CNN

A CNN (Convolutional Neural Network) has two stages: feature extraction followed by classification.


An image is indexed as Channel × Width × Height, with the origin at the top-left corner.


The kernel slides over the input one patch at a time; at each position the kernel and the patch are multiplied element-wise (a Hadamard product) and the products are summed to produce one output value.
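A minimal sketch of one such step, using the top-left 3×3 patch of the 10×10 input from the code below and a hypothetical kernel chosen just for illustration:

import torch

# hypothetical 3x3 patch (top-left corner of the input below) and a 3x3 kernel
patch  = torch.tensor([[3., 4., 6.],
                       [4., 6., 8.],
                       [7., 8., 4.]])
kernel = torch.tensor([[0., 1., 0.],
                       [1., 2., 1.],
                       [0., 1., 0.]])

# element-wise product (Hadamard product) followed by a sum
# gives a single scalar of the output feature map
out = (patch * kernel).sum()
print(out)  # tensor(36.)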


In a multi-channel convolution, every input channel gets its own kernel, and the per-channel results are summed into a single output channel.

The "convolution" used in deep learning is, in mathematical terms, cross-correlation; it is called convolution by convention. It differs slightly from the mathematical convolution (the kernel is not flipped), but this makes no practical difference.
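A hedged sketch verifying the channel-wise summation above: convolving a 2-channel input with one kernel group should equal the sum of two single-channel convolutions (all values here are random, just for the check):

import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 5, 5)   # batch=1, 2 input channels
w = torch.randn(1, 2, 3, 3)   # 1 output channel, one 3x3 kernel per input channel

full = F.conv2d(x, w)         # multi-channel convolution, no bias
# convolve each channel with its own kernel, then add the results
per_channel = (F.conv2d(x[:, 0:1], w[:, 0:1]) +
               F.conv2d(x[:, 1:2], w[:, 1:2]))

print(torch.allclose(full, per_channel))  # True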

With an n×n kernel and no padding, (n-1)/2 rows/columns are lost on each side, so the output width and height shrink by n-1. n is usually odd and kernels are usually square, although PyTorch also accepts even sizes and rectangular kernels.
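A quick check of the "shrink by n-1" rule, with an assumed 10×10 single-channel input:

import torch

x = torch.randn(1, 1, 10, 10)
for n in (3, 5, 7):
    conv = torch.nn.Conv2d(1, 1, kernel_size=n)  # no padding
    print(n, conv(x).shape)
# 3 -> torch.Size([1, 1, 8, 8])   10 - (3-1)
# 5 -> torch.Size([1, 1, 6, 6])   10 - (5-1)
# 7 -> torch.Size([1, 1, 4, 4])   10 - (7-1)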


Each group of kernels must have the same number of channels as the input; the number of kernel groups equals the number of output channels. After convolution, the channels no longer have anything to do with RGB.

After convolution, C (channels) changes; W (width) and H (height) may or may not change, depending on whether the border is padded (padding). Without padding, the border is lost.
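A small sketch (assuming a 5×5 single-channel input) showing that padding=1 with a 3×3 kernel preserves W and H:

import torch

x = torch.randn(1, 1, 5, 5)

no_pad = torch.nn.Conv2d(1, 1, kernel_size=3, padding=0, bias=False)
padded = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)

print(no_pad(x).shape)  # torch.Size([1, 1, 3, 3]) -- border lost
print(padded(x).shape)  # torch.Size([1, 1, 5, 5]) -- size preserved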

The convolutional layer preserves the spatial structure of the image. Convolution is itself a linear computation, and its kernel weights are parameters that can be optimized.

A convolutional network expects its inputs and outputs to be 4D tensors (Batch, Channel, Width, Height); a convolutional layer's weight has shape (m output channels, n input channels, kernel width w, kernel height h); a fully connected layer's input and output are 2D tensors (Batch, Input_feature).
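These shape conventions can be read directly off the layers' weights; a sketch with assumed layer sizes:

import torch

conv = torch.nn.Conv2d(3, 16, kernel_size=5)  # n=3 input channels, m=16 output channels
fc   = torch.nn.Linear(320, 10)

print(conv.weight.shape)  # torch.Size([16, 3, 5, 5]) -- (m, n, w, h)
print(fc.weight.shape)    # torch.Size([10, 320])     -- 2D for the linear layer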


After subsampling (pooling), C is unchanged, while W and H become original size / pooling size. (MaxPool2d is the most common form of subsampling; an n×n max pooling defaults to a stride of n.)

Like sigmoid, the pooling layer has no weights.
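A sketch (input sizes assumed) showing both points: MaxPool2d(2) halves W and H but leaves C alone, and it has no learnable parameters:

import torch

x = torch.randn(1, 10, 24, 24)
pool = torch.nn.MaxPool2d(2)    # 2x2 max pooling, default stride 2

print(pool(x).shape)            # torch.Size([1, 10, 12, 12]) -- C unchanged, W and H halved
print(list(pool.parameters()))  # [] -- nothing to train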


Convolution (linear transform), activation function (non-linear transform), pooling; after repeating this a few times, flatten the tensor with view and feed it into the fully connected layers.
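For the MNIST network built later in this post (two 5×5 convolutions with 2×2 max pooling), the shapes trace through like this; a sketch where fresh untrained layers are created inline just to check shapes:

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)                               # one MNIST image
x = F.max_pool2d(F.relu(torch.nn.Conv2d(1, 10, 5)(x)), 2)
print(x.shape)                                              # torch.Size([1, 10, 12, 12])
x = F.max_pool2d(F.relu(torch.nn.Conv2d(10, 20, 5)(x)), 2)
print(x.shape)                                              # torch.Size([1, 20, 4, 4])
x = x.view(x.size(0), -1)                                   # flatten: 20*4*4 = 320
print(x.shape)                                              # torch.Size([1, 320])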

import torch

in_channels, out_channels = 1, 10
width, height = 10, 10
kernel_size = 3
batch_size = 1
input = [3, 4, 6, 5, 7, 2, 2, 7, 2, 2,
         4, 6, 8, 2, 1, 6, 6, 1, 6, 6,
         7, 8, 4, 9, 7, 4, 6, 7, 4, 6,
         6, 2, 3, 7, 5, 4, 6, 5, 4, 6,
         1, 3, 4, 6, 5, 7, 6, 5, 7, 6,
         2, 4, 6, 8, 2, 1, 6, 2, 1, 6,
         2, 4, 6, 8, 2, 1, 6, 2, 1, 6,
         2, 4, 6, 8, 2, 1, 6, 2, 1, 6,
         4, 6, 8, 2, 1, 6, 6, 1, 6, 6,
         7, 8, 4, 9, 7, 4, 6, 7, 4, 6]
# view() reshapes the flat list into a 4D tensor (B, C, W, H)
input = torch.Tensor(input).view(batch_size,
                                 in_channels,
                                 width,
                                 height)
# In Conv2d's constructor the input channel count comes first and the output
# channel count second, but the weight's shape puts output before input.
# padding pads the border; bias adds an offset (usually omitted for convolution);
# stride is the step size; kernel_size is the kernel size.
conv_layer = torch.nn.Conv2d(in_channels,
                             out_channels,
                             kernel_size=kernel_size)
output = conv_layer(input)
print(input.shape)              # torch.Size([1, 1, 10, 10])
print(output.shape)             # torch.Size([1, 10, 8, 8])
print(conv_layer.weight.shape)  # torch.Size([10, 1, 3, 3])
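As a follow-up, the stride argument mentioned in the comments can be checked the same way, reusing the input from the block above (a sketch; the stride value is just for illustration):

# stride=2 moves the kernel two pixels at a time, roughly halving W and H
conv_stride = torch.nn.Conv2d(in_channels,
                              out_channels,
                              kernel_size=kernel_size,
                              stride=2)
print(conv_stride(input).shape)  # torch.Size([1, 10, 4, 4])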

Code implementation



import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt

# prepare dataset

batch_size = 64
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])

train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)


# design model using class

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)   # convolution
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)                 # pooling
        self.fc = torch.nn.Linear(320, 10)                   # linear

    def forward(self, x):
        # flatten data from (n, 1, 28, 28) to (n, 320)
        batch_size = x.size(0)  # first get the batch size (number of samples)
        x = self.pooling(F.relu(self.conv1(x)))
        x = self.pooling(F.relu(self.conv2(x)))
        x = x.view(batch_size, -1)  # the -1 is inferred automatically; here it is 320
        # print("x.shape", x.shape)
        x = self.fc(x)
        return x


model = Net()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)  # GPU acceleration

# construct loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# training cycle forward, backward, update


def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %% ' % (100 * correct / total))
    return correct / total


if __name__ == '__main__':
    epoch_list = []
    acc_list = []

    for epoch in range(10):
        train(epoch)
        acc = test()
        epoch_list.append(epoch)
        acc_list.append(acc)

    plt.plot(epoch_list, acc_list)
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.show()
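After training, the model can be tried on a single test image. A hedged sketch, reusing test_dataset, model, and device from the script above:

# single-image sanity check after training
image, label = test_dataset[0]                     # image: (1, 28, 28)
with torch.no_grad():
    logits = model(image.unsqueeze(0).to(device))  # add the batch dimension
    pred = logits.argmax(dim=1).item()
print('predicted:', pred, 'label:', label)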