
PyTorch: Building a Neural Network

A neural network is made up of layers/modules that perform operations on data. The torch.nn namespace provides all the building blocks you need to construct a neural network. Every module in PyTorch is a subclass of nn.Module. A neural network is itself a module that consists of other modules (layers). This nested structure makes it easy to build and manage complex architectures.
In the following sections, we will build a neural network to classify images in the FashionMNIST dataset.

import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

1. Get the training device

We want to be able to train our model on a hardware accelerator such as a GPU, if one is available. Let's check whether torch.cuda is available; otherwise, we continue on the CPU.

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

Using cuda device
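
If no CUDA device is present but you are running on Apple silicon, the same pattern extends to PyTorch's MPS backend; a minimal sketch, assuming a PyTorch build recent enough to ship torch.backends.mps:

# fall back through the available accelerators; MPS is Apple's GPU backend
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"
print(f"Using {device} device")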

2. Define the model class

We define our neural network by subclassing nn.Module and initialize the network's layers in __init__. Every nn.Module subclass implements its operations on input data in the forward method.

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()  # collapses each 28x28 image into a 784-value vector
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),  # 10 output logits, one per class
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

We create an instance of NeuralNetwork, move it to the device, and print its structure.

model = NeuralNetwork().to(device)
print(model)

NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)

To use the model, we pass it the input data. This executes the model's forward, along with some background operations. Do not call model.forward() directly!
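
One way to see the difference: model(X) dispatches through nn.Module.__call__, which also runs any registered hooks, while calling forward() directly bypasses them. A small sketch (log_shape is an illustrative name of our own):

def log_shape(module, inputs, output):
    # forward hooks run after forward() whenever the module is invoked as model(X)
    print(f"{module.__class__.__name__} -> {tuple(output.shape)}")

handle = model.register_forward_hook(log_shape)
_ = model(torch.rand(1, 28, 28, device=device))          # hook fires: NeuralNetwork -> (1, 10)
_ = model.forward(torch.rand(1, 28, 28, device=device))  # hook does not fire
handle.remove()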

Calling the model on the input returns a 2-dimensional tensor: dim=0 indexes the samples in the batch, and dim=1 holds the 10 raw predicted values (logits), one per class. We obtain prediction probabilities by passing the logits through an instance of the nn.Softmax module.

X = torch.rand(1, 28, 28, device=device)
logits = model(X)
pred_probab = nn.Softmax(dim=1)(logits)
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")
Predicted class: tensor([7], device='cuda:0')
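
To turn the predicted index into a human-readable label, you can index into the FashionMNIST class names (listed here in dataset order):

# FashionMNIST label names, indexed by class id 0-9
classes = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
           "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print(f"Predicted label: {classes[y_pred.item()]}")  # class 7 -> Sneaker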

3. Model layers

Let's walk through each layer in the FashionMNIST model. To illustrate, we will take a sample minibatch of 3 images of size 28×28 and see what happens to it as it propagates forward through the network.

input_image = torch.rand(3,28,28)
print(input_image.size())

torch.Size([3, 28, 28])
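
The "Before ReLU" / "After ReLU" tensors printed below come from flattening the images and passing them through a linear layer followed by nn.ReLU. A minimal sketch of those steps, with out_features=20 inferred from the 20 columns of the printed output:

flatten = nn.Flatten()              # collapse each 28x28 image into a 784-value vector
flat_image = flatten(input_image)   # torch.Size([3, 784])

layer1 = nn.Linear(in_features=28*28, out_features=20)
hidden1 = layer1(flat_image)        # torch.Size([3, 20])

print(f"Before ReLU: {hidden1}\n\n")
hidden1 = nn.ReLU()(hidden1)        # the non-linearity zeroes out negative activations
print(f"After ReLU: {hidden1}")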

Before ReLU: tensor([[-2.8178e-01, -1.9056e-01, -4.2242e-01,  8.7314e-01,  1.2661e-01,
         -2.3324e-02, -4.4453e-01, -6.5368e-01,  1.0900e-02, -9.2943e-02,
         -5.4586e-01,  1.0089e-01, -4.7385e-02, -2.0129e-01, -1.8159e-01,
          2.3144e-01,  1.1471e-01, -3.1626e-01,  2.5034e-01, -1.9407e-01],
        [ 3.5672e-02,  1.6900e-01, -9.7982e-02,  4.9019e-01,  6.8456e-02,
         -2.2985e-02, -4.1181e-01, -2.0627e-01,  6.6514e-02, -2.0231e-01,
         -4.0097e-01,  4.0099e-02,  4.0609e-03,  2.6897e-01, -8.6034e-02,
          7.1209e-02,  3.4773e-02, -1.3827e-01,  1.3006e-01, -1.8710e-01],
        [-2.2436e-01, -9.2731e-02, -2.0311e-01,  8.1353e-01,  2.7119e-01,
         -1.2389e-01, -6.8425e-01, -4.3364e-01,  1.6657e-01,  1.0768e-01,
         -5.7534e-01,  5.1125e-01, -9.5867e-02,  5.6160e-04,  3.0365e-01,
          4.6523e-01,  1.4591e-03, -2.4904e-02, -1.4773e-01, -1.9605e-01]],
       grad_fn=<AddmmBackward0>)


After ReLU: tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00, 8.7314e-01, 1.2661e-01, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 1.0900e-02, 0.0000e+00, 0.0000e+00, 1.0089e-01,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 2.3144e-01, 1.1471e-01, 0.0000e+00,
         2.5034e-01, 0.0000e+00],
        [3.5672e-02, 1.6900e-01, 0.0000e+00, 4.9019e-01, 6.8456e-02, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 6.6514e-02, 0.0000e+00, 0.0000e+00, 4.0099e-02,
         4.0609e-03, 2.6897e-01, 0.0000e+00, 7.1209e-02, 3.4773e-02, 0.0000e+00,
         1.3006e-01, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00, 8.1353e-01, 2.7119e-01, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 1.6657e-01, 1.0768e-01, 0.0000e+00, 5.1125e-01,
         0.0000e+00, 5.6160e-04, 3.0365e-01, 4.6523e-01, 1.4591e-03, 0.0000e+00,
         0.0000e+00, 0.0000e+00]], grad_fn=<ReluBackward0>)
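
Many layers in the network are parameterized, i.e. they carry weights and biases that are optimized during training. The model structure and the per-layer parameter dump below can be reproduced by printing the model and iterating over model.named_parameters(); a minimal sketch:

print(f"Model structure: {model}\n\n")

for name, param in model.named_parameters():
    # report each parameter tensor's name, shape, and its first two rows
    print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")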
Model structure: NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)


Layer: linear_relu_stack.0.weight | Size: torch.Size([512, 784]) | Values : tensor([[-0.0141, -0.0063,  0.0096,  ..., -0.0356, -0.0350,  0.0024],
        [0.0061, -0.0039,  0.0248,  ..., -0.0206, -0.0082,  0.0338]],
       device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.0.bias | Size: torch.Size([512]) | Values : tensor([0.0041, 0.0091], device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.2.weight | Size: torch.Size([512, 512]) | Values : tensor([[-0.0261, -0.0256, -0.0284,  ...,  0.0257,  0.0216, -0.0032],
        [-0.0419,  0.0007, -0.0021,  ..., -0.0327,  0.0298,  0.0193]],
       device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.2.bias | Size: torch.Size([512]) | Values : tensor([0.0280, 0.0200], device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.4.weight | Size: torch.Size([10, 512]) | Values : tensor([[-2.6175e-02,  1.3856e-03, -8.3709e-05,  ..., -7.1290e-03,
          3.2877e-02,  1.9994e-02],
        [-5.5316e-04,  1.1257e-02, -2.5447e-02,  ...,  3.3785e-02,
          1.1102e-02, -1.8905e-02]], device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.4.bias | Size: torch.Size([10]) | Values : tensor([-0.0314,  0.0086], device='cuda:0', grad_fn=<SliceBackward0>)
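
As a quick sanity check on the sizes above, the total parameter count for this architecture works out to 669,706 (784*512 + 512, plus 512*512 + 512, plus 512*10 + 10):

total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params}")  # 669706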