Quickstart
This section runs through the API for common tasks in machine learning.
Working with data
PyTorch has two primitives for working with data: DataLoader and Dataset. Dataset stores the samples and their corresponding labels, while DataLoader wraps an iterable around the Dataset.
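To make the Dataset contract concrete, here is a minimal sketch (not part of this tutorial's pipeline; ToyDataset and its random contents are made up for illustration): a custom dataset only has to implement __len__ and __getitem__, and a DataLoader can then iterate over it.
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Hypothetical in-memory dataset: 100 random 28x28 'images' with integer labels."""
    def __init__(self):
        self.samples = torch.rand(100, 1, 28, 28)   # features
        self.labels = torch.randint(0, 10, (100,))  # labels in [0, 10)

    def __len__(self):
        # Number of samples in the dataset.
        return len(self.samples)

    def __getitem__(self, idx):
        # Return one (sample, label) pair.
        return self.samples[idx], self.labels[idx]

loader = DataLoader(ToyDataset(), batch_size=8, shuffle=True)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([8, 1, 28, 28]) torch.Size([8])
The rest of this section uses a real dataset from TorchVision instead.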
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio, all of which include datasets. For example, the torchvision.datasets module contains VisionDataset objects for many real-world vision datasets such as CIFAR and COCO. In this tutorial we use the FashionMNIST dataset. Every VisionDataset takes two arguments, transform and target_transform, which modify the samples and the labels respectively.
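If the labels also need modifying, that is what target_transform is for. As a sketch only (the rest of this tutorial keeps the integer labels), torchvision's Lambda transform can turn each label into a one-hot vector:
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda

# Illustration only: each integer label y becomes a length-10 one-hot float vector.
onehot_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
    target_transform=Lambda(
        lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
    ),
)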
# Download the training dataset.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download the test dataset.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Using downloaded and verified file: data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Using downloaded and verified file: data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
100.0%
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
119.3%
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
We pass the Dataset as an argument to DataLoader. This wraps an iterable over the dataset and supports automatic batching, sampling, shuffling, and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.
batch_size = 64

# Create the data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
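For training it is often useful to enable some of the DataLoader features mentioned above. The variant below is a sketch, not what the remaining code assumes: shuffle=True reshuffles the training data every epoch, and num_workers=2 (an arbitrary choice) loads batches in worker processes.
# Optional variant: shuffled training batches and multiprocess loading.
train_dataloader = DataLoader(training_data, batch_size=batch_size, shuffle=True, num_workers=2)
test_dataloader = DataLoader(test_data, batch_size=batch_size, num_workers=2)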
Building the model
To define a neural network in PyTorch, we create a class that inherits from torch.nn.Module. We define the layers of the network in the __init__() function and specify how data passes through the network in the forward() function. To accelerate operations in the neural network, we move it to the GPU if one is available.
# Get a CPU or GPU device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

# Define the model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)
Using cuda device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)
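As a quick sanity check (an extra step, not part of the original walkthrough), we can pass one random 28x28 tensor through the model and confirm that it returns one logit per class:
X = torch.rand(1, 28, 28, device=device)  # a single fake "image"
logits = model(X)
print(logits.shape)  # torch.Size([1, 10]): one logit per FashionMNIST class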
Optimizing the model parameters
To train a model, we need a loss function and an optimizer.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
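As a small illustration (extra to the tutorial, with made-up tensors), the loss function compares raw logits against integer class labels; an alternative optimizer such as Adam could also be swapped in without changing the training loop below.
# Illustration only: CrossEntropyLoss takes raw logits and integer class labels.
dummy_logits = torch.randn(4, 10)          # a fake batch of 4 predictions
dummy_labels = torch.tensor([0, 3, 9, 1])  # fake target classes
print(loss_fn(dummy_logits, dummy_labels)) # a scalar loss tensor

# Hypothetical alternative optimizer (commented out so the tutorial keeps SGD):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)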
In a single training loop, the model makes predictions on the training dataset (fed to it in batches) and backpropagates the prediction error to adjust the model's parameters.
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute the prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
We also check the model's performance against the test dataset to make sure it is learning.
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
The training process runs for several iterations (epochs). During each epoch, the model learns parameters to make better predictions. We print the model's accuracy and loss at every epoch; we would like to see the accuracy increase and the loss decrease from epoch to epoch.
epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")
Saving the model
A common way to save a model is to serialize the internal state dictionary (which contains the model parameters).
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch model state to model.pth")
Saved PyTorch model state to model.pth
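Beyond the bare weights, it is common to save a richer checkpoint so training can resume later. The sketch below is only an assumption about what you might want to keep (the file name checkpoint.pth and the dictionary keys are arbitrary); the rest of this tutorial does not use it.
# Sketch of a fuller checkpoint: model weights, optimizer state, and progress.
torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
        "epochs_completed": epochs,
    },
    "checkpoint.pth",
)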
Loading the model
Loading a model involves re-creating the model structure and loading the state dictionary into it.
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
<All keys matched successfully>
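One caveat worth knowing: the state dictionary above was saved from a model living on the GPU. If you load it on a machine without CUDA, pass map_location to torch.load so the stored tensors are remapped; the sketch below maps everything to the CPU.
# Load GPU-saved weights onto the CPU (or onto `device` in general).
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth", map_location=torch.device("cpu")))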
This model can now be used to make predictions.
classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]
model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')
Predicted: "Ankle boot", Actual: "Ankle boot"