
Tutorial: Implementing Fully Connected Layers in PyTorch


Fully connected neural networks (FC)

The fully connected neural network is one of the most basic network architectures. Its English name is Full Connection, so it is usually abbreviated as FC.

The rule of FC is simple: every node in the network, except those in the input layer, is connected to all nodes of the previous layer. In PyTorch terms, one such layer is just a matrix multiplication plus a bias, as the small sketch below shows.
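A minimal sketch (my own illustration; the shapes here are arbitrary, not taken from the article) of a single fully connected layer using nn.Linear:

import torch
from torch import nn

# A hypothetical layer mapping 784 input features to 200 output features.
fc = nn.Linear(784, 200)

x = torch.randn(32, 784)   # a batch of 32 flattened 28x28 images
out = fc(x)                # equivalent to x @ fc.weight.t() + fc.bias
print(out.shape)           # torch.Size([32, 200])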

Take the MNIST example from the previous post:

import torch
import torch.utils.data
from torch import optim
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

# Manually defined weights and biases for three fully connected layers.
w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)

torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)


def forward(x):
    x = x @ w1.t() + b1
    x = F.relu(x)
    x = x @ w2.t() + b2
    x = F.relu(x)
    x = x @ w3.t() + b3
    x = F.relu(x)
    return x


optimizer = optim.Adam([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = torch.nn.CrossEntropyLoss()

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28 * 28)

        logits = forward(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()

        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

Here we defined every w and b ourselves and hand-wrote a forward function. If we instead use fully connected layers, the whole program becomes much more concise.

First, we define a class for our own network structure:

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x

It inherits from nn.Module and defines the entire network structure itself.

The inplace flag makes the activation reuse the input tensor's storage directly instead of allocating new memory for the output.
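As a rough illustration (a minimal sketch of my own, not from the article) of the difference:

import torch
from torch import nn

x = torch.randn(3)
print(x)

y = nn.LeakyReLU()(x)              # default: allocates a new output tensor, x is unchanged
z = nn.LeakyReLU(inplace=True)(x)  # inplace=True: writes the result back into x's storage
print(x)                           # x now holds the activated values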

Besides that, the network can be called directly for computation; there is no need to define the parameters by hand or write out the arithmetic, which is much more convenient.

We can also see that initialization is done automatically, so unlike before, we no longer need to write an initialization step by hand. A short check of this follows below.
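A quick way to verify this (a hedged sketch of my own; the print statements are not part of the article, and the MLP class from above is assumed):

import torch
from torch import nn

# Reusing the MLP class defined above.
net = MLP()

# Every nn.Linear inside already holds initialized weights and biases.
for name, p in net.named_parameters():
    print(name, p.shape)

x = torch.randn(4, 784)   # a dummy batch of 4 flattened 28x28 images
print(net(x).shape)       # torch.Size([4, 10])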

Distinguishing nn.ReLU from F.relu()

The former is a class-style interface, while the latter is a functional interface.

The former is capitalized and must be instantiated before it can be called, while the latter is lowercase and can be used directly.

Most importantly, the latter offers more flexibility and is better suited to custom operations. The sketch below puts the two styles side by side.
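For comparison, a minimal sketch (the tensor names are my own) showing the two interfaces producing the same result:

import torch
from torch import nn
import torch.nn.functional as F

x = torch.randn(2, 5)

# Class-style interface: instantiate first (typically inside nn.Sequential or __init__).
relu_layer = nn.ReLU()
out1 = relu_layer(x)

# Functional interface: call directly, e.g. inside a hand-written forward().
out2 = F.relu(x)

print(torch.equal(out1, out2))   # True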

Complete code

import torch
import torch.utils.data
from torch import optim, nn
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)


class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x


device = torch.device('cuda:0')
net = MLP().to(device)
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)

        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criteon(logits, target).item()

        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
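Note that the code above hardcodes a CUDA device. If you run it on a machine without a GPU, one option (my addition, not part of the original) is to pick the device conditionally:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')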

Supplement: implementing a fully connected network with one hidden layer in PyTorch

torch.nn provides the definitions of the model, the network layers, and the loss function.

import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

Above, we update the parameters manually with param -= learning_rate * param.grad.

We can instead use torch.optim to update the parameters automatically. The optim package provides a variety of optimization methods, including SGD with momentum, RMSProp, Adam, and so on.

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
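Switching to another optimizer only requires changing the constructor; for example, SGD with momentum (a sketch of mine, with illustrative hyperparameter values):

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)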

The above reflects my personal experience; I hope it serves as a useful reference, and I hope everyone continues to support 51zixue.net. If there are mistakes or points I have not fully considered, corrections are welcome.

