Pytorch: how to add L1 regularizer to activations?


Question

I would like to add an L1 regularizer to the activations output by a ReLU. More generally, how does one add a regularizer only to a particular layer in the network?

Related material:

  • This similar post refers to adding L2 regularization, but it appears to add the regularization penalty to all layers of the network.
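For context, here is a minimal sketch of that all-layers approach (illustrative only, not from the original post): the optimizer's weight_decay argument adds an L2 penalty on every parameter handed to the optimizer, which is why it cannot target a single layer's activations.

import torch

model = torch.nn.Linear(128, 2)  # any model; used only for illustration

# weight_decay applies an L2 penalty to *all* parameters passed to the
# optimizer, i.e. every layer's weights, not one particular layer's activations
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, weight_decay=1e-3)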

nn.modules.loss.L1Loss() seems relevant, but I do not yet understand how to use it.
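As a side note, one way nn.L1Loss could be used for this is to compare the activations against an all-zero target, which amounts to the sum (or mean) of their absolute values. The snippet below is only a sketch of that idea and is not part of the original answer:

import torch

l1_criterion = torch.nn.L1Loss(reduction='sum')

layer_activation = torch.rand(4, 32)  # e.g. the output of a ReLU layer
# L1 penalty = sum of absolute distances from an all-zero target
l1_penalty = l1_criterion(layer_activation, torch.zeros_like(layer_activation))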

The legacy module L1Penalty also seems relevant, but why has it been deprecated?

Answer

You can do this as follows:

  • In your Module's forward, return the final output together with the outputs of the layers you want to apply L1 regularization to.
  • The loss variable will be the sum of the cross-entropy loss of the output w.r.t. the targets and the L1 penalties.

Here is example code:

import torch
from torch.nn import functional as F


class MLP(torch.nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.linear1 = torch.nn.Linear(128, 32)
        self.linear2 = torch.nn.Linear(32, 16)
        self.linear3 = torch.nn.Linear(16, 2)

    def forward(self, x):
        layer1_out = F.relu(self.linear1(x))
        layer2_out = F.relu(self.linear2(layer1_out))
        out = self.linear3(layer2_out)
        # return the intermediate activations as well, so the training loop
        # can put a penalty on them
        return out, layer1_out, layer2_out

batchsize = 4
lambda1, lambda2 = 0.5, 0.01

model = MLP()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# usually the following code is looped over all batches,
# but let's just do a dummy batch for brevity

inputs = torch.rand(batchsize, 128)
targets = torch.ones(batchsize).long()

optimizer.zero_grad()
outputs, layer1_out, layer2_out = model(inputs)
cross_entropy_loss = F.cross_entropy(outputs, targets)

# penalize the returned activations: L1 on the first ReLU output,
# L2 on the second ReLU output
l1_regularization = lambda1 * torch.norm(layer1_out, 1)
l2_regularization = lambda2 * torch.norm(layer2_out, 2)

loss = cross_entropy_loss + l1_regularization + l2_regularization
loss.backward()
optimizer.step()
