
【NLP 40: Residual Neural Networks (ResNet)】

—— 25.3.16

I. The Core Idea of Residual Neural Networks

  1. Residual mapping: a traditional network learns a direct, complex input-to-output mapping H(x), whereas a residual network instead learns the residual function F(x) = H(x) − x, giving the final output y = F(x) + x. With this design, even if the deeper layers fail to learn useful features, the network can fall back on the identity mapping x, so performance does not degrade (see the sketch after this list).

  2. Skip connection (Shortcut Connection)

    In a residual block, the input is carried across the layers directly to the output, where it is added to the result of the convolution layers and activation function. This design gives the gradient a "direct path" during backpropagation and alleviates the vanishing-gradient problem.
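To make this concrete, here is a minimal sketch (my own illustration, not from the original post) of the y = F(x) + x pattern, where F is a hypothetical two-layer MLP. Zeroing out F's weights shows that the block then computes exactly the identity mapping:

import torch
import torch.nn as nn

class ToyResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # F(x): the residual function, here a small two-layer MLP
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return self.f(x) + x  # skip connection: y = F(x) + x

block = ToyResidualBlock(16)
x = torch.randn(4, 16)

# If F's weights are all zero, the block degenerates to the identity mapping
for p in block.f.parameters():
    nn.init.zeros_(p)
print(torch.allclose(block(x), x))  # True: F(x) = 0, so y = x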

II. Designing Residual Blocks

1. BasicBlock

  • Contains two 3×3 convolution layers, each followed by batch normalization (BatchNorm) and a ReLU activation
  • The input is added to the output of the second convolution layer via the skip connection, then passed through one final ReLU activation
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # bias=False: the bias is redundant because each conv is followed by BatchNorm
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.shortcut = nn.Identity()  # identity mapping (when input and output shapes match)
        if stride != 1 or in_channels != out_channels:
            # 1x1 projection so the shortcut can be added to the main branch
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)  # skip connection
        return F.relu(out)
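A quick shape check (my own usage example, continuing from the class above) shows why the projection shortcut is needed when the block downsamples: the 1×1 convolution makes the two branches addable.

x = torch.randn(1, 64, 56, 56)
block = BasicBlock(64, 128, stride=2)  # channels double, spatial size halves
print(block(x).shape)                  # torch.Size([1, 128, 28, 28])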

2. Bottleneck

  • Uses a 1×1 → 3×3 → 1×1 convolution stack; the 1×1 convolutions reduce and then restore the channel dimension, cutting the computation (a sketch follows below)
  • Typically used in the deeper variants: ResNet-50/101/152
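The original post gives no code for this variant; the following is a sketch in the standard style (the mid_channels argument and the expansion = 4 factor follow the conventions of the ResNet paper, not this post):

class Bottleneck(nn.Module):
    expansion = 4  # output channels = mid_channels * expansion

    def __init__(self, in_channels, mid_channels, stride=1):
        super().__init__()
        out_channels = mid_channels * self.expansion
        # 1x1 conv reduces the channel count before the expensive 3x3 conv
        self.conv1 = nn.Conv2d(in_channels, mid_channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_channels)
        # 3x3 conv operates on the reduced number of channels
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(mid_channels)
        # 1x1 conv restores (expands) the channel count
        self.conv3 = nn.Conv2d(mid_channels, out_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels)
        self.shortcut = nn.Identity()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return F.relu(out + self.shortcut(x))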

III. How Residual Networks Address Vanishing Gradients

1. Gradient propagation formula

By the chain rule, the gradient flowing through a residual block y = F(x) + x is:

∂loss/∂x = ∂loss/∂y · (1 + ∂F(x)/∂x)

Even if ∂F(x)/∂x approaches zero, the gradient can still propagate effectively through the identity term 1.
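A tiny numerical check (my own illustration): even with the residual branch's weights set to zero, the gradient of the output with respect to the input stays at exactly 1, carried by the identity term.

import torch

x = torch.randn(8, requires_grad=True)
w = torch.zeros(8, requires_grad=True)  # residual branch with zero weights

y = w * x + x  # y = F(x) + x with F(x) = w * x, so dF/dx = w = 0
y.sum().backward()
print(x.grad)  # all ones: the identity term keeps the gradient alive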

2. Effect in practice

The skip connections let gradients flow directly back to the shallow layers during backpropagation, avoiding exponential decay. Experiments show that the training error of ResNet-152 (152 layers) is lower than that of a plain 20-layer CNN.

3. Code example: a ResNet-18

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResNet18(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)  # bias is redundant before BatchNorm
        self.bn1 = nn.BatchNorm2d(64)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        
        # groups of residual blocks
        self.layer1 = self._make_layer(64, 64, stride=1, num_blocks=2)
        self.layer2 = self._make_layer(64, 128, stride=2, num_blocks=2)
        self.layer3 = self._make_layer(128, 256, stride=2, num_blocks=2)
        self.layer4 = self._make_layer(256, 512, stride=2, num_blocks=2)
        
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)
    
    def _make_layer(self, in_channels, out_channels, stride, num_blocks):
        layers = [BasicBlock(in_channels, out_channels, stride)]
        for _ in range(num_blocks-1):
            layers.append(BasicBlock(out_channels, out_channels, stride=1))
        return nn.Sequential(*layers)
    
    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))  # initial conv stem
        x = self.maxpool(x)
        
        x = self.layer1(x)  # four groups of residual blocks
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        
        x = self.avgpool(x)  # global average pooling
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x

# Usage example
model = ResNet18(num_classes=1000)
input_tensor = torch.randn(1, 3, 224, 224)
output = model(input_tensor)
print(output.shape)  # torch.Size([1, 1000])