[{"createTime":1735734952000,"id":1,"img":"hwy_ms_500_252.jpeg","link":"https://activity.huaweicloud.com/cps.html?fromacct=261f35b6-af54-4511-a2ca-910fa15905d1&utm_source=V1g3MDY4NTY=&utm_medium=cps&utm_campaign=201905","name":"华为云秒杀","status":9,"txt":"华为云38元秒杀","type":1,"updateTime":1735747411000,"userId":3},{"createTime":1736173885000,"id":2,"img":"txy_480_300.png","link":"https://cloud.tencent.com/act/cps/redirect?redirect=1077&cps_key=edb15096bfff75effaaa8c8bb66138bd&from=console","name":"腾讯云秒杀","status":9,"txt":"腾讯云限量秒杀","type":1,"updateTime":1736173885000,"userId":3},{"createTime":1736177492000,"id":3,"img":"aly_251_140.png","link":"https://www.aliyun.com/minisite/goods?userCode=pwp8kmv3","memo":"","name":"阿里云","status":9,"txt":"阿里云2折起","type":1,"updateTime":1736177492000,"userId":3},{"createTime":1735660800000,"id":4,"img":"vultr_560_300.png","link":"https://www.vultr.com/?ref=9603742-8H","name":"Vultr","status":9,"txt":"Vultr送$100","type":1,"updateTime":1735660800000,"userId":3},{"createTime":1735660800000,"id":5,"img":"jdy_663_320.jpg","link":"https://3.cn/2ay1-e5t","name":"京东云","status":9,"txt":"京东云特惠专区","type":1,"updateTime":1735660800000,"userId":3},{"createTime":1735660800000,"id":6,"img":"new_ads.png","link":"https://www.iodraw.com/ads","name":"发布广告","status":9,"txt":"发布广告","type":1,"updateTime":1735660800000,"userId":3},{"createTime":1735660800000,"id":7,"img":"yun_910_50.png","link":"https://activity.huaweicloud.com/discount_area_v5/index.html?fromacct=261f35b6-af54-4511-a2ca-910fa15905d1&utm_source=aXhpYW95YW5nOA===&utm_medium=cps&utm_campaign=201905","name":"底部","status":9,"txt":"高性能云服务器2折起","type":2,"updateTime":1735660800000,"userId":3}]
PyTorch: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time
1. Multiple loss values
Setting retain_graph=True is mostly used when backward() has to be called twice.
# If there are two losses, run the first backward() first, then the second.
loss1.backward(retain_graph=True)  # the computation graph is not freed immediately
loss2.backward()   # after this call, all intermediate buffers are freed for the next iteration
optimizer.step()   # update the parameters
After setting retain_graph=True, make sure the graph does get released eventually; otherwise GPU memory usage keeps climbing and the code runs slower and slower.
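To make the pattern concrete, here is a minimal self-contained sketch; the linear model, the data, and both losses are invented for illustration, and only the retain_graph usage is the point:

import torch
import torch.nn as nn

# Toy setup (assumed): a linear model and two made-up losses sharing one forward graph.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(32, 10)
y = torch.randn(32, 1)

for epoch in range(5):
    optimizer.zero_grad()
    pred = model(x)
    loss1 = nn.functional.mse_loss(pred, y)  # first loss
    loss2 = pred.abs().mean()                # second loss built on the same graph
    loss1.backward(retain_graph=True)        # keep the graph alive for the second backward
    loss2.backward()                         # graph is freed here, ready for the next epoch
    optimizer.step()

If the two losses can simply be summed, (loss1 + loss2).backward() accumulates the same gradients in one pass and avoids retain_graph entirely.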
2. Sometimes the error appears even though there is clearly only one model
The first cause is the input.
# Example
x = torch.randn((100, 1), requires_grad=True)
y = 1 + 2 * x + 0.3 * torch.randn(100, 1)
x_train, y_train = x[:70], y[:70]
x_val, y_val = x[70:], y[70:]

for epoch in range(n_epochs):
    ...
    prediction = model(x_train)
    loss.backward()
    ...
Because x requires gradients and x_train = x[:70] is computed once before the loop, that slicing operation ends up in every epoch's graph, and its buffers are freed after the first backward(). The part of the graph attached to the input is never rebuilt across iterations, and we do not need gradients for the input anyway, so setting requires_grad=False on x solves the problem.
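A minimal sketch of the fixed loop (the linear model, loss, optimizer and epoch count are assumptions added just to make it runnable); the essential change is that x no longer requires gradients, so no graph is built outside the loop:

import torch
import torch.nn as nn

x = torch.randn((100, 1), requires_grad=False)  # the input no longer requires gradients
y = 1 + 2 * x + 0.3 * torch.randn(100, 1)
x_train, y_train = x[:70], y[:70]

model = nn.Linear(1, 1)                          # assumed model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

n_epochs = 100
for epoch in range(n_epochs):
    optimizer.zero_grad()
    prediction = model(x_train)
    loss = nn.functional.mse_loss(prediction, y_train)
    loss.backward()      # only this epoch's graph is involved; nothing is shared across epochs
    optimizer.step()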
The second cause is one I ran into while training an LSTM.
class LSTMpred(nn.Module):
    def __init__(self, input_size, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.hidden = self.init_hidden()
        ...

    def init_hidden(self):
        # we need fresh hidden-state tensors here
        return (torch.zeros(1, 1, self.hidden_dim, requires_grad=True),
                torch.zeros(1, 1, self.hidden_dim, requires_grad=True))

    def forward(self, seq):
        ...
The self.hidden state here has to be re-initialized at every training iteration:
for epoch in range(Epoch):
    ...
    model.hidden = model.init_hidden()
    modout = model(seq)
    ...
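For completeness, here is a runnable sketch of this pattern; the LSTM layer, output head, toy data and hyperparameters are guesses that fill in the elided parts of LSTMpred, and the per-epoch model.hidden = model.init_hidden() line is the actual fix:

import torch
import torch.nn as nn

class LSTMpred(nn.Module):
    def __init__(self, input_size, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.lstm = nn.LSTM(input_size, hidden_dim)   # assumed single-layer LSTM
        self.out = nn.Linear(hidden_dim, 1)           # assumed output head
        self.hidden = self.init_hidden()

    def init_hidden(self):
        return (torch.zeros(1, 1, self.hidden_dim, requires_grad=True),
                torch.zeros(1, 1, self.hidden_dim, requires_grad=True))

    def forward(self, seq):
        lstm_out, self.hidden = self.lstm(seq.view(len(seq), 1, -1), self.hidden)
        return self.out(lstm_out.view(len(seq), -1))

model = LSTMpred(input_size=1, hidden_dim=8)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
seq, target = torch.randn(20, 1), torch.randn(20, 1)  # toy data

for epoch in range(5):
    model.hidden = model.init_hidden()  # fresh hidden state -> a fresh graph every epoch
    optimizer.zero_grad()
    modout = model(seq)
    loss = nn.functional.mse_loss(modout, target)
    loss.backward()
    optimizer.step()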
3. Summary
Thinking about it, these cases all come down to the same thing: the network will not allow backward() to run through the same graph more than once. In other words, during the gradient-descent loop several iterations end up sharing one variable that requires gradients, and once an earlier iteration has freed the graph attached to that variable, the next iteration trips over it (my personal take).
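As a sanity check on this picture, the error can be reproduced in a few lines with a toy graph that has nothing to do with the models above:

import torch

x = torch.tensor(1.0, requires_grad=True)
y = torch.sin(x)   # this node saves x for its backward pass
z = 3 * y

z.backward()       # first backward: the graph's saved buffers are freed
try:
    y.backward()   # second backward through the same, already freed, graph
except RuntimeError as err:
    print(err)     # "Trying to backward through the graph a second time ..."

Passing retain_graph=True to the first backward(), or rebuilding the graph before each backward(), makes the second call succeed.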