
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: PyTorch error

singa1994 · asked 4 years ago

    On the first iteration, the backward passes for both the discriminator and the generator run fine:

    ....
    
    self.G_loss.backward(retain_graph=True)
    
    self.D_loss.backward()
    
    ...
    

    On the second iteration, when self.G_loss.backward(retain_graph=True) executes, I get the following error:

    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
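
    In general, this error is raised when a tensor that autograd saved during the forward pass is modified in place before a backward pass that still needs it runs. A minimal sketch that reproduces the same message, using hypothetical tensors rather than the network from this question:

        import torch

        x = torch.randn(4, requires_grad=True)
        h = x.exp()                          # exp() saves its output h for its backward pass
        loss_1 = h.sum()
        loss_2 = (h * 2).sum()

        loss_1.backward(retain_graph=True)   # fine: h is still at version 0
        h += 1                               # in-place add bumps h's version counter
        loss_2.backward()                    # RuntimeError: ... is at version 1; expected version 0 instead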
    

    According to torch.autograd.set_detect_anomaly, the last of the following lines in the discriminator network is responsible for this:

        bottleneck = bottleneck[:-1]
        self.embedding = x.view(x.size(0), -1)
        self.logit = self.layers[-1](self.embedding)
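
    Note that anomaly detection points at the forward operation whose backward failed (here the last layer that consumes self.embedding); as the hint in the error message says, the in-place modification itself may have happened there or anywhere later. It is presumably switched on somewhere before the forward pass with something like:

        import torch

        torch.autograd.set_detect_anomaly(True)   # failing backward ops then also report the forward traceback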
    

    Full error:

        site-packages\torch\autograd\__init__.py", line 127, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
    
1 Answer

singa1994 · 4 years ago

    Solved by removing the loss += loss_val line.
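
    For anyone hitting the same message: += on a tensor is an in-place operation, so if that tensor is still needed by a backward pass that runs later (here the graph is kept alive with retain_graph=True), its bumped version counter produces exactly this RuntimeError. A minimal sketch of the usual alternatives, with illustrative names rather than the original code:

        import torch

        x = torch.randn(4, requires_grad=True)
        loss = x.exp().sum()
        loss_val = x.mean()

        # loss += loss_val would modify loss in place (and bump its version counter);
        # the out-of-place form builds a new tensor and leaves saved tensors alone:
        loss = loss + loss_val

        # if a running total is only needed for logging, detach it from the graph:
        running_total = 0.0
        running_total += loss_val.item()

        loss.backward()   # backward still works on the out-of-place result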