
Multiple sequential TensorFlow operations in the same Session.run() call

  •  2
  •  bremen_matt  ·  7 years ago

    As the title suggests, I would like to run multiple sequential TensorFlow operations in a single Session.run() call. Specifically, to make the problem more concrete, suppose that I want to run multiple training iterations in a single call.

    The standard way to do this, using multiple sess.run() calls, looks like the following:

    # Declare the function that we want to minimize
    func = ...
    
    # Create the optimizer which will perform a single optimization iteration
    optimizer = tf.train.AdamOptimizer().minimize(func)
    
    # Run N optimization iterations
    N = 10
    with tf.Session() as sess:
    
        sess.run( tf.global_variables_initializer() )
        for i in range(N):
            sess.run( optimizer )
    

    However, this of course incurs some overhead, since we are making multiple session calls. I assume that we could eliminate a significant part of that overhead by grouping the operations somehow. I assumed that group or count_up_to were what I should use, but I cannot find any examples demonstrating how to use them in this situation. Can anyone point me in the right direction?
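    (A side note, not part of the original question: a minimal sketch of why tf.group by itself does not achieve this. It merely fuses its input ops into a single op, running each input once with no ordering between them, so it cannot express "run the optimizer N times in sequence".)

    import tensorflow as tf

    # Assumes `func` and `optimizer` are defined as in the snippet above.
    step = tf.group(optimizer)  # one fused op, still only a single iteration
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(step)  # same effect as sess.run(optimizer)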

    The end goal is to define a compound operation that runs N iterations in a single call, so that the code above could be transformed into something like this:

    # Declare the function that we want to minimize
    func = ...
    
    # Create the optimizer which will perform a single optimization iteration
    optimizer = tf.train.AdamOptimizer().minimize(func)
    
    # Create the compound operation that will run the optimizer 10 times
    optimizeNIterations = ?????
    with tf.Session() as sess:
    
        sess.run( tf.global_variables_initializer() )
        sess.run( optimizeNIterations )
    

    EDIT:

    musically_ut pointed out that I can indeed chain the operations together by forcing the issue with feed dictionaries. But that seems like a solution to a very specific problem. What I really care about is how to execute operations sequentially within a single session run. I can give another example of why you would want this....

    Suppose that now, in addition to running the optimizer, I also want to retrieve the optimized values, which, say, live in the variable X. If I want to optimize and then get the optimized values, I might try to do this:

    with tf.Session() as sess:
    
        sess.run( tf.global_variables_initializer() )
        o, x = sess.run( [ optimizer, X ] )
    

    But in fact this will not work, because the operations (optimizer, X) are not run in any guaranteed order, so X may be read before the optimizer has applied its update. I essentially need to make two session calls:

    with tf.Session() as sess:
    
        sess.run( tf.global_variables_initializer() )
        o = sess.run( optimizer )
        x = sess.run( X )
    

    The question is how to combine these two calls into a single one.
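    (One way to do this, sketched below under the assumption that X is a resource variable, as in the answer that follows: add a control dependency so that the read of X can only run after the optimizer has applied its update.)

    # Read X only after the optimizer has run, all in one Session.run() call
    with tf.control_dependencies([optimizer]):
        X_after = X.read_value()  # post-update read (resource variables)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        x = sess.run(X_after)  # runs the optimizer, then reads X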

    1 Answer  |  7 years ago
        1
  •  3
  •   musically_ut  ·  7 years ago

    It sounds like you can put whatever ops you want to run multiple times inside a tf.while_loop. If the ops are independent, you'd either need to set parallel_iterations to 1 or (better) use control dependencies to sequence the optimizer calls. For example:

    import tensorflow as tf
    
    with tf.Graph().as_default():
      opt = tf.train.AdamOptimizer(0.1)
      # Use a resource variable for a true "read op"
      var = tf.get_variable(name="var", shape=[], use_resource=True)
      def _cond(i, _):
        return tf.less(i, 20)  # 20 iterations
      def _body(i, sequencer):
        with tf.control_dependencies([sequencer]):
          loss = .5 * (var - 10.) ** 2
          print_op = tf.Print(loss, ["Evaluating loss", i, loss])
        with tf.control_dependencies([print_op]):
          train_op = opt.minimize(loss)
        with tf.control_dependencies([train_op]):
          next_sequencer = tf.ones([])
        return i + 1, next_sequencer
      initial_value = var.read_value()
      with tf.control_dependencies([initial_value]):
        _, sequencer = tf.while_loop(cond=_cond, body=_body, loop_vars=[0, 1.])
      with tf.control_dependencies([sequencer]):
        final_value = var.read_value()
      init_op = tf.global_variables_initializer()
      with tf.Session() as session:
        session.run([init_op])
        print(session.run([initial_value, final_value]))
    

    This prints:

    2017-12-21 11:40:35.920035: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][0][46.3987083]
    2017-12-21 11:40:35.920317: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][1][45.4404]
    2017-12-21 11:40:35.920534: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][2][44.4923515]
    2017-12-21 11:40:35.920715: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][3][43.55476]
    2017-12-21 11:40:35.920905: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][4][42.6277695]
    2017-12-21 11:40:35.921084: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][5][41.711544]
    2017-12-21 11:40:35.921273: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][6][40.8062363]
    2017-12-21 11:40:35.921426: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][7][39.9120026]
    2017-12-21 11:40:35.921578: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][8][39.028965]
    2017-12-21 11:40:35.921732: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][9][38.1572723]
    2017-12-21 11:40:35.921888: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][10][37.2970314]
    2017-12-21 11:40:35.922053: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][11][36.4483566]
    2017-12-21 11:40:35.922187: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][12][35.6113625]
    2017-12-21 11:40:35.922327: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][13][34.7861366]
    2017-12-21 11:40:35.922472: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][14][33.9727631]
    2017-12-21 11:40:35.922613: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][15][33.1713257]
    2017-12-21 11:40:35.922777: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][16][32.3818779]
    2017-12-21 11:40:35.922942: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][17][31.6044941]
    2017-12-21 11:40:35.923115: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][18][30.8392067]
    2017-12-21 11:40:35.923253: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][19][30.0860634]
    [0.36685812, 2.3390481]
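
    (Note the sequencer dummy value threaded through the loop: each iteration computes its loss under a control dependency on the previous iteration's train op, which forces the 20 optimization steps to run strictly in sequence inside a single Session.run() call, with initial_value read before the loop starts and final_value read only after it finishes.)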