
Placeholder definition for ndarray input in TensorFlow

  • Cranjis  · asked 7 years ago

    I am trying to build an LSTM RNN based on the following guide: http://monik.in/a-noobs-guide-to-implementing-rnn-lstm-using-tensorflow/ . My input is an ndarray of size 89102*39 (89102 rows, 39 features). The data has 3 labels: 0, 1, 2.

        data = tf.placeholder(tf.float32, [None, 1000, 39])
        target = tf.placeholder(tf.float32, [None, 3])
        cell = tf.nn.rnn_cell.LSTMCell(self.num_hidden)
    
        val, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)
        val = tf.transpose(val, [1, 0, 2])
        last = tf.gather(val, int(val.get_shape()[0]) - 1)
    
    
        weight = tf.Variable(tf.truncated_normal([self.num_hidden, int(target.get_shape()[1])]))
        bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
    
        prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
    
        cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))
    
        optimizer = tf.train.AdamOptimizer()
        minimize = optimizer.minimize(cross_entropy)
    
        mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
        error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
    
    
        init_op = tf.initialize_all_variables()
        sess = tf.Session()
        sess.run(init_op)
        batch_size = 1000
        no_of_batches = int(len(train_input) / batch_size)
        epoch = 5000
        for i in range(epoch):
            ptr = 0
            for j in range(no_of_batches):
                inp, out = train_input[ptr:ptr + batch_size], train_output[ptr:ptr + batch_size]
                ptr += batch_size
                sess.run(minimize, {data: inp, target: out})
            print( "Epoch - ", str(i))
    

    I get the following error:

    File , line 133, in execute_graph
    sess.run(minimize, {data: inp, target: out})
    
      File "/usr/local/lib/python3.5/dist-
    packages/tensorflow/python/client/session.py", line 789, in run
        run_metadata_ptr)
    
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
        % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
    
    ValueError: Cannot feed value of shape (1000, 39) for Tensor 'Placeholder:0', which has shape '(1000, 89102, 39)'
    

    Any idea what is causing this?

    1 Answer
  •   Miriam Farber  · answered 7 years ago

    As explained here, the dynamic_rnn function accepts batch inputs of shape

    [batch_size, truncated_backprop_length, input_size]

    In the link you provided, the placeholder is defined with shape

    data = tf.placeholder(tf.float32, [None, 20,1]) 
    

    so truncated_backprop_length=20 and input_size=1. Accordingly, the data fed in is a 3D array of the form:

    [
     array([[0],[0],[1],[0],[0],[1],[0],[1],[1],[0],[0],[0],[1],[1],[1],[1],[1],[1],[0],[0]]), 
     array([[1],[1],[0],[0],[0],[0],[1],[1],[1],[1],[1],[0],[0],[1],[0],[0],[0],[1],[0],[1]]), 
     .....
    ]
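
As a minimal sketch (using random binary data as a stand-in, since the guide's actual dataset is not reproduced here), input of this shape can be built with numpy:

```python
import numpy as np

# Build a 3D input array of shape [num_samples, truncated_backprop_length=20,
# input_size=1], matching the placeholder [None, 20, 1] above.
# num_samples = 8 is an arbitrary illustrative value.
num_samples = 8
train_input = np.random.randint(0, 2, size=(num_samples, 20, 1))

print(train_input.shape)  # (8, 20, 1)
```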
    

    In your case, train_input is a 2D array rather than a 3D array, so you need to transform it into a 3D array. To do that, decide which values you want to use for truncated_backprop_length and input_size, then reshape train_input and define data accordingly.
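
Which factorization is appropriate depends on whether the 39 columns are really a temporal sequence or just independent features. A sketch of the two natural choices, using random data as a stand-in for the real 89102*39 input:

```python
import numpy as np

# Hypothetical stand-in for the real data: 89102 rows, 39 features.
train_input = np.random.rand(89102, 39)

# Option A: treat each feature as one timestep of a scalar sequence
# (truncated_backprop_length=39, input_size=1).
option_a = np.reshape(train_input, (len(train_input), 39, 1))

# Option B: treat each row as a single timestep with 39 features
# (truncated_backprop_length=1, input_size=39).
option_b = np.reshape(train_input, (len(train_input), 1, 39))

print(option_a.shape, option_b.shape)  # (89102, 39, 1) (89102, 1, 39)
```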

    For example, if you want truncated_backprop_length and input_size to be 39 and 1 respectively, you can do

    import numpy as np
    train_input=np.reshape(train_input,(len(train_input),39,1))
    data = tf.placeholder(tf.float32, [None, 39,1]) 

    A complete runnable example with these modifications:
    

    import tensorflow as tf
    import numpy as np
    num_hidden=5
    train_input=np.random.rand(89102,39)
    train_input=np.reshape(train_input,(len(train_input),39,1))
    train_output=np.random.rand(89102,3)
    
    data = tf.placeholder(tf.float32, [None, 39, 1])
    target = tf.placeholder(tf.float32, [None, 3])
    cell = tf.nn.rnn_cell.LSTMCell(num_hidden)
    
    val, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)
    val = tf.transpose(val, [1, 0, 2])
    last = tf.gather(val, int(val.get_shape()[0]) - 1)
    
    
    weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))
    bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
    
    prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
    
    cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))
    
    optimizer = tf.train.AdamOptimizer()
    minimize = optimizer.minimize(cross_entropy)
    
    mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
    error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
    
    
    init_op = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init_op)
    batch_size = 1000
    no_of_batches = int(len(train_input) / batch_size)
    epoch = 5000
    for i in range(epoch):
        ptr = 0
        for j in range(no_of_batches):
            inp, out = train_input[ptr:ptr + batch_size], train_output[ptr:ptr + batch_size]
            ptr += batch_size
            sess.run(minimize, {data: inp, target: out})
        print( "Epoch - ", str(i))
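
One detail worth noting (not part of the original answer): because no_of_batches is computed with integer division, the last samples that do not fill a complete batch are never fed during an epoch. A quick arithmetic check:

```python
# With 89102 samples and batch_size = 1000, int(89102 / 1000) = 89 batches,
# so 89102 - 89 * 1000 = 102 samples are skipped every epoch.
num_samples = 89102
batch_size = 1000
no_of_batches = num_samples // batch_size
leftover = num_samples - no_of_batches * batch_size

print(no_of_batches, leftover)  # 89 102
```

If those samples matter, shuffling the data each epoch or padding the last batch would let them contribute.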