
TensorFlow: analyzing an image gives "Cannot feed value of shape (1296000,) for Tensor 'Placeholder:0', which has shape '(?, 1296000)'"

  •  0
  •  aze  ·  asked 6 years ago

    I am building a multilayer perceptron network with TensorFlow, based on an example from Google.

    The goal is to train the network on images so that it learns to recognize specific patterns. The images I use are 1440*900, and each label is the coordinates pointing at the pattern (this may not be the best approach, but it is really just a test).
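
    For reference, a training pair is built roughly like this (a simplified sketch; it assumes single-channel screenshots loaded with PIL, and the file name here is just an example). Note that 1440*900 = 1296000, the size that shows up in the error below:

        import numpy as np
        from PIL import Image

        # Load one 1440x900 screenshot as grayscale and flatten it to 1-D.
        img = Image.open("screenshot.png").convert("L")
        sample = np.asarray(img, dtype=np.float32).flatten()  # shape: (1296000,)

        # Label: the (x, y) coordinates of the pattern to click.
        label = np.array([90.5, 312.5], dtype=np.float32)     # shape: (2,)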

    When I run the code, I get the following error:

    #python multilayer_perceptron.py
    
    WARNING:tensorflow:From multilayer_perceptron.py:119: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    
    Future major versions of TensorFlow will allow gradients to flow
    into the labels input on backprop by default.
    
    See @{tf.nn.softmax_cross_entropy_with_logits_v2}.
    
    2018-06-07 11:29:14.144489: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    current batch :  [195330 195330 195330 ... 155252 155252 155252] [ 90.5 312.5]
    Traceback (most recent call last):
      File "multilayer_perceptron.py", line 141, in <module>
        Y: batch_y})
      File "/usr/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
        run_metadata_ptr)
      File "/usr/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1111, in _run
        str(subfeed_t.get_shape())))
    ValueError: Cannot feed value of shape (1296000,) for Tensor 'Placeholder:0', which has shape '(?, 1296000)'
    

    Here is how I create the perceptron:

    # Network Parameters 
    n_hidden_1 = 10 #256 # 1st layer number of neurons
    n_hidden_2 = 10 #256 # 2nd layer number of neurons
    # each image has been flattened into a 1-D numpy array of length 1440*900 = 1296000
    n_input = INPUT_SIZE # flattened image size
    n_classes = 2 # coordinates of where to click
    
    # tf Graph input
    X = tf.placeholder("float", [None, n_input])   # n_input is the input size: 1440*900 = 1296000 pixels here
    Y = tf.placeholder("float", [None, n_classes]) # n_classes is the output size: the 2 click coordinates here
    
    # Store layers weight & bias
    weights = {
        'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
        'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
    }
    
    biases = {
        'b1': tf.Variable(tf.random_normal([n_hidden_1])),
        'b2': tf.Variable(tf.random_normal([n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_classes]))
    }
    
    
    # Create model
    def multilayer_perceptron(x):
        # Hidden fully connected layer with n_hidden_1 neurons
        layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
        # Hidden fully connected layer with n_hidden_2 neurons
        layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
        # Output fully connected layer with a neuron for each class
        out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
        return out_layer
    
    # Construct model
    logits = multilayer_perceptron(X)
    
    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        logits=logits, labels=Y))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op)
    # Initializing the variables
    init = tf.global_variables_initializer()
    
    with tf.Session() as sess:
        sess.run(init)
    
        # Training cycle
        for epoch in range(training_epochs):
            avg_cost = 0.
            total_batch = int(len(train_data)/batch_size)
            # Loop over all batches
            for i in range(total_batch):
                #batch_x, batch_y = mnist.train.next_batch(batch_size)
                batch_x = train_data[i] # numpy.array of 1-D image
                batch_y = train_labels[i] # numpy.array of coords of where to click
                print("current batch : ", batch_x, batch_y)
                # Run optimization op (backprop) and cost op (to get loss value)
                _, c = sess.run([train_op, loss_op], feed_dict={X: batch_x,
                                                                Y: batch_y})
                # Compute average loss
                avg_cost += c / total_batch
            # Display logs per epoch step
            if epoch % display_step == 0:
                print("Epoch:", '%04d' % (epoch+1), "cost={:.9f}".format(avg_cost))
        print("Optimization Finished!")
        # Test model
        pred = tf.nn.softmax(logits)  # Apply softmax to logits
        correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(Y, 1))
        # Calculate accuracy
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))
    

    I do not understand what this error means, nor how to fix it.

1 Answer  |  6 years ago

  •  1
  •  nessuno  ·  answered 6 years ago

    You are feeding a single element of shape (1296000,), which is a 1-D tensor. Your placeholder (like, in general, every TensorFlow input) expects a batch of elements.
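
    The difference is easy to check with plain numpy (a minimal sketch of the two shapes involved):

        import numpy as np

        x = np.zeros(1296000)              # shape (1296000,): one flattened image
        batch = np.expand_dims(x, axis=0)  # shape (1, 1296000): a batch of one image
        print(x.shape, batch.shape)        # (1296000,) (1, 1296000)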

    You therefore have to feed your network a (batch_size, X) tensor. If you want to feed a single element at a time, you can use numpy to reshape the tensor to the required shape (the same reasoning applies to the labels tensor):

            # requires `import numpy as np` at the top of the script
            _, c = sess.run([train_op, loss_op], feed_dict={
                             X: np.expand_dims(batch_x, axis=0),
                             Y: np.expand_dims(batch_y, axis=0)})
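
    An equivalent alternative (under the same assumptions, i.e. batch_x and batch_y are the 1-D numpy arrays from the question) is to add the batch dimension with reshape:

            _, c = sess.run([train_op, loss_op], feed_dict={
                             X: batch_x.reshape(1, -1),   # (1296000,) -> (1, 1296000)
                             Y: batch_y.reshape(1, -1)})  # (2,)       -> (1, 2)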