I have a question. I run the same code on my local machine with the CPU and TensorFlow 1.14.0 and it works fine. However, when I run it on the GPU with TensorFlow 2.0, I get:
CancelledError: [_Derived_]RecvAsync is cancelled. [[{{node Adam/Adam/update/AssignSubVariableOp/_65}}]] [[Reshape_13/_62]] [Op:__inference_distributed_function_3722]
Function call stack: distributed_function
A reproducible example:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
import matplotlib.pyplot as plt
%matplotlib inline
batch_size = 32
num_obs = 100
num_cats = 1
n_steps = 10
n_numerical_feats = 18
cat_size = 12
embedding_size = 1
labels = np.random.random(size=(num_obs*n_steps,1)).reshape(-1,n_steps,1)
print(labels.shape)
num_data = np.random.random(size=(num_obs*n_steps,n_numerical_feats))
print(num_data.shape)
features = num_data.reshape(-1,n_steps, n_numerical_feats)
print(features.shape)
cat_data = np.random.randint(0,cat_size,num_obs*n_steps)
print(cat_data.shape)
idx = cat_data.reshape(-1, n_steps)
print(idx.shape)
numerical_inputs = keras.layers.Input(shape=(n_steps, n_numerical_feats), name='numerical_inputs', dtype='float32')
cat_input = keras.layers.Input(shape=(n_steps,), name='cat_input')
cat_embedded = keras.layers.Embedding(cat_size, embedding_size, embeddings_initializer='uniform')(cat_input)
merged = keras.layers.concatenate([numerical_inputs, cat_embedded])
lstm_out = keras.layers.LSTM(64, return_sequences=True)(merged)
Dense_layer1 = keras.layers.Dense(32, activation='relu', use_bias=True)(lstm_out)
Dense_layer2 = keras.layers.Dense(1, activation='linear', use_bias=True)(Dense_layer1)
model = keras.models.Model(inputs=[numerical_inputs, cat_input], outputs=Dense_layer2)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='mse',
              optimizer=optimizer,
              metrics=['mae', 'mse'])
EPOCHS = 5
history = model.fit([features, idx],
                    y=labels,
                    epochs=EPOCHS,
                    batch_size=batch_size)
Has anyone run into a similar issue? This looks like a bug, but I don't know how to work around it, and I would like to keep using TensorFlow 2.0.
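Not a definitive fix, but one mitigation that is commonly suggested for GPU-side RecvAsync/CancelledError failures in TF 2.0 is to enable GPU memory growth before the model is built (assumption: the error is related to TensorFlow's default behavior of allocating all GPU memory up front). A minimal sketch:

```python
import tensorflow as tf

# Enable memory growth so TensorFlow allocates GPU memory on demand
# instead of grabbing it all at startup. This must run before any
# GPU has been initialized (i.e., before building or running the model).
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

If this snippet is placed at the top of the script (right after importing TensorFlow), the rest of the code can stay unchanged.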