Based on your comment:

> my data is t-48, t-47, t-46, t-1 as the past data and

you probably don't need to use the TimeDistributed layer or the return_sequences=True argument of the LSTM layer. With that setting, the LSTM layer encodes the past input timeseries into a single vector of shape (50,):
# make sure the labels are in shape (num_samples, 12)
y = np.reshape(y, (-1, 12))
power_in = Input(shape=X.shape[1:])
power_lstm = LSTM(50, recurrent_dropout=0.4128,
dropout=0.412563,
kernel_initializer=power_lstm_init)(power_in)
main_out = Dense(12, kernel_initializer=power_lstm_init)(power_lstm)
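Putting this first variant together, here is a minimal runnable sketch with random stand-in data; `power_lstm_init` is assumed to be an ordinary initializer such as `'glorot_uniform'` (substitute your own), and the dummy `X`/`y` only mimic the shapes of your data:

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

# dummy data standing in for the real inputs: 48 past timesteps, 1 feature
X = np.random.rand(16, 48, 1).astype('float32')
y = np.random.rand(16, 12).astype('float32')

power_lstm_init = 'glorot_uniform'  # assumption; replace with your initializer

power_in = Input(shape=X.shape[1:])
power_lstm = LSTM(50, recurrent_dropout=0.4128,
                  dropout=0.412563,
                  kernel_initializer=power_lstm_init)(power_in)
main_out = Dense(12, kernel_initializer=power_lstm_init)(power_lstm)

model = Model(power_in, main_out)
model.compile(optimizer='adam', loss='mse')

# one prediction per sample, with all 12 future steps in the last axis
print(model.predict(X, verbose=0).shape)  # (16, 12)
```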
Alternatively, if you would like to use TimeDistributed, and considering that the output is itself a sequence, we can account for that in our model by using another LSTM layer before the Dense layer (plus a RepeatVector layer after the first LSTM layer to turn its output back into a timeseries):
# make sure the labels are in shape (num_samples, 12, 1)
y = np.reshape(y, (-1, 12, 1))
power_in = Input(shape=(48,1))
power_lstm = LSTM(50, recurrent_dropout=0.4128,
dropout=0.412563,
kernel_initializer=power_lstm_init)(power_in)
rep = RepeatVector(12)(power_lstm)
out_lstm = LSTM(32, return_sequences=True)(rep)
main_out = TimeDistributed(Dense(1))(out_lstm)
model = Model(power_in, main_out)
model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) (None, 48, 1) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 50) 10400
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 12, 50) 0
_________________________________________________________________
lstm_4 (LSTM) (None, 12, 32) 10624
_________________________________________________________________
time_distributed_1 (TimeDist (None, 12, 1) 33
=================================================================
Total params: 21,057
Trainable params: 21,057
Non-trainable params: 0
_________________________________________________________________
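The parameter counts in the summary can be sanity-checked by hand. Each LSTM layer has four gates, each with an input kernel, a recurrent kernel, and a bias, giving `4 * units * (input_dim + units + 1)` parameters; a Dense layer has `units * input_dim + units`:

```python
def lstm_params(units, input_dim):
    # 4 gates, each with kernel (input_dim x units),
    # recurrent kernel (units x units), and bias (units)
    return 4 * units * (input_dim + units + units // units)

def dense_params(units, input_dim):
    # kernel (input_dim x units) plus bias (units)
    return units * input_dim + units

print(lstm_params(50, 1))   # lstm_3: 10400
print(lstm_params(32, 50))  # lstm_4: 10624
print(dense_params(1, 32))  # time_distributed_1: 33
```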
Of course, in both of these models you may need to tune the hyperparameters (e.g. the number of LSTM layers, the size of the LSTM layers, and so on) to be able to compare them accurately and achieve good results.
Side note: actually, in your scenario, you don't need to use TimeDistributed at all, because (currently) the Dense layer is applied on the last axis. Therefore, TimeDistributed(Dense(...)) and Dense(...) are equivalent.
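You can see why with plain NumPy: a Dense kernel applied to a 3D input acts on the last axis and broadcasts over the time axis, which is exactly what applying the same Dense layer to each timestep (i.e. TimeDistributed) would do. The shapes below match the model above (12 timesteps, 32 features):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 12, 32))  # (batch, timesteps, features)
W = rng.normal(size=(32, 1))      # Dense kernel
b = rng.normal(size=(1,))         # Dense bias

# Dense on a 3D input: kernel applied to the last axis,
# broadcasting over the time axis
dense_out = x @ W + b  # shape (4, 12, 1)

# TimeDistributed(Dense): same kernel applied to every timestep separately
td_out = np.stack([x[:, t] @ W + b for t in range(x.shape[1])], axis=1)

print(np.allclose(dense_out, td_out))  # True
```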